From: Wei Wang
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size
Date: Tue, 13 Jun 2017 14:08:19 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
On 06/13/2017 11:55 AM, Jason Wang wrote:
> On 2017年06月13日 11:51, Wei Wang wrote:
>> On 06/13/2017 11:19 AM, Jason Wang wrote:
>>> On 2017年06月13日 11:10, Wei Wang wrote:
>>>> On 06/13/2017 04:43 AM, Michael S. Tsirkin wrote:
>>>>> On Mon, Jun 12, 2017 at 05:30:46PM +0800, Wei Wang wrote:
>>>>>> Ping for comments, thanks.
>>>>> This was only posted a week ago, might be a bit too short for some
>>>>> people.
>>>> OK, sorry for the push.
>>>>> A couple of weeks is more reasonable before you ping. Also, I sent a
>>>>> bunch of comments on Thu, 8 Jun 2017. You should probably address
>>>>> these.
>>>> I responded to the comments. The main question is that I'm not sure why
>>>> we need the vhost backend to support VIRTIO_F_MAX_CHAIN_SIZE. IMHO,
>>>> that should be a feature proposed to solve the possible issue caused by
>>>> the QEMU-implemented backend.
>>> The issue is what if there's a mismatch of max #sgs between qemu and
>>> vhost?
>> When the vhost backend is used, QEMU is not involved in the data path.
>> The vhost backend directly gets what is offered by the guest from the vq.
>> Why would there be a mismatch of max #sgs between QEMU and vhost, and
>> what is the QEMU side max #sgs used for? Thanks.
> You need to query the backend's max #sgs in this case at least, no? If
> not, how do you know the value is supported by the backend? Thanks
Here is my thought: the vhost backend already supports 1024 sgs, so I think
it might not be necessary to query the max sgs that the vhost backend
supports. In the setup phase, when QEMU detects that the backend is vhost,
it assumes 1024 max sgs is supported, instead of making an extra call to
query.

The advantage is that people who are using the vhost backend can upgrade to
the 1024 tx queue size by applying only the QEMU patches. Adding an extra
call to query the size would require them to patch their vhost backend
(like vhost-user), which is difficult for them.

Best,
Wei