qemu-devel
From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v3 1/2] virtio-net: enable configurable tx queue size
Date: Tue, 27 Jun 2017 00:21:47 +0300

On Mon, Jun 26, 2017 at 06:34:25PM +0800, Wei Wang wrote:
> On 06/26/2017 04:05 PM, Jason Wang wrote:
> > 
> > 
> > On 06/26/2017 12:55, Wei Wang wrote:
> > > On 06/26/2017 11:18 AM, Jason Wang wrote:
> > > > 
> > > > On 06/23/2017 10:32, Wei Wang wrote:
> > > > > This patch enables the virtio-net tx queue size to be configurable
> > > > > between 256 (the default queue size) and 1024 by the user when the
> > > > > vhost-user backend is used.
> > > > > 
> > > > > Currently, the maximum tx queue size for other backends is 512 due
> > > > > to the following limitations:
> > > > > - QEMU backend: the QEMU backend implementation in some cases may
> > > > > send 1024+1 iovs to writev.
> > > > > - Vhost_net backend: the guest may send a vring_desc whose memory
> > > > > crosses a MemoryRegion boundary, thereby generating more than 1024
> > > > > iovs after translation from guest-physical addresses in the backend.
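
A minimal sketch, not part of the patch, of the limitation just described. The gpa_region list and iovs_for_desc() helper below are hypothetical stand-ins for QEMU's MemoryRegion lookup; they only show why one descriptor that straddles a region boundary costs an extra iovec after guest-physical-address translation.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical, flattened stand-in for QEMU's MemoryRegion layout. */
struct gpa_region {
    uint64_t start;
    uint64_t end;                     /* exclusive */
};

/* Count the host iovecs one descriptor needs after GPA translation. */
static size_t iovs_for_desc(uint64_t gpa, uint64_t len,
                            const struct gpa_region *regs, size_t nregs)
{
    size_t iovs = 0;

    while (len > 0) {
        uint64_t chunk = len;

        for (size_t i = 0; i < nregs; i++) {
            if (gpa >= regs[i].start && gpa < regs[i].end) {
                uint64_t left = regs[i].end - gpa;
                chunk = len < left ? len : left;
                break;
            }
        }
        iovs++;                       /* one iovec per contiguous chunk */
        gpa += chunk;
        len -= chunk;
    }
    return iovs;                      /* > 1 if the descriptor crosses a region */
}

With regions [0, 4K) and [4K, 8K), iovs_for_desc(4096 - 100, 200, regs, 2) returns 2. In a chain of 1024 descriptors, a single boundary crossing therefore already needs 1024 + 1 iovs, which is the overflow the commit message warns about for the QEMU and vhost_net backends.
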
> > > > > 
> > > > > Signed-off-by: Wei Wang <address@hidden>
> > > > > ---
> > > > >   hw/net/virtio-net.c            | 45 +++++++++++++++++++++++++++++++++---------
> > > > >   include/hw/virtio/virtio-net.h |  1 +
> > > > >   2 files changed, 37 insertions(+), 9 deletions(-)
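
Not visible in the quoted hunks: net_conf.tx_queue_size presumably also gets exposed as a device property elsewhere in this patch. A hedged sketch of what such a declaration would look like with QEMU's property macros (exact name, default, and placement in the property array are assumptions here):

    /* Sketch: expose the new field as a user-settable device property. */
    DEFINE_PROP_UINT16("tx_queue_size", VirtIONet, net_conf.tx_queue_size,
                       VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE),

With a property along those lines, a vhost-user setup would request the larger ring with something like -device virtio-net-pci,tx_queue_size=1024, while other backends stay within the 512 limit described above.
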
> > > > > 
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 91eddaf..d13ca60 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -34,8 +34,11 @@
> > > > >     /* previously fixed value */
> > > > >   #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> > > > > +
> > > > > +    /*
> > > > > +     * Currently, backends other than vhost-user don't support 1024 queue
> > > > > +     * size.
> > > > > +     */
> > > > > +    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
> > > > > +        nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
> > > > > +        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
> > > > 
> > > > Do we really want to assume that all vhost-user backends support a
> > > > 1024 queue size?
> > > > 
> > > 
> > >  Do you know any vhost-user backend implementation that doesn't support
> > > 1024 tx queue size?
> > > 
> > > 
> > 
> > I don't, but the issue is that vhost-user is an open protocol; there could
> > even be closed-source implementations of it. So we can't audit all
> > kinds of backends.
> > 
> 
> This is the discussion we left open before. Here is my thought:
> 
> Until now, nobody has officially been using a 1024 tx queue size, including
> any unknown vhost-user backend implementation. So applying this patch won't
> affect any already deployed products.
> 
> If, someday, people whose products are based on such an unknown vhost-user
> implementation apply this patch and try a 1024 tx queue size, and find that
> 1024 doesn't work for them, they can simply set the value to 512 (which is
> still better than the 256 they were using) or apply the 2nd patch,
> "VIRTIO_F_MAX_CHAIN_SIZE", to use the 1024 queue size.
> 
> I don't see any big issue here. Do you see any?
> 
> @Michael, what is your opinion?

Since the default does not change, I think it's fine.
You need to both have a non-default value and be using
a backend that does not support the 1024 size.
Seems unlikely to me.


> 
> > > 
> > > > > +    }
> > > > > +
> > > > > +    for (i = 0; i < n->max_queues; i++) {
> > > > > +        virtio_net_add_queue(n, i);
> > > > > +    }
> > > > 
> > > > Any reason to move virtio_net_add_queue() here?
> > > > 
> > > 
> > > Please check the whole init sequence. It was moved here (after
> > > qemu_new_nic()) to make sure nc->peer is not NULL.
> > 
> > I don't get it here; can't you get the peer just from nic_conf.peers?
> > 
> 
> Correct, nic_conf.peers also works. Thanks.
> 
> Best,
> Wei
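
A minimal sketch of Jason's suggestion above: take the peer from nic_conf.peers, which is populated from -netdev before realize, so the backend type can be checked without moving virtio_net_add_queue() after qemu_new_nic(). The helper name and its placement are assumptions; the field accesses follow QEMU's NICConf/NICPeers definitions, and this would still need QEMU's headers to build.

/*
 * Hypothetical helper: pick the effective tx queue size from the
 * configured value and the backend type, using nic_conf.peers so it
 * can run before the NIC (and its queues) are created.
 */
static uint16_t virtio_net_effective_tx_queue_size(VirtIONet *n)
{
    NetClientState *peer = n->nic_conf.peers.ncs[0];

    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
        (!peer || peer->info->type != NET_CLIENT_DRIVER_VHOST_USER)) {
        /* Backends other than vhost-user are not known to handle 1024. */
        return VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
    }
    return n->net_conf.tx_queue_size;
}
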
> 
> 
> 


