
Re: [Qemu-devel] RFC: handling "backend too fast" in virtio-net


From: Luigi Rizzo
Subject: Re: [Qemu-devel] RFC: handling "backend too fast" in virtio-net
Date: Mon, 18 Feb 2013 02:12:13 -0800

On Mon, Feb 18, 2013 at 1:50 AM, Stefan Hajnoczi <address@hidden> wrote:
On Fri, Feb 15, 2013 at 11:24:29AM +0100, Stefan Hajnoczi wrote:
> On Thu, Feb 14, 2013 at 07:21:57PM +0100, Luigi Rizzo wrote:
>
> CCed Michael Tsirkin
>
> > virtio-style network devices (where the producer and consumer chase
> > each other through a shared memory block) can enter a bad operating
> > regime when the consumer is too fast.
> >
> > I am hitting this case right now when virtio is used on top of the
> > netmap/VALE backend that I posted a few weeks ago: what happens is that
> > the backend is so fast that the io thread keeps re-enabling
> > notifications every few packets.  This results, on my test machine,
> > in a throughput of 250-300 Kpps (and an extremely unstable one,
> > oscillating between 200 and 600 Kpps).
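
The regime described above falls out of the standard notification-suppression
loop on the consumer side.  A minimal sketch, with names modeled on QEMU's
virtio helpers (virtio_queue_set_notification(), virtqueue_pop()) rather than
the exact hw/virtio-net.c code:

    static void tx_drain(VirtQueue *vq)
    {
        VirtQueueElement elem;

        for (;;) {
            /* Batch mode: suppress guest->host kicks while we work. */
            virtio_queue_set_notification(vq, 0);
            while (virtqueue_pop(vq, &elem)) {
                /* hand the buffer to the backend ... */
            }
            /* Ring looks empty: re-arm notifications ... */
            virtio_queue_set_notification(vq, 1);
            /* ... and re-check, to close the race with the producer. */
            if (virtio_queue_empty(vq)) {
                break;
            }
        }
    }

When the backend outruns the guest, the pop loop empties after only a handful
of buffers, so the suppress/re-arm pair -- and with it a fresh guest->host
notification -- is paid every few packets.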
> >
> > I'd like to get some feedback on the following trivial patch to have
> > the thread keep spinning for a bounded amount of time when the producer
> > is slower than the consumer. This gives a relatively stable throughput
> > between 700 and 800 Kpps (we have something similar in our paravirtualized
> > e1000 driver, which is slightly faster at 900-1100 Kpps).
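
Schematically, the bounded-spin idea reads as follows (a sketch of the
approach, not the actual patch; the 50 us budget and the now_ns() and
flush_tx_ring() helpers, as well as the field names, are placeholders):

    static void virtio_net_tx_bh(void *opaque)
    {
        VirtIONet *n = opaque;
        VirtQueue *vq = n->tx_vq;               /* illustrative field name */
        int64_t deadline = now_ns() + 50000;    /* hypothetical 50 us budget */

        virtio_queue_set_notification(vq, 0);
        do {
            flush_tx_ring(n);                   /* drain whatever is queued */
        } while (now_ns() < deadline);          /* bounded spin lets a slower
                                                   producer catch up */
        virtio_queue_set_notification(vq, 1);   /* re-arm once per budget */
        if (!virtio_queue_empty(vq)) {
            qemu_bh_schedule(n->tx_bh);         /* missed some: run again */
        }
    }

The notification re-enable (and hence the next guest kick) is then paid once
per spin budget instead of once per small batch.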
>
> Did you experiment with the tx timer instead of the bh?  It seems that
> hw/virtio-net.c has two tx mitigation strategies - the bh approach that
> you've tweaked and a true timer.
>
> It seems you don't really want tx batching but you do want to avoid
> guest->host notifications?

> One more thing I forgot: virtio-net does not use ioeventfd by default.
> ioeventfd changes the cost of guest->host notifications because the
> notification becomes an eventfd signal inside the kernel, and kvm.ko
> then re-enters the guest.
>
> This means a guest->host notification becomes a light-weight exit and
> we don't return from ioctl(KVM_RUN).
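
The mechanism behind this is the KVM_IOEVENTFD ioctl: the virtqueue doorbell
address is bound to an eventfd that kvm.ko signals entirely in-kernel.  A
sketch against the raw KVM API (QEMU wraps this in its memory API; the PIO
address and queue index below are placeholders):

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>

    static int add_doorbell_eventfd(int vm_fd, uint64_t pio_addr,
                                    uint16_t queue_index)
    {
        int efd = eventfd(0, 0);
        struct kvm_ioeventfd ioefd = {
            .addr      = pio_addr,     /* VIRTIO_PCI_QUEUE_NOTIFY doorbell */
            .len       = 2,            /* legacy notify is a 16-bit write */
            .fd        = efd,
            .datamatch = queue_index,  /* only this queue's kicks */
            .flags     = KVM_IOEVENTFD_FLAG_PIO |
                         KVM_IOEVENTFD_FLAG_DATAMATCH,
        };

        /* Guest writes to pio_addr now just signal efd inside kvm.ko;
         * the vcpu never leaves ioctl(KVM_RUN) for userspace. */
        return ioctl(vm_fd, KVM_IOEVENTFD, &ioefd) < 0 ? -1 : efd;
    }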

> Perhaps -device virtio-net-pci,ioeventfd=on will give similar results
> to your patch?
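
For a concrete command line (the netdev id is made up):

    qemu-system-x86_64 ... \
        -netdev tap,id=net0 \
        -device virtio-net-pci,netdev=net0,ioeventfd=on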

Is ioeventfd the mechanism used by vhost-net?
If so, Giuseppe Lettieri (in Cc) has tried that with
a modified netmap backend and ran into the same
problem -- making the io thread (userspace or kernel)
spin a bit longer greatly improves throughput.

cheers
luigi


