From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCH v5 0/4] virtio: Use ioeventfd for virtqueue notify
Date: Mon, 13 Dec 2010 20:52:51 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Dec 13, 2010 at 05:57:28PM +0000, Stefan Hajnoczi wrote:
> On Mon, Dec 13, 2010 at 4:28 PM, Stefan Hajnoczi <address@hidden> wrote:
> > On Mon, Dec 13, 2010 at 4:12 PM, Michael S. Tsirkin <address@hidden> wrote:
> >> On Mon, Dec 13, 2010 at 03:27:06PM +0000, Stefan Hajnoczi wrote:
> >>> On Mon, Dec 13, 2010 at 1:36 PM, Michael S. Tsirkin <address@hidden> 
> >>> wrote:
> >>> > On Mon, Dec 13, 2010 at 03:35:38PM +0200, Michael S. Tsirkin wrote:
> >>> >> On Mon, Dec 13, 2010 at 01:11:27PM +0000, Stefan Hajnoczi wrote:
> >>> >> > Fresh results:
> >>> >> >
> >>> >> > 192.168.0.1 - host (runs netperf)
> >>> >> > 192.168.0.2 - guest (runs netserver)
> >>> >> >
> >>> >> > host$ src/netperf -H 192.168.0.2 -- -m 200
> >>> >> >
> >>> >> > ioeventfd=on
> >>> >> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.2
> >>> >> > (192.168.0.2) port 0 AF_INET
> >>> >> > Recv   Send    Send
> >>> >> > Socket Socket  Message  Elapsed
> >>> >> > Size   Size    Size     Time     Throughput
> >>> >> > bytes  bytes   bytes    secs.    10^6bits/sec
> >>> >> >  87380  16384    200    10.00    1759.25
> >>> >> >
> >>> >> > ioeventfd=off
> >>> >> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.2
> >>> >> > (192.168.0.2) port 0 AF_INET
> >>> >> > Recv   Send    Send
> >>> >> > Socket Socket  Message  Elapsed
> >>> >> > Size   Size    Size     Time     Throughput
> >>> >> > bytes  bytes   bytes    secs.    10^6bits/sec
> >>> >> >  87380  16384    200    10.00    1757.15
> >>> >> >
> >>> >> > The results vary by approximately +/- 3% between runs.
> >>> >> >
> >>> >> > Invocation:
> >>> >> > $ x86_64-softmmu/qemu-system-x86_64 -m 4096 -enable-kvm -netdev
> >>> >> > type=tap,id=net0,ifname=tap0,script=no,downscript=no -device
> >>> >> > virtio-net-pci,netdev=net0,ioeventfd=on|off -vnc :0 -drive
> >>> >> > if=virtio,cache=none,file=$HOME/rhel6-autobench-raw.img
> >>> >> >
> >>> >> > I am running qemu.git with v5 patches, based off
> >>> >> > 36888c6335422f07bbc50bf3443a39f24b90c7c6.
> >>> >> >
> >>> >> > Host:
> >>> >> > 1 Quad-Core AMD Opteron(tm) Processor 2350 @ 2 GHz
> >>> >> > 8 GB RAM
> >>> >> > RHEL 6 host
> >>> >> >
> >>> >> > Next I will try the patches on the latest qemu-kvm.git.
> >>> >> >
> >>> >> > Stefan
> >>> >>
> >>> >> One interesting thing is that I put virtio-net earlier on the
> >>> >> command line.
> >>> >
> >>> > Sorry, I meant I put it after the disk; you put it before.
> >>>
> >>> I can't find a measurable difference when swapping -drive and -netdev.
> >>
> >> One other concern I have is that we are apparently using
> >> ioeventfd for all VQs. E.g. for virtio-net we probably should not
> >> use it for the control VQ - it's a waste of resources.
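
For reference, the kernel mechanism behind these per-queue eventfds looks
roughly like the sketch below; the helper name and the pio_addr parameter
are illustrative, not the actual code from the series. Because KVM matches
on the 16-bit queue index the guest writes to the notify register, each
virtqueue gets its own eventfd, and a queue like the control VQ can simply
never be assigned one:

/* A minimal sketch, assuming a legacy virtio-pci device whose I/O BAR
 * starts at pio_addr (illustrative).  Datamatch on the queue index
 * means only this queue's kicks fire the eventfd. */
#include <linux/kvm.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define VIRTIO_PCI_QUEUE_NOTIFY 16  /* notify register offset in BAR0 */

static int assign_vq_ioeventfd(int vm_fd, uint64_t pio_addr, uint16_t vq_index)
{
    int efd = eventfd(0, 0);
    if (efd < 0)
        return -1;

    struct kvm_ioeventfd args = {
        .datamatch = vq_index,  /* fire only for this queue's kicks */
        .addr = pio_addr + VIRTIO_PCI_QUEUE_NOTIFY,
        .len = 2,               /* the guest does a 16-bit outw */
        .fd = efd,
        .flags = KVM_IOEVENTFD_FLAG_DATAMATCH | KVM_IOEVENTFD_FLAG_PIO,
    };
    if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0) {
        close(efd);
        return -1;
    }
    return efd;  /* poll this instead of taking a heavyweight vmexit */
}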
> >
> > One option is a per-device (block, net, etc.) bitmap that masks out
> > virtqueues.  Is that something you'd like to see?
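
A minimal sketch of what such a mask might look like; the type and field
names here are made up for illustration, not taken from the patches:

/* Hypothetical per-device mask: bit n set means virtqueue n stays on
 * the ordinary synchronous exit path instead of using ioeventfd. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t ioeventfd_disabled_vqs;  /* illustrative field name */
} VirtIODeviceMask;

static bool vq_uses_ioeventfd(const VirtIODeviceMask *dev, unsigned vq)
{
    return !(dev->ioeventfd_disabled_vqs & (1u << vq));
}

/* e.g. keep virtio-net's control VQ (index 2) off ioeventfd:
 *     dev->ioeventfd_disabled_vqs = 1u << 2;                  */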
> >
> > I'm tempted to mask out the RX vq too and see how that affects the
> > qemu-kvm.git-specific issue.
> 
> As expected, the rx virtqueue is involved in the degradation.  I
> enabled ioeventfd only for the TX virtqueue and got the same good
> results as userspace virtio-net.
> 
> When I enable ioeventfd only for the rx virtqueue, performance decreases
> as we've seen above.
> 
> Stefan

Interesting. In particular this implies something's wrong with the
queue: we should not normally be getting notifications from the rx
queue at all. Is it running low on buffers? Does it help to increase
the vq size? Any other explanation?
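
For background on why rx kicks should be rare: the device suppresses guest
notifications by setting VRING_USED_F_NO_NOTIFY in the used ring while it
still has buffers to process, and a guest following the spec checks that
flag before kicking. A minimal sketch of the guest-side check, with a
simplified stand-in struct for the real vring layout:

#include <stdbool.h>
#include <stdint.h>

#define VRING_USED_F_NO_NOTIFY 1  /* host: "don't kick me" */

struct vring_used_hdr {  /* simplified stand-in for the spec's layout */
    uint16_t flags;
    uint16_t idx;
    /* used ring entries follow */
};

static bool guest_should_kick(const struct vring_used_hdr *used)
{
    /* While the device keeps NO_NOTIFY set, the guest skips the kick,
     * so a healthy rx queue generates almost no notifications.
     * Frequent rx kicks suggest the device keeps running out of guest
     * buffers and re-enabling notification -- which is what increasing
     * the vq size would test. */
    return !(used->flags & VRING_USED_F_NO_NOTIFY);
}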

-- 
MST


