From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCH v5 0/4] virtio: Use ioeventfd for virtqueue notify
Date: Thu, 6 Jan 2011 19:04:48 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Jan 06, 2011 at 04:41:50PM +0000, Stefan Hajnoczi wrote:
> Here are 4k sequential read results (cache=none) to check whether we
> see an ioeventfd performance regression with virtio-blk.
> 
> The idea is to use a small block size with an I/O pattern (sequential
> reads) that is cheap and executes quickly.  Therefore we're doing many
> iops and the cost of the virtqueue kick/notify is especially important.
> We're not trying to stress the disk, we're trying to make the
> difference between ioeventfd=on and ioeventfd=off apparent.
> 
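For context, here is a minimal illustrative sketch of the ioeventfd idea (it
is not QEMU's actual implementation): with ioeventfd=on, the guest's
virtqueue kick ends up signalling an eventfd that a separate handler
consumes, instead of being handled synchronously in the vcpu thread.  In
plain userspace C, that decoupling looks roughly like this:

    /*
     * Illustrative only: the "kick" becomes a cheap 8-byte write to an
     * eventfd, and a separate thread wakes up to process the virtqueue.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/eventfd.h>

    static void *handler(void *arg)
    {
        int fd = *(int *)arg;
        uint64_t count;

        /* Blocks until the eventfd is signalled; count accumulates kicks. */
        if (read(fd, &count, sizeof(count)) > 0) {
            printf("handler: %llu kick(s) received, process the virtqueue\n",
                   (unsigned long long)count);
        }
        return NULL;
    }

    int main(void)
    {
        int fd = eventfd(0, 0);
        uint64_t one = 1;
        pthread_t tid;

        pthread_create(&tid, NULL, handler, &fd);

        /* The "kick": signal the eventfd rather than handling it inline. */
        write(fd, &one, sizeof(one));

        pthread_join(tid, NULL);
        close(fd);
        return 0;
    }
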
> I did 2 runs for both ioeventfd=off and ioeventfd=on.  The results are
> similar: 1% and 2% degradation in MB/s or iops.  We'd have to do more
> runs to see if the degradation is statistically significant, but the
> percentage value is so low that I'm satisfied.
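(Using the single runs reproduced below, the degradation works out to
(20080 - 19619) / 20080, about 2.3% in bandwidth, and (5019 - 4904) / 5019,
also about 2.3% in iops.)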
> 
> Are you happy to merge virtio-ioeventfd v6 + your fixups?

I think so.  I would like to do a bit of testing of the whole thing with
migration (ideally with virtio-net and vhost too, even though we don't
enable them yet).

Hope to put it on my tree by next week.

> Full results below:
> 
> x86_64-softmmu/qemu-system-x86_64 -m 1024 \
>     -drive if=none,file=rhel6.img,cache=none,id=system \
>     -device virtio-blk-pci,drive=system \
>     -drive if=none,file=/dev/volumes/storage,cache=none,id=storage \
>     -device virtio-blk-pci,drive=storage \
>     -cpu kvm64,+x2apic -vnc :0
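(The ioeventfd on/off switch is not visible in the command line above; the
assumption is that the runs labelled ioeventfd=off and ioeventfd=on below
toggle the per-device property added by this series, along the lines of
-device virtio-blk-pci,drive=storage,ioeventfd=off.)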
> 
> fio jobfile:
> [global]
> ioengine=libaio
> buffered=0
> rw=read
> bs=4k
> iodepth=1
> runtime=2m
> 
> [job1]
> filename=/dev/vdb
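(The job targets /dev/vdb, the second virtio-blk disk from the command line
above, and would be run inside the guest, e.g. fio <jobfile>, with whatever
name the job file was saved under.)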
> 
> ioeventfd=off:
> job1: (groupid=0, jobs=1): err= 0: pid=2692
>   read : io=2,353MB, bw=20,080KB/s, iops=5,019, runt=120001msec
>     slat (usec): min=20, max=1,424, avg=34.86, stdev= 7.62
>     clat (usec): min=1, max=11,547, avg=162.02, stdev=42.95
>     bw (KB/s) : min=16600, max=20328, per=100.03%, avg=20084.25, stdev=241.88
>   cpu          : usr=1.14%, sys=13.40%, ctx=604918, majf=0, minf=29
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=602391/0, short=0/0
>      lat (usec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
>      lat (usec): 100=0.01%, 250=99.89%, 500=0.07%, 750=0.01%, 1000=0.02%
>      lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
> 
> Run status group 0 (all jobs):
>    READ: io=2,353MB, aggrb=20,079KB/s, minb=20,561KB/s, maxb=20,561KB/s, mint=120001msec, maxt=120001msec
> 
> Disk stats (read/write):
>   vdb: ios=601339/0, merge=0/0, ticks=112092/0, in_queue=111815, util=93.38%
> 
> ioeventfd=on:
> job1: (groupid=0, jobs=1): err= 0: pid=2692
>   read : io=2,299MB, bw=19,619KB/s, iops=4,904, runt=120001msec
>     slat (usec): min=9, max=2,257, avg=40.43, stdev=11.65
>     clat (usec): min=1, max=28,000, avg=161.12, stdev=61.46
>     bw (KB/s) : min=15720, max=19984, per=100.02%, avg=19623.26, stdev=290.76
>   cpu          : usr=1.49%, sys=19.34%, ctx=591398, majf=0, minf=29
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=588578/0, short=0/0
>      lat (usec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
>      lat (usec): 100=0.01%, 250=99.86%, 500=0.09%, 750=0.01%, 1000=0.02%
>      lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 50=0.01%
> 
> Run status group 0 (all jobs):
>    READ: io=2,299MB, aggrb=19,619KB/s, minb=20,089KB/s, maxb=20,089KB/s, mint=120001msec, maxt=120001msec
> 
> Disk stats (read/write):
>   vdb: ios=587592/0, merge=0/0, ticks=110373/0, in_queue=110125, util=91.97%
> 
> Stefan


