Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
Date: Fri, 18 Nov 2016 15:20:43 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Fri, Nov 18, 2016 at 04:21:33PM +0200, Michael S. Tsirkin wrote:
> On Fri, Nov 18, 2016 at 10:58:47AM +0000, Stefan Hajnoczi wrote:
> > On Thu, Nov 17, 2016 at 07:38:45PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Nov 17, 2016 at 01:27:49PM +0000, Stefan Hajnoczi wrote:
> > > > On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> > > > > On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > > > > > Disabling notifications during virtqueue processing reduces the
> > > > > > number of exits.  The virtio-net device already uses
> > > > > > virtio_queue_set_notification() but virtio-blk and virtio-scsi
> > > > > > do not.
> > > > > > 
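
For readers following along, the technique being described is the usual virtqueue
processing loop that suppresses guest notifications while the ring is drained.
A minimal sketch, assuming QEMU's virtio_queue_set_notification(),
virtio_queue_empty() and virtqueue_pop() helpers; the request handling is a
placeholder and this is not the actual patch:

static void handle_vq_sketch(VirtQueue *vq)
{
    VirtQueueElement *elem;

    do {
        /* Suppress further guest->host notifications while draining. */
        virtio_queue_set_notification(vq, 0);

        while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
            /* Placeholder: a real device would process the request and
             * eventually push the used element back with virtqueue_push(). */
            g_free(elem);
        }

        /* Re-enable notifications, then re-check for requests that were
         * added after the last pop but before notifications came back on. */
        virtio_queue_set_notification(vq, 1);
    } while (!virtio_queue_empty(vq));
}
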
> > > > > > The following benchmark shows a 15% reduction in virtio-blk-pci
> > > > > > MMIO exits:
> > > > > > 
> > > > > >   (host)$ qemu-system-x86_64 \
> > > > > >               -enable-kvm -m 1024 -cpu host \
> > > > > >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> > > > > >                      cache=none,aio=native
> > > > > >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> > > > > >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > > > > > 
> > > > > > Number of kvm_fast_mmio events:
> > > > > > Unpatched: 685k
> > > > > > Patched: 592k (-15%, lower is better)
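
For reference, a hypothetical guest-side fio invocation matching the quoted
parameters is sketched below; the 4 KB block size and 30-second runtime come
from the results later in the thread, while the target device and ioengine are
assumptions:

  (guest)$ fio --name=randread --rw=randread --bs=4k --direct=1 \
               --numjobs=4 --iodepth=8 --ioengine=libaio \
               --runtime=30 --time_based --filename=/dev/vda
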
> > > > > 
> > > > > Any chance to see a gain in actual benchmark numbers?
> > > > > This is important to make sure we are not just
> > > > > shifting overhead around.
> > > > 
> > > > Good idea.  I reran this morning without any tracing and compared
> > > > against bare metal.
> > > > 
> > > > Total reads for a 30-second 4 KB random read benchmark with 4 processes
> > > > x iodepth=8:
> > > > 
> > > > Bare metal: 26440 MB
> > > > Unpatched:  19799 MB
> > > > Patched:    21252 MB
> > > > 
> > > > Patched vs Unpatched: +7% improvement
> > > > Patched vs Bare metal: 20% virtualization overhead
> > > > 
> > > > The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a
> > > > Samsung MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.
> > > > 
> > > > Seems like a worthwhile improvement to me.
> > > > 
> > > > Stefan
> > > 
> > > Sure. Pls remember to ping or re-post after the release.
> > 
> > How about a -next tree?
> 
> -next would make sense if we did Linus-style short merge
> cycles followed by a long stabilization period.
> 
> With the current QEMU style, -next seems counter-productive: we do freezes
> precisely so that people focus on stabilization, whereas with -next everyone
> except the maintainers just keeps going as usual, and the maintainers must
> handle double the load.
> 
> > I've found that useful for block, net, and tracing in the past.  Most of
> > the time it means patch authors can rest assured their patches will be
> > merged without further action.  It allows development of features that
> > depend on out-of-tree patches.
> > 
> > Stefan
> 
> Less work for authors, more work for me ... I'd rather distribute the load.

Okay.

Stefan


