Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
Date: Mon, 14 Nov 2016 16:53:34 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Mon, Nov 14, 2016 at 03:51:18PM +0100, Christian Borntraeger wrote:
> On 11/09/2016 06:13 PM, Stefan Hajnoczi wrote:
> > Recent performance investigation work done by Karl Rister shows that the
> > guest->host notification takes around 20 us.  This is more than the
> > "overhead" of QEMU itself (e.g. block layer).
> > 
> > One way to avoid the costly exit is to use polling instead of notification.
> > The main drawback of polling is that it consumes CPU resources.  For
> > polling to improve performance, the host must have spare CPU cycles
> > available on physical CPUs that aren't used by the guest.
> > 
> > This is an experimental AioContext polling implementation.  It adds a
> > polling callback into the event loop.  Polling functions are implemented
> > for virtio-blk virtqueue guest->host kick and Linux AIO completion.
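> > 
> > In rough outline the new hook looks like this (a simplified sketch, not
> > the literal patch code; the exact signature is whatever patch 1 defines,
> > and AioPollFn here is illustrative):
> > 
> >     /* Returns true if progress was made (new work found by polling). */
> >     typedef bool AioPollFn(void *opaque);
> > 
> >     void aio_set_poll_handler(AioContext *ctx, AioPollFn *poll_fn,
> >                               void *opaque);
> > 
> > The virtio-blk handler checks the vring avail index and the linux-aio
> > handler checks the completion ring in shared memory, both without making
> > a syscall.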
> > 
> > The QEMU_AIO_POLL_MAX_NS environment variable sets the number of
> > nanoseconds to poll before entering the usual blocking poll(2) syscall.
> > Try setting this variable to the time from old request completion to new
> > virtqueue kick.
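> > 
> > The event loop change is roughly the following (sketch only; try_poll()
> > and run_poll_handlers() are stand-in names for whatever iterates the
> > registered handlers):
> > 
> >     /* Busy-wait on the poll handlers before blocking in poll(2). */
> >     static bool try_poll(AioContext *ctx, int64_t max_ns)
> >     {
> >         int64_t deadline = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + max_ns;
> > 
> >         do {
> >             if (run_poll_handlers(ctx)) {
> >                 return true;  /* progress made, poll(2) can be skipped */
> >             }
> >         } while (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) < deadline);
> > 
> >         return false;  /* deadline hit, fall back to blocking poll(2) */
> >     }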
> > 
> > By default no polling is done.  QEMU_AIO_POLL_MAX_NS must be set to get
> > any polling!
> > 
> > Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> > values.  If you don't find a good value we should double-check the tracing
> > data to see if this experimental code can be improved.
> > 
> > Stefan Hajnoczi (3):
> >   aio-posix: add aio_set_poll_handler()
> >   virtio: poll virtqueues for new buffers
> >   linux-aio: poll ring for completions
> > 
> >  aio-posix.c         | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  block/linux-aio.c   |  17 +++++++
> >  hw/virtio/virtio.c  |  19 ++++++++
> >  include/block/aio.h |  16 +++++++
> >  4 files changed, 185 insertions(+)
> 
> Hmm, I see all affected threads using more CPU power, but the performance
> numbers are somewhat inconclusive on s390.  I have no proper test setup
> (only a shared LPAR), but all numbers are in the same ballpark of
> 3-5 Gbyte/sec for 5 disks for 4k random reads with iodepth=8.
> 
> What I find interesting is that the guest still does a huge number of exits
> for the guest->host notifications.  I think if we could combine this with
> some notification suppression, then things could be even more interesting.

Great idea.  I'll add that to the next revision.
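Probably something along these lines (untested sketch; vq_poll_begin() and
vq_poll_end() are made-up names for the points where the event loop starts
and stops polling a virtqueue):

    /* Suppress guest->host kicks while we are polling the vring anyway. */
    static void vq_poll_begin(VirtQueue *vq)
    {
        virtio_queue_set_notification(vq, false);
    }

    static void vq_poll_end(VirtQueue *vq)
    {
        virtio_queue_set_notification(vq, true);
        /* must re-check the vring here for requests that raced with
         * notification being re-enabled */
    }

That way the guest skips the vmexit entirely for requests submitted while
the host is inside the polling window.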

Stefan
