qemu-devel

Re: [Qemu-devel] [RFC v9 00/27] virtio: virtio-blk data plane


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC v9 00/27] virtio: virtio-blk data plane
Date: Wed, 18 Jul 2012 18:43:23 +0300

On Wed, Jul 18, 2012 at 04:07:27PM +0100, Stefan Hajnoczi wrote:
> This series implements a dedicated thread for virtio-blk processing using
> Linux AIO for raw image files only.  It is based on qemu-kvm.git a0bc8c3
> and is somewhat old, but I wanted to share it on the list since it has
> been mentioned on mailing lists and IRC recently.
> 
> These patches can be used for benchmarking and discussion about how to improve
> block performance.  Paolo Bonzini has also worked in this area and might want
> to share his patches.
> 
> The basic approach is:
> 1. Each virtio-blk device has a thread dedicated to handling ioeventfd
>    signalling when the guest kicks the virtqueue.
> 2. Requests are processed without going through the QEMU block layer using
>    Linux AIO directly.
> 3. Completion interrupts are injected via ioctl from the dedicated thread.
> 
> The series also contains request merging as a bdrv_aio_multiwrite()
> equivalent.  It was included only to get a comparison against the QEMU
> block layer, and I would drop it for other types of analysis.
> 
> The effect of this series is that O_DIRECT Linux AIO on raw files can bypass
> the QEMU global mutex and block layer.  This means higher performance.

Do you have any numbers at all?

> A cleaned-up version of this approach could be added to QEMU as a raw O_DIRECT
> Linux AIO fast path.  Image file formats, protocols, and other block layer
> features are not supported by virtio-blk-data-plane.
> 
> Git repo:
> http://repo.or.cz/w/qemu-kvm/stefanha.git/shortlog/refs/heads/virtio-blk-data-plane
> 
> Stefan Hajnoczi (27):
>   virtio-blk: Remove virtqueue request handling code
>   virtio-blk: Set up host notifier for data plane
>   virtio-blk: Data plane thread event loop
>   virtio-blk: Map vring
>   virtio-blk: Do cheapest possible memory mapping
>   virtio-blk: Take PCI memory range into account
>   virtio-blk: Put dataplane code into its own directory
>   virtio-blk: Read requests from the vring
>   virtio-blk: Add Linux AIO queue
>   virtio-blk: Stop data plane thread cleanly
>   virtio-blk: Indirect vring and flush support
>   virtio-blk: Add workaround for BUG_ON() dependency in virtio_ring.h
>   virtio-blk: Increase max requests for indirect vring
>   virtio-blk: Use pthreads instead of qemu-thread
>   notifier: Add a function to set the notifier
>   virtio-blk: Kick data plane thread using event notifier set
>   virtio-blk: Use guest notifier to raise interrupts
>   virtio-blk: Call ioctl() directly instead of irqfd
>   virtio-blk: Disable guest->host notifies while processing vring
>   virtio-blk: Add ioscheduler to detect mergable requests
>   virtio-blk: Add basic request merging
>   virtio-blk: Fix request merging
>   virtio-blk: Stub out SCSI commands
>   virtio-blk: fix incorrect length
>   msix: fix irqchip breakage in msix_try_notify_from_thread()
>   msix: use upstream kvm_irqchip_set_irq()
>   virtio-blk: add EVENT_IDX support to dataplane
> 
>  event_notifier.c          |    7 +
>  event_notifier.h          |    1 +
>  hw/dataplane/event-poll.h |  116 +++++++
>  hw/dataplane/ioq.h        |  128 ++++++++
>  hw/dataplane/iosched.h    |   97 ++++++
>  hw/dataplane/vring.h      |  334 ++++++++++++++++++++
>  hw/msix.c                 |   15 +
>  hw/msix.h                 |    1 +
>  hw/virtio-blk.c           |  753 +++++++++++++++++++++------------------
>  hw/virtio-pci.c           |    8 +
>  hw/virtio.c               |    9 +
>  hw/virtio.h               |    3 +
>  12 files changed, 1074 insertions(+), 398 deletions(-)
>  create mode 100644 hw/dataplane/event-poll.h
>  create mode 100644 hw/dataplane/ioq.h
>  create mode 100644 hw/dataplane/iosched.h
>  create mode 100644 hw/dataplane/vring.h
> 
> -- 
> 1.7.10.4


