
Re: [Qemu-devel] [PATCH v2 00/13] dataplane: use block layer


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v2 00/13] dataplane: use block layer
Date: Mon, 15 Jul 2013 19:05:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130625 Thunderbird/17.0.7

On 15/07/2013 16:42, Stefan Hajnoczi wrote:
> v2:
>  * Rebased onto qemu.git/master
>  * Added comment explaining how the dataplane thread is restarted
>    after draining [pbonzini]
> 
> This series adds image format, QMP 'transaction', and QMP 'block_resize'
> support to dataplane.  This is done by replacing custom Linux AIO code
> with QEMU block layer APIs.
> 
> In order for block layer APIs to be safe, bdrv_drain_all() is modified
> to be aware of device emulation threads (like dataplane).
> BlockDevOps->drain_threads_cb() is introduced as a way to stop device
> emulation threads temporarily.  Device emulation threads may resume
> after this main loop iteration completes, which is enough to give code
> running under the QEMU global mutex a chance to use the block device
> temporarily.
> 
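A minimal sketch of how a device might wire up the new callback (this is
not the code from the series; virtio_blk_data_plane_drain() is an assumed
helper, while bdrv_set_dev_ops() is existing API):

    #include "block/block.h"
    #include "hw/virtio/virtio-blk.h"

    static void virtio_blk_drain_threads_cb(void *opaque)
    {
        VirtIOBlock *s = opaque;

        /* Stop the dataplane thread from picking up new requests;
         * in-flight requests complete before bdrv_drain_all() returns.
         * The thread may resume after the current main loop iteration. */
        virtio_blk_data_plane_drain(s->dataplane);   /* assumed helper */
    }

    static const BlockDevOps virtio_blk_dev_ops = {
        .drain_threads_cb = virtio_blk_drain_threads_cb,
    };

    /* at realize time: bdrv_set_dev_ops(s->bs, &virtio_blk_dev_ops, s); */
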
> bdrv_get_aio_context() is dropped in favor of a thread-local AioContext
> pointer.  This allows qemu_bh_new(), qemu_aio_wait(), and friends to
> automatically use the right AioContext.  When we introduce a BlockDriverState
> lock in the future, it will be possible to use block devices from multiple
> threads arbitrarily (right now we only allow it if bdrv_drain_all() is used).
> 
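For reference, the thread-local pointer could look roughly like this,
using the DECLARE_TLS/DEFINE_TLS/tls_var wrappers from qemu/tls.h (the
exact names in the series may differ):

    #include "block/aio.h"
    #include "qemu/tls.h"

    /* one AioContext pointer per thread, set when the thread starts
     * running its event loop */
    DECLARE_TLS(AioContext *, thread_aio_context);
    DEFINE_TLS(AioContext *, thread_aio_context);

    /* qemu_bh_new() and friends can then pick the caller's context
     * automatically: */
    QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
    {
        return aio_bh_new(tls_var(thread_aio_context), cb, opaque);
    }
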
> Three open issues:
> 
>  * Timers only work in the main loop, so I/O throttling is ignored and QED is
>    unsafe with x-data-plane=on.  I am tackling this in a separate series that
>    makes QEMUTimer thread-safe and can then re-enable I/O throttling.
> 
>  * Performance analysis is in progress.
> 
>  * Need to test with NBD, Gluster, and other non-file protocols.

Since this is for 1.6, I am still not sure this is the right approach
(a block device lock especially scares me a bit...).

Quoting a private mail from Kevin (I guess he doesn't mind), the
possible approaches include:

* Stopping the dataplane event loop while another user accesses the
BlockDriverState

* Make other users use Bottom Halves to do a context switch into the
dataplane event loop and do their operation from there

* Add locks to the BlockDriverState and just do the accesses from
different threads

and I believe a mix of these would be the way to go.

Stopping the dataplane event loop is not really necessary for
correctness; that only requires stopping the ioeventfd, and indeed this
series already includes patches for this.  So you definitely have my
"approval" (in quotes because you're the maintainer...) for patches 1-2-3.

However, I'm not sure it is feasible to have a single AioContext per
thread.  Consider a backup job whose target is not running in the same
AioContext as the source; for optimization it might use bdrv_aio_*
instead of coroutine-based I/O.

So, once you have stopped the ioeventfd to correctly support
bdrv_drain_all(), I would like to see code that makes other threads use
bottom halves to do a context switch.  Ping Fan's patches for a
thread-safe BH list are useful for this.
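Concretely, that could look something like this sketch (all names are
made up for illustration; scheduling a BH into another thread's
AioContext is exactly what needs the thread-safe BH list):

    #include <stdbool.h>
    #include "block/aio.h"
    #include "qemu/thread.h"

    typedef struct {
        void (*fn)(void *);     /* operation to run in the dataplane thread */
        void *fn_opaque;
        QemuMutex lock;
        QemuCond done_cond;
        bool done;
    } ContextSwitchOp;

    static void op_bh_cb(void *opaque)
    {
        ContextSwitchOp *op = opaque;

        op->fn(op->fn_opaque);              /* runs in the dataplane thread */

        qemu_mutex_lock(&op->lock);
        op->done = true;
        qemu_cond_signal(&op->done_cond);
        qemu_mutex_unlock(&op->lock);
    }

    /* Called from a thread that does NOT own @ctx. */
    static void run_in_aio_context(AioContext *ctx,
                                   void (*fn)(void *), void *opaque)
    {
        ContextSwitchOp op = {
            .fn = fn,
            .fn_opaque = opaque,
            .done = false,
        };
        QEMUBH *bh;

        qemu_mutex_init(&op.lock);
        qemu_cond_init(&op.done_cond);

        bh = aio_bh_new(ctx, op_bh_cb, &op);
        qemu_bh_schedule(bh);               /* needs the thread-safe BH list */

        qemu_mutex_lock(&op.lock);
        while (!op.done) {
            qemu_cond_wait(&op.done_cond, &op.lock);
        }
        qemu_mutex_unlock(&op.lock);

        qemu_bh_delete(bh);
        qemu_mutex_destroy(&op.lock);
        qemu_cond_destroy(&op.done_cond);
    }
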

I think there are two approaches to implementing synchronous operations:

* A lock, just like Kevin mentioned (though I would place it on the
AioContext, not the BlockDriverState).  Then you can call aio_wait under
the lock in bdrv_read and friends.
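As a sketch of that shape, following the synchronous wrappers in block.c
(aio_context_lock()/aio_context_unlock() are assumed helpers around a
QemuMutex stored in AioContext, not existing API):

    #include "block/aio.h"
    #include "block/block.h"
    #include "block/coroutine.h"

    #define NOT_DONE 0x7fffffff     /* same sentinel block.c uses */

    typedef struct {
        BlockDriverState *bs;
        int64_t sector_num;
        int nb_sectors;
        QEMUIOVector *qiov;
        int ret;
    } RwCo;

    static void coroutine_fn rw_co_entry(void *opaque)
    {
        RwCo *rwco = opaque;

        rwco->ret = bdrv_co_readv(rwco->bs, rwco->sector_num,
                                  rwco->nb_sectors, rwco->qiov);
    }

    static int bdrv_read_locked(AioContext *ctx, BlockDriverState *bs,
                                int64_t sector_num, int nb_sectors,
                                QEMUIOVector *qiov)
    {
        RwCo rwco = {
            .bs = bs,
            .sector_num = sector_num,
            .nb_sectors = nb_sectors,
            .qiov = qiov,
            .ret = NOT_DONE,
        };
        Coroutine *co = qemu_coroutine_create(rw_co_entry);

        aio_context_lock(ctx);              /* assumed helper */
        qemu_coroutine_enter(co, &rwco);
        while (rwco.ret == NOT_DONE) {
            aio_poll(ctx, true);            /* "aio_wait" under the lock */
        }
        aio_context_unlock(ctx);            /* assumed helper */
        return rwco.ret;
    }
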

* A "complementary" bottom half that is scheduled in the
qemu_aio_context.  This second bottom half terminates processing in the
dataplane thread and restarts the thread starting the synchronous
operation.  I'm not sure how easy it would be to write infrastructure
for this, but basically it'd mean adding a third path ("in coroutine
context, but in the wrong AioContext") to the current "if
(qemu_in_coroutine())" tests that block.c has.
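To make that concrete, the third path could bounce the request between
the two contexts with a pair of bottom halves, roughly like this (every
name is illustrative, thread_aio_context is the TLS variable from the
series, and scheduling a BH into a foreign context again assumes the
thread-safe BH list):

    #include "block/aio.h"
    #include "block/block.h"
    #include "block/coroutine.h"
    #include "qemu/tls.h"

    DECLARE_TLS(AioContext *, thread_aio_context);

    typedef struct {
        BlockDriverState *bs;
        int64_t sector_num;
        int nb_sectors;
        QEMUIOVector *qiov;
        Coroutine *co;          /* the waiting coroutine */
        AioContext *home_ctx;   /* the waiting coroutine's AioContext */
        QEMUBH *bh;
        int ret;
    } BounceOp;

    /* "complementary" BH, runs back in home_ctx: restart the waiter */
    static void bounce_back_bh(void *opaque)
    {
        BounceOp *op = opaque;

        qemu_bh_delete(op->bh);
        qemu_coroutine_enter(op->co, NULL);
    }

    /* runs in the BlockDriverState's own AioContext */
    static void coroutine_fn bounce_co(void *opaque)
    {
        BounceOp *op = opaque;

        op->ret = bdrv_co_readv(op->bs, op->sector_num,
                                op->nb_sectors, op->qiov);

        op->bh = aio_bh_new(op->home_ctx, bounce_back_bh, op);
        qemu_bh_schedule(op->bh);
    }

    static void enter_bounce_co_bh(void *opaque)
    {
        BounceOp *op = opaque;

        qemu_bh_delete(op->bh);
        qemu_coroutine_enter(qemu_coroutine_create(bounce_co), op);
    }

    /* third path: in coroutine context, but in the wrong AioContext */
    static int coroutine_fn bdrv_co_read_bounce(AioContext *bs_ctx,
                                                BounceOp *op)
    {
        op->co = qemu_coroutine_self();
        op->home_ctx = tls_var(thread_aio_context);

        op->bh = aio_bh_new(bs_ctx, enter_bounce_co_bh, op);
        qemu_bh_schedule(op->bh);
        qemu_coroutine_yield();     /* resumed by bounce_back_bh */
        return op->ret;
    }
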

Paolo


> Paolo Bonzini (2):
>   exec: do not use qemu/tls.h
>   qemu-thread: add TLS wrappers
> 
> Stefan Hajnoczi (11):
>   dataplane: sync virtio.c and vring.c virtqueue state
>   block: add BlockDevOps->drain_threads_cb()
>   virtio-blk: implement BlockDevOps->drain_threads_cb()
>   block: add thread_aio_context TLS variable
>   block: drop bdrv_get_aio_context()
>   main-loop: use thread-local AioContext
>   linux-aio: bind EventNotifier to current AioContext
>   block: disable I/O throttling outside main loop
>   dataplane: use block layer for I/O
>   dataplane: drop ioq Linux AIO request queue
>   block: drop raw_get_aio_fd()
> 
>  async.c                             |   2 +
>  block.c                             |  17 ++-
>  block/linux-aio.c                   |  19 ++-
>  block/raw-posix.c                   |  38 +----
>  block/raw-win32.c                   |   2 +-
>  configure                           |  21 +++
>  cpus.c                              |   2 +
>  exec.c                              |  10 +-
>  hw/block/dataplane/Makefile.objs    |   2 +-
>  hw/block/dataplane/ioq.c            | 117 ---------------
>  hw/block/dataplane/ioq.h            |  57 -------
>  hw/block/dataplane/virtio-blk.c     | 290 ++++++++++++------------------------
>  hw/block/virtio-blk.c               |  16 ++
>  hw/virtio/dataplane/vring.c         |   8 +-
>  include/block/aio.h                 |   4 +
>  include/block/block.h               |  17 ++-
>  include/block/block_int.h           |   7 -
>  include/hw/virtio/dataplane/vring.h |   2 +-
>  include/qemu/tls.h                  | 125 +++++++++++++---
>  include/qom/cpu.h                   |  14 +-
>  main-loop.c                         |  16 +-
>  tests/Makefile                      |   3 +
>  tests/test-tls.c                    |  87 +++++++++++
>  util/qemu-thread-win32.c            |  17 +++
>  24 files changed, 426 insertions(+), 467 deletions(-)
>  delete mode 100644 hw/block/dataplane/ioq.c
>  delete mode 100644 hw/block/dataplane/ioq.h
>  create mode 100644 tests/test-tls.c
> 



