Re: [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine
From: Laurent Vivier
Subject: Re: [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine
Date: Fri, 1 Apr 2016 14:05:22 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.1
On 01/04/2016 11:46, Fam Zheng wrote:
> Using the nested aio_poll() in coroutine is a bad idea. This patch
> replaces the aio_poll loop in bdrv_drain with a BH, if called in
> coroutine.
>
> For example, the bdrv_drain() in mirror.c can hang when a guest issued
> request is pending on it in qemu_co_mutex_lock().
>
> Mirror coroutine in this case has just finished a request, and the block
> job is about to complete. It calls bdrv_drain() which waits for the
> other coroutine to complete. The other coroutine is a scsi-disk request.
> The deadlock happens when the latter is in turn pending on the former to
> yield/terminate, in qemu_co_mutex_lock(). The state flow is as below
> (assuming a qcow2 image):
>
>   mirror coroutine               scsi-disk coroutine
>   -------------------------------------------------------------
>   do last write
>
>     qcow2:qemu_co_mutex_lock()
>     ...
>                                  scsi disk read
>
>                                    tracked request begin
>
>                                    qcow2:qemu_co_mutex_lock.enter
>
>     qcow2:qemu_co_mutex_unlock()
>
>   bdrv_drain
>     while (has tracked request)
>       aio_poll()
>
> In the scsi-disk coroutine, the qemu_co_mutex_lock() will never return
> because the mirror coroutine is blocked in the aio_poll(blocking=true).
>
> Reported-by: Laurent Vivier <address@hidden>
I've checked whether this fixes the problem I reported (on ppc64 and
x86_64), and it does.
Thanks Fam,
Laurent
> Suggested-by: Paolo Bonzini <address@hidden>
> Signed-off-by: Fam Zheng <address@hidden>
> ---
> block/io.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 40 insertions(+)
>
> diff --git a/block/io.c b/block/io.c
> index c4869b9..d0a4551 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -253,6 +253,42 @@ static void bdrv_drain_recurse(BlockDriverState *bs)
>      }
>  }
>  
> +typedef struct {
> +    Coroutine *co;
> +    BlockDriverState *bs;
> +    bool done;
> +} BdrvCoDrainData;
> +
> +static void bdrv_co_drain_bh_cb(void *opaque)
> +{
> +    BdrvCoDrainData *data = opaque;
> +    Coroutine *co = data->co;
> +
> +    bdrv_drain(data->bs);
> +    data->done = true;
> +    qemu_coroutine_enter(co, NULL);
> +}
> +
> +static void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
> +{
> +    QEMUBH *bh;
> +    BdrvCoDrainData data;
> +
> +    assert(qemu_in_coroutine());
> +    data = (BdrvCoDrainData) {
> +        .co = qemu_coroutine_self(),
> +        .bs = bs,
> +        .done = false,
> +    };
> +    bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_drain_bh_cb, &data);
> +    qemu_bh_schedule(bh);
> +
> +    do {
> +        qemu_coroutine_yield();
> +    } while (!data.done);
> +    qemu_bh_delete(bh);
> +}
> +
>  /*
>   * Wait for pending requests to complete on a single BlockDriverState subtree,
>   * and suspend block driver's internal I/O until next request arrives.
> @@ -269,6 +305,10 @@ void bdrv_drain(BlockDriverState *bs)
>      bool busy = true;
>  
>      bdrv_drain_recurse(bs);
> +    if (qemu_in_coroutine()) {
> +        bdrv_co_drain(bs);
> +        return;
> +    }
>      while (busy) {
>          /* Keep iterating */
>          bdrv_flush_io_queue(bs);
> 