[Qemu-devel] [PATCH 05/16] mirror: use bottom half to re-enter coroutine

From: Paolo Bonzini
Date: Tue, 16 Feb 2016 18:56:17 +0100
mirror calls bdrv_drain from an AIO callback; more precisely, the
bdrv_drain happens far away from the AIO callback, in the coroutine that
the AIO callback enters.

This used to be okay because bdrv_drain more or less guessed when all
AIO callbacks were done; however, it will cause a deadlock once
bdrv_drain really checks that all AIO callbacks have completed. The
situation here is admittedly underdefined, and Stefan pointed out that
the same solution is used in many other places in the QEMU block layer,
so I think this workaround is acceptable.
Signed-off-by: Paolo Bonzini <address@hidden>
---
block/mirror.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index 2c0edfa..793c20c 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -71,6 +71,7 @@ typedef struct MirrorOp {
     QEMUIOVector qiov;
     int64_t sector_num;
     int nb_sectors;
+    QEMUBH *co_enter_bh;
 } MirrorOp;
 
 static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
@@ -86,6 +87,18 @@ static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
     }
 }
 
+static void mirror_bh_cb(void *opaque)
+{
+    MirrorOp *op = opaque;
+    MirrorBlockJob *s = op->s;
+
+    qemu_bh_delete(op->co_enter_bh);
+    g_free(op);
+    if (s->waiting_for_io) {
+        qemu_coroutine_enter(s->common.co, NULL);
+    }
+}
+
 static void mirror_iteration_done(MirrorOp *op, int ret)
 {
     MirrorBlockJob *s = op->s;
@@ -116,11 +129,13 @@ static void mirror_iteration_done(MirrorOp *op, int ret)
     }
 
     qemu_iovec_destroy(&op->qiov);
-    g_free(op);
 
-    if (s->waiting_for_io) {
-        qemu_coroutine_enter(s->common.co, NULL);
-    }
+    /* The I/O operation is not finished until the callback returns.
+     * If we call qemu_coroutine_enter here, there is the possibility
+     * of a deadlock when the coroutine calls bdrv_drained_begin.
+     */
+    op->co_enter_bh = qemu_bh_new(mirror_bh_cb, op);
+    qemu_bh_schedule(op->co_enter_bh);
 }
 
 static void mirror_write_complete(void *opaque, int ret)
--
2.5.0