Subject: [Qemu-devel] [PULL 16/35] AioContext: export and use aio_dispatch
From: Stefan Hajnoczi
Date: Fri, 29 Aug 2014 17:29:44 +0100
From: Paolo Bonzini <address@hidden>
So far, aio_poll's scheme was dispatch/poll/dispatch, where
the first dispatch phase was used only in the GSource case in
order to avoid a blocking poll. Earlier patches changed it to
dispatch/prepare/poll/dispatch, where prepare is aio_compute_timeout.
By making aio_dispatch public, we can remove the first dispatch
phase altogether, so that both aio_poll and the GSource use the same
prepare/poll/dispatch scheme.
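As a rough illustration of the resulting scheme (a sketch only, not
QEMU's literal code; the fd bookkeeping and timeout plumbing are
elided), every loop iteration now has the same three phases:

    for (;;) {
        /* prepare: compute how long we may sleep, taking bottom
         * halves and timers into account */
        int64_t timeout = aio_compute_timeout(ctx);

        /* poll: wait for file descriptor events or timeout */
        qemu_poll_ns(pollfds, npfd, timeout);

        /* dispatch: run bottom halves, fd handlers and timers */
        aio_dispatch(ctx);
    }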
This patch breaks the invariant that aio_poll(..., true) will not block
the first time it returns false. This used to be fundamental for
qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}" but
no code in QEMU relies on this invariant anymore. The return value
of aio_poll() is now comparable with that of g_main_context_iteration.
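Illustratively (again a sketch, not code from this series), the two
loops below now have the same shape and the same meaning, "iterate
while progress is made", with no promise that a false return implies
the call never blocked:

    while (g_main_context_iteration(context, false)) {
        /* glib: dispatched at least one source */
    }

    while (aio_poll(ctx, false)) {
        /* AioContext: ran at least one handler, BH or timer */
    }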
Signed-off-by: Paolo Bonzini <address@hidden>
Signed-off-by: Stefan Hajnoczi <address@hidden>
---
aio-posix.c | 55 +++++++++++++----------------------------------------
aio-win32.c | 31 ++++--------------------------
async.c | 2 +-
include/block/aio.h | 6 ++++++
4 files changed, 24 insertions(+), 70 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index 798a3ff..0936b4f 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -119,12 +119,21 @@ bool aio_pending(AioContext *ctx)
return false;
}
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
{
AioHandler *node;
bool progress = false;
/*
+ * If there are callbacks left that have been queued, we need to call them.
+ * Do not call select in this case, because it is possible that the caller
+ * does not need a complete flush (as is the case for aio_poll loops).
+ */
+ if (aio_bh_poll(ctx)) {
+ progress = true;
+ }
+
+ /*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
@@ -184,22 +193,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
- * be re-evaluated before the next blocking poll(). This happens
- * in two cases:
- *
- * 1) when aio_poll is called with blocking == false
- *
- * 2) when we are called after poll(). If we are called before
- * poll(), bottom halves will not be re-evaluated and we need
- * aio_notify() if blocking == true.
- *
- * The first aio_dispatch() only does something when AioContext is
- * running as a GSource, and in that case aio_poll is used only
- * with blocking == false, so this optimization is already quite
- * effective. However, the code is ugly and should be restructured
- * to have a single aio_dispatch() call. To do this, we need to
- * reorganize aio_poll into a prepare/poll/dispatch model like
- * glib's.
+ * be re-evaluated before the next blocking poll(). This is
+ * already true when aio_poll is called with blocking == false;
+ * if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
@@ -207,26 +203,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
*/
aio_set_dispatching(ctx, !blocking);
- /*
- * If there are callbacks left that have been queued, we need to call them.
- * Do not call select in this case, because it is possible that the caller
- * does not need a complete flush (as is the case for aio_poll loops).
- */
- if (aio_bh_poll(ctx)) {
- blocking = false;
- progress = true;
- }
-
- /* Re-evaluate condition (1) above. */
- aio_set_dispatching(ctx, !blocking);
- if (aio_dispatch(ctx)) {
- progress = true;
- }
-
- if (progress && !blocking) {
- goto out;
- }
-
ctx->walking_handlers++;
g_array_set_size(ctx->pollfds, 0);
@@ -264,15 +240,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
- if (aio_bh_poll(ctx)) {
- progress = true;
- }
-
if (aio_dispatch(ctx)) {
progress = true;
}
-out:
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
diff --git a/aio-win32.c b/aio-win32.c
index 2ac38a8..1ec434a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -130,11 +130,12 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
return progress;
}
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
{
bool progress;
- progress = aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
+ progress = aio_bh_poll(ctx);
+ progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
@@ -149,23 +150,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
progress = false;
- /*
- * If there are callbacks left that have been queued, we need to call then.
- * Do not call select in this case, because it is possible that the caller
- * does not need a complete flush (as is the case for aio_poll loops).
- */
- if (aio_bh_poll(ctx)) {
- blocking = false;
- progress = true;
- }
-
- /* Dispatch any pending callbacks from the GSource. */
- progress |= aio_dispatch(ctx);
-
- if (progress && !blocking) {
- return true;
- }
-
ctx->walking_handlers++;
/* fill fd sets */
@@ -205,14 +189,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
events[ret - WAIT_OBJECT_0] = events[--count];
}
- if (blocking) {
- /* Run the timers a second time. We do this because otherwise aio_wait
- * will not note progress - and will stop a drain early - if we have
- * a timer that was not ready to run entering g_poll but is ready
- * after g_poll. This will only do anything if a timer has expired.
- */
- progress |= timerlistgroup_run_timers(&ctx->tlg);
- }
+ progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
diff --git a/async.c b/async.c
index 09e09c6..293a52a 100644
--- a/async.c
+++ b/async.c
@@ -213,7 +213,7 @@ aio_ctx_dispatch(GSource *source,
AioContext *ctx = (AioContext *) source;
assert(callback == NULL);
- aio_poll(ctx, false);
+ aio_dispatch(ctx);
return true;
}
diff --git a/include/block/aio.h b/include/block/aio.h
index 05b531c..7ba3e96 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,6 +211,12 @@ void qemu_bh_delete(QEMUBH *bh);
*/
bool aio_pending(AioContext *ctx);
+/* Dispatch any pending callbacks from the GSource attached to the AioContext.
+ *
+ * This is used internally in the implementation of the GSource.
+ */
+bool aio_dispatch(AioContext *ctx);
+
/* Progress in completing AIO work to occur. This can issue new pending
* aio as a result of executing I/O completion or bh callbacks.
*
--
1.9.3
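For context on the async.c hunk above: the AioContext is exposed to
glib as a GSource, and its callback table (shown roughly as it stands
in async.c; see the tree for the exact code) routes the dispatch phase
to aio_ctx_dispatch, which with this patch simply calls the newly
exported aio_dispatch():

    static GSourceFuncs aio_source_funcs = {
        aio_ctx_prepare,    /* computes the poll timeout   */
        aio_ctx_check,      /* reports pending work        */
        aio_ctx_dispatch,   /* now just calls aio_dispatch */
        aio_ctx_finalize
    };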