From: Paolo Bonzini
Subject: [Qemu-block] [PATCH 05/10] aio-posix: split aio_dispatch_handlers out of aio_dispatch
Date: Wed, 21 Dec 2016 15:03:46 +0100
This simplifies the handling of dispatch_fds.
Signed-off-by: Paolo Bonzini <address@hidden>
---
aio-posix.c | 43 +++++++++++++++++++++++++------------------
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index 1585571..25198d9 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -367,31 +367,16 @@ bool aio_pending(AioContext *ctx)
return false;
}
-/*
- * Note that dispatch_fds == false has the side-effect of post-poning the
- * freeing of deleted handlers.
- */
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
+static bool aio_dispatch_handlers(AioContext *ctx)
{
- AioHandler *node = NULL;
+ AioHandler *node;
bool progress = false;
/*
- * If there are callbacks left that have been queued, we need to call them.
- * Do not call select in this case, because it is possible that the caller
- * does not need a complete flush (as is the case for aio_poll loops).
- */
- if (aio_bh_poll(ctx)) {
- progress = true;
- }
-
- /*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
- if (dispatch_fds) {
- node = QLIST_FIRST(&ctx->aio_handlers);
- }
+ node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents;
@@ -431,6 +416,28 @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
}
}
+ return progress;
+}
+
+/*
+ * Note that dispatch_fds == false has the side-effect of post-poning the
+ * freeing of deleted handlers.
+ */
+bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
+{
+ bool progress;
+
+ /*
+ * If there are callbacks left that have been queued, we need to call them.
+ * Do not call select in this case, because it is possible that the caller
+ * does not need a complete flush (as is the case for aio_poll loops).
+ */
+ progress = aio_bh_poll(ctx);
+
+ if (dispatch_fds) {
+ progress |= aio_dispatch_handlers(ctx);
+ }
+
/* Run our timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
--
2.9.3