From: Paolo Bonzini
Subject: [Qemu-devel] [PATCH v3 00/16] aio: first part of aio_context_acquire/release pushdown
Date: Tue, 9 Feb 2016 12:45:58 +0100
This is the infrastructure part of the aio_context_acquire/release pushdown,
which in turn is the first step towards a real multiqueue block layer in
QEMU. The next step is to touch all the drivers and move calls to the
aio_context_acquire/release functions from aio-*.c to the drivers. This
will be done in a separate patch series, which I plan to post before soft
freeze.
While the number of inserted lines is large, more than half of them are in
documentation and formal models of the code, as well as in the implementation
of the new "lockcnt" synchronization primitive. The code is also very heavily
commented.
The first four patches are new; the issue they fix was found after posting
the previous version of the series. Everything else is more or less the same
as before.
Paolo
v1->v2: Update documentation [Stefan]
Remove g_usleep from testcase [Stefan]
v2->v3: Fix broken sentence [Eric]
Use osdep.h [Eric]
(v2->v3 diff after diffstat)
Paolo Bonzini (16):
aio: introduce aio_context_in_iothread
aio: do not really acquire/release the main AIO context
aio: introduce aio_poll_internal
aio: only call aio_poll_internal from iothread
iothread: release AioContext around aio_poll
qemu-thread: introduce QemuRecMutex
aio: convert from RFifoLock to QemuRecMutex
aio: rename bh_lock to list_lock
qemu-thread: introduce QemuLockCnt
aio: make ctx->list_lock a QemuLockCnt, subsuming ctx->walking_bh
qemu-thread: optimize QemuLockCnt with futexes on Linux
aio: tweak walking in dispatch phase
aio-posix: remove walking_handlers, protecting AioHandler list with list_lock
aio-win32: remove walking_handlers, protecting AioHandler list with list_lock
aio: document locking
aio: push aio_context_acquire/release down to dispatching
aio-posix.c | 86 +++++----
aio-win32.c | 106 ++++++-----
async.c | 278 ++++++++++++++++++++++++----
block/io.c | 14 +-
docs/aio_poll_drain.promela | 210 +++++++++++++++++++++
docs/aio_poll_drain_bug.promela | 158 ++++++++++++++++
docs/aio_poll_sync_io.promela | 88 +++++++++
docs/lockcnt.txt | 342 ++++++++++++++++++++++++++++++++++
docs/multiple-iothreads.txt | 63 ++++---
include/block/aio.h | 69 ++++---
include/qemu/futex.h | 36 ++++
include/qemu/rfifolock.h | 54 ------
include/qemu/thread-posix.h | 6 +
include/qemu/thread-win32.h | 10 +
include/qemu/thread.h | 23 +++
iothread.c | 20 +-
stubs/iothread-lock.c | 5 +
tests/.gitignore | 1 -
tests/Makefile | 2 -
tests/test-aio.c | 22 ++-
tests/test-rfifolock.c | 91 ---------
trace-events | 10 +
util/Makefile.objs | 2 +-
util/lockcnt.c | 395 ++++++++++++++++++++++++++++++++++++++++
util/qemu-thread-posix.c | 38 ++--
util/qemu-thread-win32.c | 25 +++
util/rfifolock.c | 78 --------
27 files changed, 1782 insertions(+), 450 deletions(-)
create mode 100644 docs/aio_poll_drain.promela
create mode 100644 docs/aio_poll_drain_bug.promela
create mode 100644 docs/aio_poll_sync_io.promela
create mode 100644 docs/lockcnt.txt
create mode 100644 include/qemu/futex.h
delete mode 100644 include/qemu/rfifolock.h
delete mode 100644 tests/test-rfifolock.c
create mode 100644 util/lockcnt.c
delete mode 100644 util/rfifolock.c
--
2.5.0
v2->v3:
diff --git a/async.c b/async.c
index 9eab833..03a8e69 100644
--- a/async.c
+++ b/async.c
@@ -322,11 +322,10 @@ void aio_notify_accept(AioContext *ctx)
* only, this only works when the calling thread holds the big QEMU lock.
*
* Because aio_poll is used in a loop, spurious wakeups are okay.
- * Therefore, the I/O thread calls qemu_event_set very liberally
- * (it helps that qemu_event_set is cheap on an already-set event).
- * generally used in a loop, it's okay to have spurious wakeups.
- * Similarly it is okay to return true when no progress was made
- * (as long as this doesn't happen forever, or you get livelock).
+ * Therefore, the I/O thread calls qemu_event_set very liberally;
+ * it helps that qemu_event_set is cheap on an already-set event.
+ * Similarly it is okay to return true when no progress was made,
+ * as long as this doesn't happen forever (or you get livelock).
*
* The important thing is that you need to report progress from
* aio_poll(ctx, false) correctly. This is complicated and the
diff --git a/util/lockcnt.c b/util/lockcnt.c
index 56eb29e..71e8f8f 100644
--- a/util/lockcnt.c
+++ b/util/lockcnt.c
@@ -6,16 +6,7 @@
* Author:
* Paolo Bonzini <address@hidden>
*/
-#include <stdlib.h>
-#include <stdio.h>
-#include <errno.h>
-#include <time.h>
-#include <signal.h>
-#include <stdint.h>
-#include <string.h>
-#include <limits.h>
-#include <unistd.h>
-#include <sys/time.h>
+#include "qemu/osdep.h"
#include "qemu/thread.h"
#include "qemu/atomic.h"
#include "trace.h"