From: Charlie Shepherd
Subject: [Qemu-devel] RFC [PATCH] Make bdrv_flush synchronous only and update callers
Date: Thu, 18 Jul 2013 23:21:42 +0200

This patch makes bdrv_flush a synchronous function and updates any callers from
a coroutine context to use bdrv_co_flush instead.

The motivation for this patch comes from the GSoC Continuation-Passing C
(CPC) project. When coroutines were introduced, synchronous functions in
the block layer were converted to detect dynamically, by calling
qemu_in_coroutine(), whether they were running in a coroutine context. If
so, they invoked the coroutine implementation directly; if not, they
spawned a new coroutine and polled until the asynchronous counterpart
finished.
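
For reference, the dynamic pattern looks roughly like this (condensed
from the existing bdrv_flush; the surrounding declarations are
abbreviated):

    int bdrv_flush(BlockDriverState *bs)
    {
        Coroutine *co;
        RwCo rwco = { .bs = bs, .ret = NOT_DONE };

        if (qemu_in_coroutine()) {
            /* Fast path: already in coroutine context, run directly. */
            bdrv_flush_co_entry(&rwco);
        } else {
            /* Slow path: spawn a coroutine and poll until it is done. */
            co = qemu_coroutine_create(bdrv_flush_co_entry);
            qemu_coroutine_enter(co, &rwco);
            while (rwco.ret == NOT_DONE) {
                qemu_aio_wait();
            }
        }
        return rwco.ret;
    }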

However, this approach does not work with CPC, as the CPC translator
converts all functions annotated coroutine_fn to a different
(continuation-based) calling convention. This means that functions
annotated coroutine_fn cannot be called from a non-coroutine context.
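
To illustrate the calling convention problem, here is a schematic sketch
(this is not the actual CPC output; the translated signature and the
cpc_continuation type are simplifications):

    /* As written in QEMU: */
    int coroutine_fn bdrv_co_flush(BlockDriverState *bs);

    /* After CPC translation (schematically) the function takes an
     * explicit continuation and does not return a value directly, so
     * an ordinary call from non-coroutine C code is impossible: */
    void bdrv_co_flush(BlockDriverState *bs, struct cpc_continuation *cont);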

This patch is a Request For Comments on the approach of splitting these
"dynamic" functions into separate synchronous and asynchronous versions.
This is easy for bdrv_flush as it already has an asynchronous counterpart
- bdrv_co_flush. The only caller of bdrv_flush from a coroutine context
is mirror_run in block/mirror.c, which is updated to call bdrv_co_flush
directly; mirror_drain in the same file gains a coroutine_fn annotation,
as it calls qemu_coroutine_yield().

If this approach meets with approval, I will develop a patchset splitting
the other "dynamic" functions in the block layer. This will allow every
coroutine function to carry a coroutine_fn annotation that can be
statically checked (CPC can be used to verify the annotations).
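
The general shape of the split, for a hypothetical dynamic function
bdrv_foo, is sketched below (the names FooCo and bdrv_foo_co_entry are
illustrative; the plumbing mirrors what this patch does for bdrv_flush):

    /* Coroutine version: the real implementation; callable only from
     * coroutine context and statically annotated as such. */
    int coroutine_fn bdrv_co_foo(BlockDriverState *bs);

    /* Synchronous version: never called from coroutine context. It
     * spawns a coroutine running bdrv_foo_co_entry (a trampoline that
     * calls bdrv_co_foo and stores the result) and polls until done. */
    int bdrv_foo(BlockDriverState *bs)
    {
        Coroutine *co;
        FooCo foco = { .bs = bs, .ret = NOT_DONE };

        co = qemu_coroutine_create(bdrv_foo_co_entry);
        qemu_coroutine_enter(co, &foco);
        while (foco.ret == NOT_DONE) {
            qemu_aio_wait();
        }
        return foco.ret;
    }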

I have audited the other callers of bdrv_flush; all run in non-coroutine
context and are listed below:

block.c: bdrv_reopen_prepare, bdrv_close, bdrv_commit, bdrv_pwrite_sync
block/qcow2-cache.c: qcow2_cache_entry_flush, qcow2_cache_flush
block/qcow2-refcount.c: qcow2_update_snapshot_refcount
block/qcow2-snapshot.c: qcow2_write_snapshots
block/qcow2.c: qcow2_mark_dirty, qcow2_mark_clean
block/qed-check.c: qed_check_mark_clean
block/qed.c: bdrv_qed_open, bdrv_qed_close
blockdev.c: external_snapshot_prepare, do_drive_del
cpus.c: do_vm_stop
hw/block/nvme.c: nvme_clear_ctrl
qemu-io-cmds.c: flush_f
savevm.c: bdrv_fclose

---
 block.c        | 13 ++++---------
 block/mirror.c |  4 ++--
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/block.c b/block.c
index 6c493ad..00d71df 100644
--- a/block.c
+++ b/block.c
@@ -4110,15 +4110,10 @@ int bdrv_flush(BlockDriverState *bs)
         .ret = NOT_DONE,
     };
 
-    if (qemu_in_coroutine()) {
-        /* Fast-path if already in coroutine context */
-        bdrv_flush_co_entry(&rwco);
-    } else {
-        co = qemu_coroutine_create(bdrv_flush_co_entry);
-        qemu_coroutine_enter(co, &rwco);
-        while (rwco.ret == NOT_DONE) {
-            qemu_aio_wait();
-        }
+    co = qemu_coroutine_create(bdrv_flush_co_entry);
+    qemu_coroutine_enter(co, &rwco);
+    while (rwco.ret == NOT_DONE) {
+        qemu_aio_wait();
     }
 
     return rwco.ret;
diff --git a/block/mirror.c b/block/mirror.c
index bed4a7e..3d5da7e 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -282,7 +282,7 @@ static void mirror_free_init(MirrorBlockJob *s)
     }
 }
 
-static void mirror_drain(MirrorBlockJob *s)
+static void coroutine_fn mirror_drain(MirrorBlockJob *s)
 {
     while (s->in_flight > 0) {
         qemu_coroutine_yield();
@@ -390,7 +390,7 @@ static void coroutine_fn mirror_run(void *opaque)
         should_complete = false;
         if (s->in_flight == 0 && cnt == 0) {
             trace_mirror_before_flush(s);
-            ret = bdrv_flush(s->target);
+            ret = bdrv_co_flush(s->target);
             if (ret < 0) {
                 if (mirror_error_action(s, false, -ret) == BDRV_ACTION_REPORT) {
                     goto immediate_exit;
-- 
1.8.3.2



