Re: [Qemu-devel] Assertion failure taking external snapshot with virtio drive + iothread


From: Fam Zheng
Subject: Re: [Qemu-devel] Assertion failure taking external snapshot with virtio drive + iothread
Date: Tue, 21 Mar 2017 13:26:02 +0800
User-agent: Mutt/1.8.0 (2017-02-23)

On Fri, 03/17 09:55, Ed Swierk wrote:
> I'm running into the same problem taking an external snapshot with a
> virtio-blk drive that uses an iothread, so it's not specific to
> virtio-scsi. Run a Linux guest on qemu master:
> 
>   qemu-system-x86_64 -nographic -enable-kvm -monitor
> telnet:0.0.0.0:1234,server,nowait -m 1024 -object
> iothread,id=iothread1 -drive file=/x/drive.qcow2,if=none,id=drive0
> -device virtio-blk-pci,iothread=iothread1,drive=drive0
> 
> Then, in the monitor:
> 
>   snapshot_blkdev drive0 /x/snap1.qcow2
> 
> qemu bombs with:
> 
>   qemu-system-x86_64: /x/qemu/include/block/aio.h:457:
> aio_enable_external: Assertion `ctx->external_disable_cnt > 0' failed.
> 
> whereas without the iothread the assertion failure does not occur.
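
The assertion comes from the external_disable_cnt bookkeeping in
include/block/aio.h: aio_disable_external() increments the counter on an
AioContext, and aio_enable_external() decrements it after asserting that it
was positive. Below is a tiny standalone C model of that pairing (simplified,
not the actual QEMU code; the names are illustrative). It shows how an enable
on an AioContext that was never disabled trips this kind of assert, which is
presumably what happens here once the snapshot node ends up on a different
AioContext than the one that was drained:

    /* Standalone model of the disable/enable pairing; not QEMU code. */
    #include <assert.h>
    #include <stdio.h>

    typedef struct Ctx {
        const char *name;
        int external_disable_cnt;
    } Ctx;

    static void disable_external(Ctx *ctx)
    {
        ctx->external_disable_cnt++;            /* drain begins */
    }

    static void enable_external(Ctx *ctx)
    {
        assert(ctx->external_disable_cnt > 0);  /* fires if unbalanced */
        ctx->external_disable_cnt--;            /* drain ends */
    }

    int main(void)
    {
        Ctx iothread  = { "iothread1", 0 };
        Ctx main_loop = { "main-loop", 0 };

        disable_external(&iothread);   /* disabled on one context... */
        enable_external(&main_loop);   /* ...re-enabled on another: aborts */
        printf("not reached\n");
        return 0;
    }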


Can you test this one?

---


diff --git a/blockdev.c b/blockdev.c
index c5b2c2c..4c217d5 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1772,6 +1772,8 @@ static void external_snapshot_prepare(BlkActionState *common,
         return;
     }
 
+    bdrv_set_aio_context(state->new_bs, state->aio_context);
+
     /* This removes our old bs and adds the new bs. This is an operation that
      * can fail, so we need to do it in .prepare; undoing it for abort is
      * always possible. */
@@ -1789,8 +1791,6 @@ static void external_snapshot_commit(BlkActionState *common)
     ExternalSnapshotState *state =
                              DO_UPCAST(ExternalSnapshotState, common, common);
 
-    bdrv_set_aio_context(state->new_bs, state->aio_context);
-
     /* We don't need (or want) to use the transactional
      * bdrv_reopen_multiple() across all the entries at once, because we
      * don't want to abort all of them if one of them fails the reopen */
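
With this patch the AioContext switch is done in .prepare, before the new
node replaces the old one, rather than afterwards in .commit. Roughly, the
resulting order in external_snapshot_prepare() looks like the sketch below
(an approximation, not verbatim blockdev.c; the bdrv_append() call and error
handling stand in for the surrounding code):

    /* Sketch only: the new node joins the drive's AioContext first... */
    bdrv_set_aio_context(state->new_bs, state->aio_context);

    /* ...and only then replaces the old node, still in .prepare so that
     * the replacement can be undone in .abort if a later action fails. */
    bdrv_append(state->new_bs, state->old_bs, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }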


