From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH v6 09/12] block: Add "drained begin/end" for internal snapshot
Date: Thu, 22 Oct 2015 12:18:11 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 22.10.2015 at 08:32, Fam Zheng wrote:
> This ensures the atomicity of the transaction by avoiding processing of
> external requests such as those from ioeventfd.
> 
> state->bs is assigned right before bdrv_drained_begin(). Because it was
> previously used in .abort() as the flag for whether the snapshot needs
> to be deleted, we now need a separate flag - InternalSnapshotState.created.
> 
> Signed-off-by: Fam Zheng <address@hidden>
> ---
>  blockdev.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/blockdev.c b/blockdev.c
> index 52f44b2..adc0e69 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -1370,6 +1370,7 @@ typedef struct InternalSnapshotState {
>      BlockDriverState *bs;
>      AioContext *aio_context;
>      QEMUSnapshotInfo sn;
> +    bool created;
>  } InternalSnapshotState;
>  
>  static void internal_snapshot_prepare(BlkTransactionState *common,
> @@ -1414,6 +1415,9 @@ static void internal_snapshot_prepare(BlkTransactionState *common,
>      }
>      bs = blk_bs(blk);
>  
> +    state->bs = bs;
> +    bdrv_drained_begin(bs);
> +
>      if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_INTERNAL_SNAPSHOT, errp)) {
>          return;
>      }

More context:

    /* AioContext is released in .clean() */
    state->aio_context = blk_get_aio_context(blk);
    aio_context_acquire(state->aio_context);

    if (!blk_is_available(blk)) {
        error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
        return;
    }
    bs = blk_bs(blk);

    state->bs = bs;
    bdrv_drained_begin(bs);

If we error out because of !blk_is_available(blk), we will still call
bdrv_drained_end() in .clean even though bdrv_drained_begin() hasn't
been called yet.
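
One way to keep the two calls paired would be to leave the drained section
in .clean() only when state->bs has actually been set. A rough, untested
sketch (assuming .clean() otherwise keeps the state->aio_context handling
from this series):

    static void internal_snapshot_clean(BlkTransactionState *common)
    {
        InternalSnapshotState *state = DO_UPCAST(InternalSnapshotState,
                                                 common, common);

        if (state->aio_context) {
            /* Only call bdrv_drained_end() if .prepare() got far enough
             * to set state->bs and enter the drained section. */
            if (state->bs) {
                bdrv_drained_end(state->bs);
            }
            aio_context_release(state->aio_context);
        }
    }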

Kevin


