qemu-devel
From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] qcow2: Restore total_sectors value in save_vmstate
Date: Thu, 24 Oct 2013 11:52:07 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 2013-10-23 at 19:03, Max Reitz wrote:
> On 2013-10-21 22:36, Eric Blake wrote:
> >On 10/20/2013 07:28 PM, Max Reitz wrote:
> >>Since df2a6f29a5, bdrv_co_do_writev increases the total_sectors value of
> >>a growable block devices on writes after the current end. This leads to
> >>the virtual disk apparently growing in qcow2_save_vmstate, which in turn
> >>affects the disk size captured by the internal snapshot taken directly
> >>afterwards through e.g. the HMP savevm command. Such a "grown" snapshot
> >>cannot be loaded after reopening the qcow2 image, since its disk size
> >>differs from the actual virtual disk size (writing a VM state does not
> >>actually increase the virtual disk size).
> >>
> >>Fix this by restoring total_sectors at the end of qcow2_save_vmstate.
> >>
> >>Signed-off-by: Max Reitz <address@hidden>
> >>---
> >>  block/qcow2.c | 5 +++++
> >>  1 file changed, 5 insertions(+)
> >>
> >>@@ -1946,6 +1947,10 @@ static int qcow2_save_vmstate(BlockDriverState *bs, QEMUIOVector *qiov,
> >>      bs->growable = 1;
> >>      ret = bdrv_pwritev(bs, qcow2_vm_state_offset(s) + pos, qiov);
> >>      bs->growable = growable;
> >>+    // bdrv_co_do_writev will have increased the total_sectors value to
> >>+    // include the VM state - the VM state is however not an actual part
> >>+    // of the block device, therefore, we need to restore the old value.
> >>+    bs->total_sectors = total_sectors;
> >It looks like // comments aren't forbidden, but also uncommon; I don't
> >know if /**/ would be better.  At any rate:
> 
> Ah, right, sorry, I forgot.

Thanks, fixed up the comment and applied to the block branch.

Kevin


