From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v4 0/7] Drop in_use from BlockDriverState and enable point-in-time snapshot exporting over NBD
Date: Sat, 23 Nov 2013 19:33:49 +0800
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, 11/22 17:58, Stefan Hajnoczi wrote:
> On Fri, Nov 22, 2013 at 01:24:47PM +0800, Fam Zheng wrote:
> > This series adds support for point-in-time snapshot exporting over NBD,
> > based on blockdev-backup (a variant of drive-backup that uses an existing
> > device as the target).
> > 
> > We get a thin point-in-time snapshot via the COW mechanism of drive-backup,
> > and export it through the built-in NBD server. The steps are as below (a
> > rough QMP transcript follows the lists):
> > 
> >  1. (SHELL) qemu-img create -f qcow2 BACKUP.qcow2 <source size here>
> > 
> >     (Alternatively we can use -o backing_file=RUNNING-VM.img to avoid
> >     providing the size explicitly ourselves, but that is risky because
> >     RUNNING-VM.img is used r/w by the guest. Whether or not the backing
> >     file is set in the image file doesn't matter, as we are going to
> >     override the backing hd in the next step.)
> > 
> >  2. (QMP) blockdev-add backing=source-drive file.driver=file
> >  file.filename=BACKUP.qcow2 id=target0 if=none driver=qcow2
> > 
> >     (where source-drive is the name of the running BlockDriverState for
> >     RUNNING-VM.img, e.g. ide0-hd0. This patch implements the "backing="
> >     option to override the backing_hd of the added drive)
> > 
> >  3. (QMP) blockdev-backup device=source-drive sync=none target=target0
> > 
> >     (this is the QMP command introduced by this series, which uses a
> >     named device as the target of drive-backup)
> > 
> >  4. (QMP) nbd-server-add device=target0
> > 
> > When image fleecing is done:
> > 
> >  1. (QMP) block-job-complete device=source-drive
> > 
> >  2. (HMP) drive_del target0
> > 
> >  3. (SHELL) rm BACKUP.qcow2
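> > 
> > Putting the QMP parts of the two lists together, the wire traffic might
> > look roughly like this (a sketch only: the exact blockdev-add argument
> > layout depends on the schema in this series, and starting the NBD server
> > itself is omitted):
> > 
> >  { "execute": "blockdev-add",
> >    "arguments": { "options": {
> >        "driver": "qcow2", "id": "target0", "backing": "source-drive",
> >        "file": { "driver": "file", "filename": "BACKUP.qcow2" } } } }
> >  { "execute": "blockdev-backup",
> >    "arguments": { "device": "source-drive", "sync": "none",
> >                   "target": "target0" } }
> >  { "execute": "nbd-server-add",
> >    "arguments": { "device": "target0" } }
> >  ... clients read the point-in-time snapshot from the NBD export ...
> >  { "execute": "block-job-complete",
> >    "arguments": { "device": "source-drive" } }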
> 
> Interesting implementation, it looks pretty good.  I'll need to review it a
> second time to track all the operation block/unblocks.  It wasn't immediately
> clear to me whether these patches will restrict something that used to work.
> 

Good question, I asked myself that too. :)

At some point in the middle of the series it should be theoretically the same
as before. But I did add some more blocker checks, e.g. NBD exporting is
blocked if there's a block job, but starting an NBD export doesn't add a
blocker. So we can nbd_server_add and then start a block job, but not in the
other order. This is kind of weird. I will take another look as well.
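
Concretely, a made-up transcript to illustrate the asymmetry (device names
taken from the example above):

  (QMP) nbd-server-add device=target0       -> OK
  (QMP) blockdev-backup ... target=target0  -> OK, the export added no blocker

but in the other order:

  (QMP) blockdev-backup ... target=target0  -> OK
  (QMP) nbd-server-add device=target0       -> error, blocked by the job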

I think another option is a compatibility matrix, to simplify the blocker
interface to bdrv_op_try_start(bs, op) + bdrv_op_end(bs, op): we get less
flexibility, but don't need a dynamically allocated blocker from the caller.
The advantage is that the logic is more centralized, so it is easier to
manage. A rough sketch of what I mean follows.
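
(The op names, the matrix contents, and the op_count field below are all made
up for illustration; the real table would have to enumerate every blocking
relationship.)

#include <assert.h>
#include <stdbool.h>

typedef enum {
    BDRV_OP_BLOCK_JOB,
    BDRV_OP_NBD_EXPORT,
    BDRV_OP_MAX
} BdrvOpType;

/* Stand-in for the real BlockDriverState; op_count is a hypothetical
 * per-op refcount that such a patch would have to add. */
typedef struct BlockDriverState {
    unsigned op_count[BDRV_OP_MAX];
} BlockDriverState;

/* compat[running][incoming]: may `incoming` start while `running` is
 * active?  Unlisted entries are zero-initialized, i.e. blocked, so an op
 * also blocks a second instance of itself by default. */
static const bool compat[BDRV_OP_MAX][BDRV_OP_MAX] = {
    /* a block job may start while an NBD export is active... */
    [BDRV_OP_NBD_EXPORT][BDRV_OP_BLOCK_JOB] = true,
    /* ...but compat[BDRV_OP_BLOCK_JOB][BDRV_OP_NBD_EXPORT] stays false,
     * matching the asymmetry described above */
};

bool bdrv_op_try_start(BlockDriverState *bs, BdrvOpType op)
{
    int running;

    for (running = 0; running < BDRV_OP_MAX; running++) {
        if (bs->op_count[running] && !compat[running][op]) {
            return false;   /* an active op is incompatible with `op` */
        }
    }
    bs->op_count[op]++;
    return true;
}

void bdrv_op_end(BlockDriverState *bs, BdrvOpType op)
{
    assert(bs->op_count[op] > 0);
    bs->op_count[op]--;
}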

Fam


