From: Kevin Wolf
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH v2 10/11] blockjob: refactor backup_start as backup_job_create
Date: Tue, 11 Oct 2016 11:35:53 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

Am 11.10.2016 um 00:51 hat John Snow geschrieben:
> >>Sadly for me, I realized this patch has a potential problem. When we
> >>were adding the bitmap operations, it became clear that the
> >>atomicity point was during .prepare, not .commit.
> >>
> >>e.g. the bitmap is cleared or created during prepare, and backup_run
> >>installs its Write Notifier at that point in time, too.
> >
> >Strictly speaking that's wrong then.
> >
> 
> I agree, though I do remember this coming up during the bitmap
> review process that the current point-in-time spot is during prepare
> at the moment.
> 
> I do think that while it's at least a consistent model (The model
> where we do in fact commit during .prepare(), and simply undo or
> revert during .abort(), and only clean or remove undo-cache in
> .commit()) it certainly violates the principle of least surprise and
> is a little rude...

As long as we can reliably undo things in .abort (i.e. use operations
that can't fail) and keep the locks and the device drained, we should be
okay in terms of atomicity.

I think it's still nicer if we can enable things only in .commit, but
sometimes we have to use operations that could fail, so we have to do
them in .prepare.

The exact split between .prepare/.commit/.abort isn't visible on the
external interfaces as long as it's done correctly, so it doesn't
necessarily have to be the same for all commands.

> >The write notifier doesn't really hurt because it is never triggered
> >between prepare and commit (we're holding the lock) and it can just be
> >removed again.
> >
> >Clearing the bitmap is a bug because the caller could expect that the
> >bitmap is in its original state if the transaction fails. I doubt this
> >is a problem in practice, but we should fix it anyway.
> 
> We make a backup to undo the process if it fails. I only mention it
> to emphasize that the atomic point appears to be during prepare. In
> practice we hold the locks for the whole process, but... I think
> Paolo may be actively trying to change that.

Well, the whole .prepare/.commit or .prepare/.abort sequence is supposed
to be atomic, so it's really the same thing. Changing this would break
the transactional behaviour, so that's not possible anyway.

> >By the way, why did we allow to add a 'bitmap' option for DriveBackup
> >without adding it to BlockdevBackup at the same time?
> 
> I don't remember. I'm not sure anyone ever audited it to convince
> themselves it was a useful or safe thing to do. I believe at the
> time I was pushing for bitmaps in DriveBackup, Fam was still
> authoring the BlockdevBackup interface.

Hm, maybe that's why. I checked the commit dates of both (and there
BlockdevBackup was earlier), but I didn't check the development history.

Should we add it now or is it a bad idea?

> >>By changing BlockJobs to only run on commit, we've severed the
> >>atomicity point such that some actions will take effect during
> >>prepare, and others at commit.
> >>
> >>I still think it's the correct thing to do to delay the BlockJobs
> >>until the commit phase, so I will start auditing the code to see how
> >>hard it is to shift the atomicity point to commit instead. If it's
> >>possible to do that, I think from the POV of the managing
> >>application, having the atomicity point be
> >>
> >>Feel free to chime in with suggestions and counterpoints until then.
> >
> >I agree that jobs have to be started only at commit. There may be other
> >things that are currently happening in prepare that really should be
> >moved as well, but as long as moving one thing but not the other
> >doesn't break anything that was working, we can fix one thing at a
> >time.
> >
> >Kevin
> >
> 
> Alright, let's give this a whirl.
> 
> We have 8 transaction actions:
> 
> drive_backup
> blockdev_backup
> block_dirty_bitmap_add
> block_dirty_bitmap_clear
> abort
> blockdev_snapshot
> blockdev_snapshot_sync
> blockdev_snapshot_internal_sync
> 
> Drive and Blockdev backup are already modified to behave
> point-in-time at time of .commit() by changing them to only begin
> running once the commit phase occurs.
> 
> Bitmap add and clear are trivial to rework; clear just moves the
> call to clear into .commit(), with possibly some action taken to
> prevent the bitmap from becoming used by some other process in the
> meantime. Add is easy to rework too: we can create it during prepare
> but reset it back to zero during commit if necessary.
> 
> Abort needs no changes.
> 
> blockdev_snapshot[_sync] actually appears to already be doing the
> right thing, by only installing the new top layer during commit,
> which makes this action inconsistent by current semantics, but
> requires no changes to move to the desired new semantics.

This doesn't sound too bad.

> That leaves only the internal snapshot to worry about, which does
> admittedly look like quite the yak to shave. It's a bit out of scope
> for me, but Kevin, do you think this is possible?
> 
> Looks like implementations are qcow2, rbd, and sheepdog. I imagine
> this would need to be split into prepare and commit semantics to
> accommodate this change... though we don't have any meaningful
> control over the rbd implementation.
> 
> Any thoughts? I could conceivably just change everything over to
> working primarily during .commit(), and just argue that the locks
> held for the transaction are sufficient to leave the internal
> snapshot alone "for now," ...

Leave them alone. We don't really support atomic internal snapshots. We
could make some heavy refactoring in order to split the BlockDriver
callbacks into prepare/commit/abort, but that's probably not worth the
effort and would make some code that already isn't tested much a lot
more complex.

If we ever decided to get serious about internal snapshots, we could
still do this. I kind of like internal snapshots, but I doubt it will
happen.

Kevin


