
Re: [Qemu-devel] Re: KVM call agenda for Oct 19


From: Ayal Baron
Subject: Re: [Qemu-devel] Re: KVM call agenda for Oct 19
Date: Tue, 19 Oct 2010 16:57:25 -0400 (EDT)

----- "Anthony Liguori" <address@hidden> wrote:

> On 10/19/2010 11:54 AM, Ayal Baron wrote:
> > ----- "Anthony Liguori"<address@hidden>  wrote:
> >
> >    
> >> On 10/19/2010 07:48 AM, Dor Laor wrote:
> >>      
> >>> On 10/19/2010 04:11 AM, Chris Wright wrote:
> >>>        
> >>>> * Juan Quintela (address@hidden) wrote:
> >>>>          
> >>>>> Please send in any agenda items you are interested in covering.
> >>>>>            
> >>>> - 0.13.X -stable handoff
> >>>> - 0.14 planning
> >>>> - threadlet work
> >>>> - virtfs proposals
> >>>>
> >>>>          
> >>> - Live snapshots
> >>>    - We were asked to add this feature for external qcow2
> >>>      images. Will a simple approach of fsync + tracking each
> >>>      requested backing file (it can be per vDisk) and re-opening
> >>>      the new image be accepted?
> >> I had assumed that this would involve:
> >>
> >> qemu -hda windows.img
> >>
> >> (qemu) snapshot ide0-disk0 snap0.img
> >>
> >> 1) create snap0.img internally by doing the equivalent of
> >>    `qemu-img create -f qcow2 -b windows.img snap0.img'
> >> 2) bdrv_flush('ide0-disk0')
> >> 3) bdrv_open(snap0.img)
> >> 4) bdrv_close(windows.img)
> >> 5) rename('windows.img', 'windows.img.tmp')
> >> 6) rename('snap0.img', 'windows.img')
> >> 7) rename('windows.img.tmp', 'snap0.img')
> >>      
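For concreteness, the file-backed case would look roughly like this from
the outside (just a sketch of steps 1-7; windows.img and snap0.img are
the example names from above):

  qemu-img create -f qcow2 -b windows.img snap0.img
  # qemu then flushes ide0-disk0, opens snap0.img, closes windows.img
  mv windows.img windows.img.tmp
  mv snap0.img windows.img       # in-use overlay takes the original name
  mv windows.img.tmp snap0.img   # frozen base ends up under the snapshot name
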
> > All the rename logic assumes files; we need to take devices into
> > account as well (namely LVs).
> >    
> 
> Sure, just s/rename/lvrename/g.

No can do.  In our setup, LVM is running in a clustered environment in a
single-writer / multiple-readers configuration.  The VM may be running on a
reader, which is not allowed to lvrename (that would corrupt the entire VG).

> 
> The renaming step can be optional and a management tool can take care
> of that.  It's really just there for convenience, since the user
> expectation is that when you give a snapshot a name, it is the
> snapshot that ends up under that name, not the new in-use image.

So keeping it optional is good.

> 
> > Also, just to make sure, this should support multiple images
> (concurrent snapshot of all of them or a subset).
> >    
> 
> Yeah, concurrent is a little trickier.  A simple solution is for a
> management tool to just do a stop + multiple snapshots + cont.  It's
> equivalent to what we'd do if we don't do it via aio, which is
> probably how we'd do the first implementation.
> 
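(Roughly, using the snapshot command sketched above:

  (qemu) stop
  (qemu) snapshot ide0-disk0 snap0.img
  (qemu) snapshot ide1-disk0 snap1.img
  (qemu) cont

where ide1-disk0/snap1.img are made-up names for a second disk.)
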
> But in the long term, I think the most elegant solution would be to
> expose the freeze API via QMP and let a management tool freeze
> multiple devices, then start taking snapshots, then unfreeze them
> when all snapshots are complete.
> 
> Regards,
> 
> Anthony Liguori

qemu should call the freeze as part of the process (for all of the relevant
devices), then take the snapshots, then thaw.
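
Either way, the flow on the wire would look something like this (the
freeze/thaw command names are made up purely for illustration; no such
QMP/monitor commands exist yet):

  blockdev_freeze ide0-disk0
  blockdev_freeze ide1-disk0
  snapshot ide0-disk0 snap0.img
  snapshot ide1-disk0 snap1.img
  blockdev_thaw ide0-disk0
  blockdev_thaw ide1-disk0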

> 
> > Otherwise looks good.
> >
> >    
> >> Regards,
> >>
> >> Anthony Liguori
> >>
> >>      
> >>>    - Integration with FS freeze for consistent guest app snapshots
> >>>      Many apps do not sync their RAM state to disk correctly or
> >>>      frequently enough. Physical-world backup software calls fs
> >>>      freeze on xfs and VSS on Windows to make the backup
> >>>      consistent.
> >>>      In order to integrate this with live snapshots we need a
> >>>      guest agent to trigger the guest fs freeze.
> >>>      We can either have qemu communicate with the agent directly
> >>>      through virtio-serial, or have a mgmt daemon use virtio-serial
> >>>      to communicate with the guest in addition to QMP messages
> >>>      about the live snapshot state.
> >>>      Preferences? The first solution complicates qemu while the
> >>>      second complicates mgmt.
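
For reference, what the guest agent would have to drive inside the guest
is essentially the following (xfs shown, as in the example above; /data
is an arbitrary example mount point, and on Windows the equivalent would
be a VSS requestor):

  xfs_freeze -f /data   # flush dirty data and block new writes
  # ... host takes the live snapshot here ...
  xfs_freeze -u /data   # thaw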


