qemu-devel

Re: [Qemu-devel] Re: KVM call agenda for Oct 19


From: Avi Kivity
Subject: Re: [Qemu-devel] Re: KVM call agenda for Oct 19
Date: Tue, 19 Oct 2010 15:03:50 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.9) Gecko/20100921 Fedora/3.1.4-1.fc13 Lightning/1.0b3pre Thunderbird/3.1.4

 On 10/19/2010 02:58 PM, Dor Laor wrote:
On 10/19/2010 02:55 PM, Avi Kivity wrote:
On 10/19/2010 02:48 PM, Dor Laor wrote:
On 10/19/2010 04:11 AM, Chris Wright wrote:
* Juan Quintela (address@hidden) wrote:

Please send in any agenda items you are interested in covering.

- 0.13.X -stable handoff
- 0.14 planning
- threadlet work
- virtfs proposals


- Live snapshots
- We were asked to add this feature for external qcow2
images. Would a simple approach of fsync + tracking each requested
backing file (it can be per vDisk) and re-opening the new image
be accepted?
- Integration with FS freeze for consistent guest app snapshots
Many apps do not sync their RAM state to disk correctly or frequently
enough. Physical-world backup software calls FS freeze on xfs and
VSS on Windows to make the backup consistent.
In order to integrate this with live snapshots we need a guest
agent to trigger the guest FS freeze.
We can either have qemu communicate with the agent directly through
virtio-serial, or have a mgmt daemon use virtio-serial to
communicate with the guest in addition to QMP messages about the
live snapshot state.
Preferences? The first solution complicates qemu while the second
complicates mgmt.
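For the guest-side piece, the freeze itself is a single ioctl per filesystem on Linux. A minimal sketch of what such an agent helper might look like (an assumption for illustration — a Linux guest using the FIFREEZE/FITHAW ioctls — not the actual agent code under discussion):

```c
/* Sketch of a guest-side freeze helper -- an assumption about what the
 * proposed agent would do on a Linux guest, not actual qemu code. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIFREEZE, FITHAW */

/* Freeze (freeze != 0) or thaw (freeze == 0) the filesystem mounted at
 * 'mountpoint'.  FIFREEZE flushes dirty data and blocks new writes until
 * a matching FITHAW; both require CAP_SYS_ADMIN.  Returns 0 on success,
 * -1 on error. */
static int fs_freeze(const char *mountpoint, int freeze)
{
    int fd = open(mountpoint, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }
    int ret = ioctl(fd, freeze ? FIFREEZE : FITHAW, 0);
    if (ret < 0) {
        perror(freeze ? "FIFREEZE" : "FITHAW");
    }
    close(fd);
    return ret < 0 ? -1 : 0;
}
```

The agent would walk the mounted filesystems and call fs_freeze(mnt, 1) for each when the freeze request arrives (over virtio-serial or whichever channel is chosen), let the host take the snapshot, then call fs_freeze(mnt, 0). Apps that need VSS-style quiescing would still have to be notified separately.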

Third option: make the freeze path management -> qemu -> virtio-blk ->
guest kernel -> file systems. The advantage is that it's easy to
associate file systems with a block device this way.

OTOH the userspace freeze path already exists, and now you create another path.

I guess we would still have a userspace daemon; instead of talking to virtio-serial it talks to virtio-blk. So:

management -> qemu -> virtio-blk -> guest driver -> kernel fs resolver -> daemon -> apps

Yuck.

What about an FS that spans LVM over multiple drives? IDE/SCSI?

Good points.

--
error compiling committee.c: too many arguments to function



