From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device
Date: Tue, 11 May 2010 21:13:06 +0300
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.0.4-1.fc12 Thunderbird/3.0.4

On 05/11/2010 08:05 PM, Anthony Liguori wrote:
> On 05/11/2010 11:39 AM, Cam Macdonell wrote:
>>
>> Most of the people I hear from who are using my patch are using a peer
>> model to share data between applications (simulations, JVMs, etc).
>> But guest-to-host applications work as well, of course.
>>
>> I think "transparent migration" can be achieved by making the
>> connected/disconnected state transparent to the application.

>> When using the shared memory server, the server has to be set up anyway
>> on the new host, and copying the memory region could be part of that as
>> well if the application needs the contents preserved.  I don't think
>> it has to be handled by the savevm/loadvm operations.  There's little
>> difference between naming one VM the master or letting the shared
>> memory server act as the master.
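
For illustration only (this is not the actual ivshmem server code),
creating the region on the new host and seeding it with the old contents
could look roughly like the sketch below; the object name "/ivshmem", the
16 MB size, and the out-of-band "region.img" copy are all assumptions.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 16 << 20;        /* assumed 16 MB region */

    /* Create the named shared memory object on the new host and size it. */
    int fd = shm_open("/ivshmem", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, size) < 0) {
        perror("shm_open/ftruncate");
        return 1;
    }

    void *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Optionally seed the region from a copy of the old host's contents
     * transferred out of band ("region.img" is hypothetical). */
    FILE *img = fopen("region.img", "rb");
    if (img) {
        if (fread(region, 1, size, img) == 0)
            fprintf(stderr, "empty or unreadable image\n");
        fclose(img);
    }

    /* From here the server would pass the fd to guests as they connect. */
    munmap(region, size);
    close(fd);
    return 0;
}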

> Except that to make it work with the shared memory server, you need the
> server to participate in the live migration protocol, which is something
> I'd prefer to avoid as it introduces additional downtime.

We can tunnel its migration data through qemu. Of course, gathering its
dirty bitmap will be interesting. DSM may be the way to go here (we can
even live migrate qemu through DSM: share the guest address space and
immediately start running on the destination node; the guest will fault
its memory over to the destination). An advantage is that the CPU load is
immediately transferred.
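
A minimal user-space sketch of that fault-driven idea (assumed names and
sizes; fetch_page_from_source() is a placeholder, and a real implementation
would hook into the migration connection rather than a SIGSEGV handler):
the destination maps the guest region with no access and pulls each page
from the source on first touch.

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static uint8_t *region;
static size_t region_size = 16 << 20;    /* assumed guest region size */
static long page_size;

/* Placeholder for pulling one page's contents from the source node. */
static void fetch_page_from_source(void *page)
{
    memset(page, 0, page_size);           /* stand-in for a network fetch */
}

static void fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    uint8_t *addr = info->si_addr;

    /* Only handle faults inside the shared region; anything else is a bug. */
    if (addr < region || addr >= region + region_size)
        _exit(1);

    uint8_t *page = (uint8_t *)((uintptr_t)addr & ~(uintptr_t)(page_size - 1));

    /* Make the page accessible, then fill it with the source's contents. */
    mprotect(page, page_size, PROT_READ | PROT_WRITE);
    fetch_page_from_source(page);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);

    /* Reserve the whole region with no access so every first touch faults. */
    region = mmap(NULL, region_size, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED)
        return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* First access to any page traps into the handler, which pulls it in. */
    region[0] = 42;
    printf("page faulted in, value %d\n", region[0]);
    return 0;
}

This is also why the CPU load moves immediately: execution continues on the
destination while pages trickle in on demand.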

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



