
[Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device


From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device
Date: Tue, 11 May 2010 21:09:40 +0300
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.0.4-1.fc12 Thunderbird/3.0.4

On 05/11/2010 06:51 PM, Anthony Liguori wrote:
> On 05/11/2010 09:53 AM, Avi Kivity wrote:
>> On 05/11/2010 05:17 PM, Cam Macdonell wrote:
>>>> The master is the shared memory area. It's a completely separate entity
>>>> that is represented by the backing file (or shared memory server handing
>>>> out the fd to mmap).  It can exist independently of any guest.
>>> I think the master/peer idea would be necessary if we were sharing
>>> guest memory (sharing guest A's memory with guest B).  Then if the
>>> master (guest A) dies, perhaps something needs to happen to preserve
>>> the memory contents.
>>
>> Definitely.  But we aren't...
>
> Then transparent live migration is impossible.  IMHO, that's a fundamental
> mistake that we will regret down the road.

I don't see why the two cases are any different. In all cases, all guests have to be migrated simultaneously, or we have to support distributed shared memory (likely at the kernel level). Who owns the memory makes no difference.

There are two non-transparent variants:
- forcibly disconnect the migrating guest, and migrate it later
  - puts all the burden on the guest application
- ask the guest to detach from the memory device
  - host is at the mercy of the guest

Since the consumers of shared memory are academia, they'll probably implement DSM.


>>> But since we're sharing host memory, the applications in the guests can
>>> race to determine the master by grabbing a lock at offset 0 or by using
>>> the lowest VM ID.
>>>
>>> Looking at it another way, it is the applications using shared memory
>>> that may or may not need a master; the Qemu processes don't need the
>>> concept of a master since the memory belongs to the host.
>>
>> Exactly.  Furthermore, even in a master/slave relationship, there will be
>> different masters for different sub-areas; it would be a pity to expose all
>> this in the hardware abstraction.  This way we have an external device, and
>> PCI HBAs which connect to it - just like a multi-tailed SCSI disk.
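
As an illustration of the "grab a lock at offset 0" election described above,
a minimal guest-side sketch might look like this (assuming the shared BAR is
already mmap()ed by the application; the function and variable names are
invented for illustration and are not part of the patch):

/* Illustrative sketch only: elect a master by atomically claiming the
 * word at offset 0 of the shared region.  Assumes all peers agree that
 * offset 0 is reserved for this lock. */
#include <stdint.h>

static int try_become_master(volatile uint32_t *shm, uint32_t my_vm_id)
{
    /* GCC atomic builtin: succeeds only for the first peer to claim it.
     * The +1 keeps a claimed lock distinct from the unclaimed value 0. */
    return __sync_bool_compare_and_swap(&shm[0], 0, my_vm_id + 1);
}

Whether the winner is chosen by this lock or by the lowest VM ID is entirely
up to the applications; nothing in the Qemu processes has to know about it.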

> To support transparent live migration, it's necessary to do two things:
>
> 1) Preserve the memory contents of the PCI BAR after it is disconnected
>    from a shared memory segment.
> 2) Synchronize any changes made to the PCI BAR with the shared memory
>    segment upon reconnect/initial connection.

Disconnect/reconnect mean it's no longer transparent.


> N.B. savevm/loadvm constitute disconnect and reconnect events, respectively.
>
> Supporting (1) is easy since we just need to memcpy() the contents of the shared memory segment to a temporary RAM area upon disconnect.
>
> Supporting (2) is easy when the shared memory segment is viewed as owned by the guest since it has the definitive copy of the data.  IMHO, this is what role=master means.

There is no 'the guest'; if the memory is to be shared, there will be multiple guests (or multiple entities).
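
To make the two steps concrete, here is a rough sketch of the
disconnect/reconnect handling described above, assuming the device keeps a
pointer to the mmap()ed segment (the structure and function names are
invented and do not match the posted patch):

/* Rough sketch of (1) and (2); all names are invented for illustration. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    void   *shm;    /* mmap()ed shared memory segment, NULL while detached */
    void   *saved;  /* private copy the guest sees while disconnected */
    size_t  size;
} ShmemDevState;

/* (1) On disconnect (e.g. before savevm): preserve the BAR contents. */
static void shmem_disconnect(ShmemDevState *s)
{
    s->saved = malloc(s->size);
    memcpy(s->saved, s->shm, s->size);
    s->shm = NULL;  /* the BAR would then be remapped to s->saved (not shown) */
}

/* (2) On reconnect: role=master pushes the guest's copy back into the
 * segment; role=peer discards it and adopts whatever the segment holds. */
static void shmem_reconnect(ShmemDevState *s, void *shm, int master)
{
    if (master) {
        memcpy(shm, s->saved, s->size);
    }
    free(s->saved);
    s->saved = NULL;
    s->shm = shm;   /* remap the BAR back to the shared segment (not shown) */
}

The policy decision lives entirely in the reconnect path, which is exactly
where the role=master/role=peer disagreement above comes from.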

> However, if we want to support a model where the guest does not have a definitive copy of the data, upon reconnect, we need to throw away the guest's changes and make the shared memory segment appear to simultaneously update to the guest. This is what role=peer means.
>
> For role=peer, it's necessary to signal to the guest when it's not connected. This means prior to savevm it's necessary to indicate to the guest that it's been disconnected.
>
> I think it's important that we build this mechanism in from the start because as I've stated in the past, I don't think role=peer is going to be the dominant use-case. I actually don't think that shared memory between guests is all that interesting compared to shared memory to an external process on the host.

I'd like to avoid making the distinction.  Why limit at the outset?
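
For completeness: the "signal the guest before savevm" idea above presupposes
some guest-visible connection state.  One conceivable shape for the guest
side, purely hypothetical since no such register exists in the posted patch,
would be:

/* Purely hypothetical: assumes a "connected" status bit were added to the
 * device's register BAR.  Neither the offset nor the bit exists in the
 * posted patch; this only illustrates the kind of check a guest driver
 * would need before trusting the shared region. */
#include <stdint.h>

#define SHMEM_STATUS_REG   0x10          /* invented register offset */
#define SHMEM_F_CONNECTED  (1u << 0)     /* invented status bit */

static int shmem_connected(volatile uint32_t *regs)
{
    return (regs[SHMEM_STATUS_REG / 4] & SHMEM_F_CONNECTED) != 0;
}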

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



