
Re: [Qemu-devel] Re: [SeaBIOS] [PATCH 0/8] option rom loading overhaul.


From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: [SeaBIOS] [PATCH 0/8] option rom loading overhaul.
Date: Tue, 22 Dec 2009 10:16:42 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-4.fc12 Thunderbird/3.0

On 12/22/2009 09:54 AM, Paul Brook wrote:
>>> Ram allocations should be associated with a device. The VMState stuff
>>> should make this fairly straightforward.
>> Right, but for the sake of simplicity, you don't want to treat that ram
>> any differently than main ram wrt live migration.  That's why I proposed
>> adding a context id for each ram region.  That would allow us to use
>> something like the qdev name + id as the context id for a ram chunk to
>> get that association while still doing live ram migration of the memory.
> IMO the best way to do this is to do it via existing VMState machinery.
> We've already matched up DeviceStates so this gets us a handy unique
> identifier for every ram block. For system memory we can add a dummy device.
> Medium term we're probably going to want this anyway.

Okay, I understand and agree.

I think the way this would work is that we would have a ram_addr type for VMState that describes an actual ram allocation and its size. qemu_ram_alloc() would not need to take a context. Ram live migration would walk the list of registered VMState entries searching for anything that had a ram_addr type and would add it to the ram migration.

For system ram, we need dummy devices.
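
As a rough illustration of the scheme in the previous two paragraphs, here is a minimal standalone sketch (toy code, not actual QEMU API; register_ram_block(), migrate_ram(), and identifiers like "e1000.0/rom" and "system/ram" are made up for illustration) of ram blocks being migrated under a stable per-device identifier instead of by allocation order:

/* Toy model: ram blocks are registered under a stable identifier
 * (qdev name + field), the way a walk over registered VMState
 * entries could discover them, rather than matched by the order
 * in which qemu_ram_alloc() happened to be called. */
#include <stdio.h>
#include <stdint.h>

#define MAX_BLOCKS 16

typedef struct RamBlock {
    const char *id;    /* e.g. "e1000.0/rom" -- hypothetical naming */
    uint64_t size;
} RamBlock;

static RamBlock blocks[MAX_BLOCKS];
static int nb_blocks;

/* Stand-in for collecting every field of the proposed ram_addr
 * VMState type from the registered VMState entries. */
static void register_ram_block(const char *id, uint64_t size)
{
    blocks[nb_blocks].id = id;
    blocks[nb_blocks].size = size;
    nb_blocks++;
}

static void migrate_ram(void)
{
    int i;

    for (i = 0; i < nb_blocks; i++) {
        /* The destination pairs blocks by id, so adding or
         * reordering allocations no longer breaks migration. */
        printf("sending '%s' (%llu bytes)\n",
               blocks[i].id, (unsigned long long)blocks[i].size);
    }
}

int main(void)
{
    register_ram_block("system/ram", 128ULL << 20); /* dummy device */
    register_ram_block("e1000.0/rom", 64ULL << 10);
    migrate_ram();
    return 0;
}

The point is that the identifier, rather than ordinal position, is what pairs a block on the source with the same block on the destination.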

I think we probably ought to integrate VMState into qdev first, though. I think that makes everything a bit more manageable.
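
In the same spirit, a standalone sketch of what that qdev integration might look like (stand-in types, not the real QEMU structures; vmstate_register() here is a local stub for the savevm registration call):

/* Sketch: DeviceInfo carries the device's VMStateDescription, so
 * qdev can register savevm state itself when a device is created,
 * and name + instance id becomes the unique identifier that the
 * ram migration code can reuse. */
#include <stdio.h>

typedef struct VMStateDescription {
    const char *name;
} VMStateDescription;

typedef struct DeviceInfo {
    const char *name;
    const VMStateDescription *vmsd;  /* proposed addition */
} DeviceInfo;

typedef struct DeviceState {
    const DeviceInfo *info;
    int instance_id;
} DeviceState;

/* Local stub standing in for QEMU's savevm registration. */
static void vmstate_register(int instance_id,
                             const VMStateDescription *vmsd,
                             void *opaque)
{
    (void)opaque;
    printf("registered vmstate '%s' instance %d\n",
           vmsd->name, instance_id);
}

static void qdev_init(DeviceState *dev)
{
    if (dev->info->vmsd) {
        vmstate_register(dev->instance_id, dev->info->vmsd, dev);
    }
}

int main(void)
{
    static const VMStateDescription vmstate_e1000 = { "e1000" };
    static const DeviceInfo e1000_info = { "e1000", &vmstate_e1000 };
    DeviceState dev = { &e1000_info, 0 };

    qdev_init(&dev);
    return 0;
}

With the vmsd hanging off the device, individual devices never talk to the migration code directly, which is what makes the "device doesn't know or care how the migration occurs" property below possible.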

>>> Guest address space mappings are a completely separate issue. The device
>>> should be migrating the mappings (directly or via a PCI BAR) as part of
>>> its state migration. The ram regions might not be mapped into guest
>>> address space at all.
>> We don't migrate guest address space memory today.  We migrate anything
>> that's qemu_ram_alloc()'d.  The big problem we have though is that we
>> don't have any real association between the qemu_ram_alloc() results and
>> what the context of the allocation was.  We assume the order of these
>> allocations is fixed and that's entirely wrong.
> The nice thing about the VMState approach is that the device doesn't know
> or care how the migration occurs. For bonus points it leads fairly directly
> to an object-based mapping API, so we can change the implementation or
> migrate the ram to a different location without disturbing the device.

Yeah, I like it.

Regards,

Anthony Liguori

> Paul




