
Re: [Qemu-devel] [RFC] Memory API


From: Jan Kiszka
Subject: Re: [Qemu-devel] [RFC] Memory API
Date: Wed, 18 May 2011 18:00:34 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2011-05-18 17:42, Avi Kivity wrote:
> On 05/18/2011 06:36 PM, Jan Kiszka wrote:
>>>
>>>  We need to head for the more hardware-like approach.  What happens when
>>>  you program overlapping BARs?  I imagine the result is
>>>  implementation-defined, but ends up with one region decoded in
>>>  preference to the other.  There is simply no way to reject an
>>>  overlapping mapping.
>>
>> But there is also no simple way to allow them. At least not without
>> exposing control over their ordering AND allowing managing code (e.g.
>> of the PCI bridge or the chipset) that controls registrations to be
>> hooked up.
> 
> What about memory_region_add_subregion(..., int priority) as I suggested 
> in another message?

That's fine, but it also requires a change in how, or better where,
devices register their regions.
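
To make that concrete, here is a minimal sketch of what overlapping BAR
registrations could look like if the subregion call grew a priority
argument as suggested. The prototypes below are my assumption of the
shape, not the actual RFC API:

  #include <stdint.h>

  /* Sketch only -- assumed prototypes following the priority
   * suggestion; the real API may differ in names and argument order. */
  typedef struct MemoryRegion MemoryRegion;

  void memory_region_add_subregion(MemoryRegion *parent, uint64_t offset,
                                   MemoryRegion *child, int priority);

  static MemoryRegion pci_space;   /* container owned by the PCI host bridge   */
  static MemoryRegion bar0, bar1;  /* two BARs the guest programmed to overlap */

  static void map_overlapping_bars(void)
  {
      /* Both BARs claim [0xe0000000, 0xe0001000); the higher priority
       * wins the decode, mimicking implementation-defined hardware
       * behaviour instead of rejecting the second registration. */
      memory_region_add_subregion(&pci_space, 0xe0000000, &bar0, 1);
      memory_region_add_subregion(&pci_space, 0xe0000000, &bar1, 0);
  }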

> 
> Regarding bridges, every registration request flows through them so they 
> already have full control.

Not everything is PCI; we also have ISA, for example. If we were able to
route such requests through a hierarchy of abstract bridges as well,
even better.
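
As a rough illustration of what I mean by a bridge hierarchy (the names
and offsets below are made up for the example, not part of the proposed
API):

  #include <stdint.h>

  /* Sketch only -- each bus bridge owns a container region and
   * registers it with its parent, so every mapping request flows
   * through the hierarchy, whether the device sits on PCI or on ISA. */
  typedef struct MemoryRegion MemoryRegion;

  void memory_region_add_subregion(MemoryRegion *parent, uint64_t offset,
                                   MemoryRegion *child, int priority);

  static MemoryRegion system_memory;  /* chipset-level container            */
  static MemoryRegion pci_space;      /* owned by the PCI host bridge       */
  static MemoryRegion isa_space;      /* owned by the PCI-to-ISA bridge     */
  static MemoryRegion vga_lowmem;     /* legacy window of an ISA device     */

  static void build_bus_hierarchy(void)
  {
      /* The chipset decides where PCI memory sits, the PCI-ISA bridge
       * decides where the ISA window sits, and the ISA device only
       * ever talks to its own bridge. */
      memory_region_add_subregion(&system_memory, 0x0, &pci_space, 0);
      memory_region_add_subregion(&pci_space, 0x0, &isa_space, 0);
      memory_region_add_subregion(&isa_space, 0xa0000, &vga_lowmem, 0);
  }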

> 
>> ...
>>>>  See [1]: We really need to get rid of slot management on
>>>>  CPUPhysMemoryClient side. Your API provides a perfect opportunity to
>>>>  establish the infrastructure of slot tracking at a central place. We can
>>>>  then switch from reporting cpu_registering_memory events to reporting
>>>>  coalesced changes to slots, those slots that the core also uses. So a new
>>>>  CPUPhysMemoryClient API needs to be considered in this API change as
>>>>  well - or we change twice in the end.
>>>
>>>  The kvm memory slots (and hopefully future qemu memory slots) are a
>>>  flattened view of the MemoryRegion tree.  There is no 1:1 mapping.
>>
>> We need a flattened view of your memory regions during runtime as well. No
>> major difference here. If we share that view with PhysMemClients, they
>> can drop most of their creative slot tracking algorithms, focusing on
>> the real differences.
> 
> We'll definitely have a flattened view (phys_desc is such a flattened 
> view, hopefully we'll have a better one).

phys_desc is not exportable. If we try (and we do from time to time...),
we end up with more slots than clients like kvm will ever be able to handle.

> 
> We can basically run a tree walk on each change, emitting ranges in 
> order and sending them to PhysMemClients.

I'm specifically thinking of fully trackable slot updates. The clients
should not have to maintain the flat layout themselves; they should just
receive updates in the form of slot X added/modified/removed. Right now
this flattening magic is duplicated across the clients, and that is very
bad.
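
Something along these lines is what I have in mind for the client side.
This is only a sketch of a possible interface; the names are invented
here and not part of the RFC:

  #include <stdint.h>

  /* Sketch only -- a hypothetical successor to CPUPhysMemoryClient
   * where the core keeps the flattened slot list and pushes deltas. */
  typedef struct MemorySlot {
      uint64_t start;          /* guest-physical start address      */
      uint64_t size;
      void    *host_ptr;       /* NULL for MMIO slots               */
      unsigned flags;          /* e.g. read-only, dirty-logging     */
  } MemorySlot;

  typedef struct MemorySlotClient {
      void (*slot_added)(const MemorySlot *slot, void *opaque);
      void (*slot_modified)(const MemorySlot *old_slot,
                            const MemorySlot *new_slot, void *opaque);
      void (*slot_removed)(const MemorySlot *slot, void *opaque);
      void *opaque;
  } MemorySlotClient;

  /* The core diffs the old and new flat views after each region change
   * and invokes exactly one callback per affected slot, so kvm, vhost
   * and friends no longer reconstruct the layout themselves. */
  void memory_slot_client_register(MemorySlotClient *client);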

Given that not only memory clients need that view, but that every TLB
miss (in TCG mode) requires identifying the effective slot as well, it
might be worth preparing a runtime structure at registration time that
supports this efficiently - but this time without wasting memory.
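
For instance (purely illustrative, not proposing concrete names), a
sorted array of non-overlapping ranges built at registration time would
give both the clients and the TCG slow path an O(log n) lookup without a
page-granular table like phys_desc:

  #include <stddef.h>
  #include <stdint.h>

  /* Sketch only -- flattened ranges, sorted by start, non-overlapping. */
  typedef struct FlatRange {
      uint64_t start;          /* guest-physical start address       */
      uint64_t size;
      void    *region;         /* backing region, opaque here        */
  } FlatRange;

  static const FlatRange *flat_range_lookup(const FlatRange *ranges,
                                            size_t nr, uint64_t addr)
  {
      size_t lo = 0, hi = nr;

      while (lo < hi) {
          size_t mid = lo + (hi - lo) / 2;

          if (addr < ranges[mid].start) {
              hi = mid;
          } else if (addr - ranges[mid].start >= ranges[mid].size) {
              lo = mid + 1;
          } else {
              return &ranges[mid];   /* hit: no per-page descriptors needed */
          }
      }
      return NULL;                   /* unassigned address */
  }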

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


