From: Avi Kivity
Subject: Re: [Qemu-devel] directory hierarchy
Date: Mon, 24 Sep 2012 11:54:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120828 Thunderbird/15.0

On 09/23/2012 06:07 PM, Blue Swirl wrote:
> On Sun, Sep 23, 2012 at 8:25 AM, Avi Kivity <address@hidden> wrote:
>> On 09/22/2012 04:15 PM, Blue Swirl wrote:
>>> >
>>> >> This could have nice cleanup effects, though, and could for example
>>> >> enable a generic 'info vmtree' to discover VA->PA mappings for any
>>> >> target instead of the current MMU table walkers.
>>> >
>>> > How?  That's in a hardware-defined format that's completely invisible
>>> > to the memory API.
>>>
>>> It's invisible now, but target-specific code could grab the mappings
>>> and feed them to the memory API.  The memory API would then just see
>>> each CPU's virtual memory as an address space that maps onto the
>>> physical memory address space.
>>>
>>> For RAM-backed MMU tables, as on x86 and Sparc32, writes to the page
>>> table memory would need to be tracked like self-modifying code (SMC);
>>> for in-MMU TLBs this would not be needed.
>>>
>>> Again, if performance degraded, this would not be worthwhile.  I'd
>>> expect VA->PA mappings to change at least at the context switch rate +
>>> page fault rate + mmap/exec rate, which could amount to thousands of
>>> changes per second per CPU.
>>>
>>> In theory KVM could use the memory API as a CPU-type-agnostic way to
>>> exchange this information.  I'd expect the KVM exit rate is not nearly
>>> as high, and in many cases exchanging mapping information would not be
>>> needed.  It would not improve performance there either.
>>>
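For concreteness, a minimal sketch of the kind of thing proposed above,
assuming the current memory_region_*() / address_space_init() signatures.
The CPUVirtualSpace type and the cpu_va_space_*() helpers are invented;
nothing like them exists in the tree.

/* Hypothetical sketch only: model each guest VA->PA mapping as a
 * page-sized alias of system memory inside a per-CPU "virtual"
 * address space. */
#include "qemu/osdep.h"
#include "exec/memory.h"

typedef struct CPUVirtualSpace {
    MemoryRegion root;          /* container spanning the CPU's VA range */
    AddressSpace as;
} CPUVirtualSpace;

static void cpu_va_space_init(CPUVirtualSpace *vs, const char *name)
{
    memory_region_init(&vs->root, NULL, name, UINT64_MAX);
    address_space_init(&vs->as, &vs->root, name);
}

/* Target-specific MMU code would call this whenever it learns of a new
 * VA->PA mapping (page table walk, TLB fill, ...). */
static void cpu_va_space_map(CPUVirtualSpace *vs, MemoryRegion *sysmem,
                             hwaddr va, hwaddr pa, uint64_t page_size)
{
    MemoryRegion *alias = g_new0(MemoryRegion, 1);

    memory_region_init_alias(alias, NULL, "va-page", sysmem, pa, page_size);
    memory_region_add_subregion(&vs->root, va, alias);
}

Every context switch, page fault and mmap/munmap would then add or remove
such aliases, which is exactly the churn rate estimated above.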
> 
> Perhaps I was not very clear, but this was just theoretical.
> 
>>
>> First, the memory API does not operate at that level.  It handles
>> (guest physical) -> (host virtual | io callback) translations.  These
>> are (guest virtual) -> (guest physical) translations.
> 
> I don't see why the memory API could not also be used for GVA->GPA
> translation, if we ignore performance for the sake of discussion.

For the reasons I mentioned.  The guest doesn't issue calls into the
memory API.  The granularity is wrong.  It is a system-wide API.
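To make the level difference concrete, here is roughly how a device model
(or anything else on the guest-physical side) uses the memory API; the
read_guest_physical() wrapper is made up, and the cpu_physical_memory_read()
signature is the one in recent trees.  The guest-virtual -> guest-physical
step never passes through this layer; it is done by target-specific MMU
code.

/* Illustrative only: memory API clients deal in guest-physical
 * addresses. */
#include "qemu/osdep.h"
#include "exec/cpu-common.h"

static uint32_t read_guest_physical(hwaddr gpa)
{
    uint32_t val;

    /* Resolves (guest physical) -> (host RAM | MMIO callback) through
     * the memory API's view of the machine. */
    cpu_physical_memory_read(gpa, &val, sizeof(val));
    return val;
}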

The latter two issues have to change to support IOMMUs, and then indeed
the memory API will be much closer to a CPU MMU (on x86 they can even
share page tables in some circumstances).  It will still be the wrong
API IMO.
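A hedged sketch of the shape that IOMMU support in the memory API
eventually took (not from this thread): a per-device IOMMU region whose
translate hook is called on demand, per page, much like a CPU TLB fill.
Type and field names follow include/exec/memory.h in recent trees, exact
signatures have varied across versions, and the page-table walk plus the
QOM type registration are omitted or invented here.

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"

/* Stand-in for walking the (hypothetical) device page tables. */
static hwaddr toy_walk_device_page_tables(hwaddr iova)
{
    return iova;                        /* identity map, for the sketch */
}

/* Called per access, on demand -- no mapping is tracked eagerly. */
static IOMMUTLBEntry toy_iommu_translate(IOMMUMemoryRegion *iommu,
                                         hwaddr addr,
                                         IOMMUAccessFlags flag,
                                         int iommu_idx)
{
    IOMMUTLBEntry entry = {
        .target_as       = &address_space_memory,
        .iova            = addr & ~(hwaddr)0xfff,
        .translated_addr = toy_walk_device_page_tables(addr) & ~(hwaddr)0xfff,
        .addr_mask       = 0xfff,       /* one 4 KiB page */
        .perm            = IOMMU_RW,
    };
    return entry;
}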


> 
>> Second, the memory API is machine-wide and designed for coarse maps.
>> Processor memory maps are per-CPU and page-grained.  (The memory API
>> actually needs to support page-grained maps efficiently (for IOMMUs)
>> and per-CPU maps (for SMM), but that's another story.)
>>
>> Third, we know from the pre-NPT/EPT days that tracking all mappings
>> destroys performance.  It's much better to do this on demand.
> 
> Yes, performance reasons kill this idea. It would still be beautiful.
> 

Maybe I'm missing something, but I don't see this.  But as you said,
it's theoretical.
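
On the "do it on demand" point above, the shape of the fix is a lookaside
cache that is only filled when an access misses, rather than mirroring
every guest mapping eagerly; QEMU's softmmu TLB is the real-world
analogue.  A self-contained toy version (not QEMU code):

/* Toy illustration: direct-mapped software TLB, filled on demand. */
#include <stdint.h>
#include <stdio.h>

#define TLB_SIZE   256
#define PAGE_SHIFT 12

typedef struct {
    uint64_t vpn;               /* virtual page number, UINT64_MAX = empty */
    uint64_t ppn;               /* physical page number */
} TLBEntry;

static TLBEntry tlb[TLB_SIZE];

/* Stand-in for a real page table walk. */
static uint64_t walk_page_tables(uint64_t vpn)
{
    return vpn ^ 0x100;         /* arbitrary toy mapping */
}

static uint64_t translate(uint64_t va)
{
    uint64_t vpn = va >> PAGE_SHIFT;
    TLBEntry *e = &tlb[vpn % TLB_SIZE];

    if (e->vpn != vpn) {        /* miss: fill this one entry on demand */
        e->vpn = vpn;
        e->ppn = walk_page_tables(vpn);
    }
    return (e->ppn << PAGE_SHIFT) | (va & ((1u << PAGE_SHIFT) - 1));
}

int main(void)
{
    for (int i = 0; i < TLB_SIZE; i++) {
        tlb[i].vpn = UINT64_MAX;
    }
    printf("0x%llx\n", (unsigned long long)translate(0x7f001234));
    return 0;
}

Only pages that are actually touched ever get an entry; tracking every
mapping eagerly, as shadow paging had to in the pre-NPT/EPT days, pays
that cost for all of them up front.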

-- 
error compiling committee.c: too many arguments to function


