From: Jan Kiszka
Subject: [Qemu-devel] Re: [PATCH] remove smaller slots if registering a bigger one
Date: Wed, 11 Feb 2009 15:45:45 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

Glauber Costa wrote:
> On Wed, Feb 11, 2009 at 03:23:07PM +0100, Jan Kiszka wrote:
>> Glauber Costa wrote:
>>> It's like a shark eating a bunch of small fishes:
>>> in some situations (vga linear frame buffer mapping,
>>> for example), we need to register a new slot in place
>>> of older, smaller ones. This patch handles this case.
>>>
>>> Signed-off-by: Glauber Costa <address@hidden>
>>> ---
>>>  kvm-all.c |   10 ++++++++++
>>>  1 files changed, 10 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/kvm-all.c b/kvm-all.c
>>> index 9fb295c..53aca0a 100644
>>> --- a/kvm-all.c
>>> +++ b/kvm-all.c
>>> @@ -582,6 +582,16 @@ void kvm_set_phys_mem(target_phys_addr_t start_addr,
>>>                  kvm_set_phys_mem(mem_start, mem_size, mem_offset);
>>>  
>>>              return;
>>> +        } else if (start_addr <= mem->start_addr &&
>>> +                   (start_addr + size) >= (mem->start_addr +
>>> +                                           mem->memory_size)) {
>>> +            KVMSlot slot;
>>> +            /* unregister whole slot */
>>> +            memcpy(&slot, mem, sizeof(slot));
>>> +            mem->memory_size = 0;
>>> +            kvm_set_user_memory_region(s, mem);
>>> +
>>> +            kvm_set_phys_mem(start_addr, size, phys_offset);
>> That may solve some problems, but...
>>
>>>          } else {
>>>              printf("Registering overlapping slot\n");
>>>              abort();
>>             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> ...as long as this line exists, issues will remain. IIRC, the mapping
>> the i440 tries to re-establish after reboot will hit this case.
> Which is fine. I'd prefer it to be here, so we can analyse it case by case.
> The old memory code for kvm was totally messy, in part because we tried to
> hug the world at once, with some code paths that were almost never hit.
> 
> Slot management can easily get very complicated, and trying to come up
> with a solution that accounts for all problems at once may backfire on us.
> 

Well, then "fix" all users...

> 
>>
>> BTW, I found the unposted patch below in my attic, maybe you can comment
>> on it (if it makes sense, I'll properly repost it with a Signed-off-by).
> I don't believe it makes much sense.
> 
>> @@ -540,7 +540,7 @@ void kvm_set_phys_mem(target_phys_addr_t start_addr,
>>  
>>      mem = kvm_lookup_slot(s, start_addr);
>>      if (mem) {
>> -        if ((flags == IO_MEM_UNASSIGNED) || (flags >= TLB_MMIO)) {
>> +        if (flags >= IO_MEM_UNASSIGNED) {
>>              mem->memory_size = 0;
>>              mem->start_addr = start_addr;
>>              mem->phys_offset = 0;
> vga seems to be a heavy user of this kind of construct. We map some piece
> of memory as RAM, and later on, it becomes an mmio region again. In this
> case, we have to delete it from the kvm slot list.
> 
> Now, if you remember the last memory patches I sent, they actually removed
> this line. However, that is because in that alternative we were tracking
> mmio regions in qemu, but not in kvm, so flags >= TLB_MMIO would just
> delete it from the kernel mapping, while qemu would not forget about it.
> 
> This is to show that I believe it might be possible to handle mmio
> regions slightly differently, but I don't think just dropping this line
> would help much.
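
A minimal, self-contained sketch of the RAM-to-mmio transition described
above; MockSlot, lookup_slot and set_phys_mem here are simplified
stand-ins for illustration only, not QEMU's actual KVMSlot type or
kvm_set_phys_mem:

#include <inttypes.h>
#include <stdio.h>

enum { MAX_SLOTS = 8 };
enum { FLAG_RAM, FLAG_UNASSIGNED };   /* stand-ins for phys_offset flags */

typedef struct {
    uint64_t start_addr;
    uint64_t memory_size;             /* 0 means "slot unused" */
} MockSlot;

static MockSlot slots[MAX_SLOTS];

/* Find a registered slot covering addr, or NULL. */
static MockSlot *lookup_slot(uint64_t addr)
{
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (slots[i].memory_size && addr >= slots[i].start_addr &&
            addr < slots[i].start_addr + slots[i].memory_size) {
            return &slots[i];
        }
    }
    return NULL;
}

static void set_phys_mem(uint64_t start_addr, uint64_t size, int flags)
{
    MockSlot *mem = lookup_slot(start_addr);

    if (mem && flags != FLAG_RAM) {
        /* RAM -> mmio transition: forget the slot.  The real code would
         * also tell the kernel via the set-memory-region ioctl. */
        mem->memory_size = 0;
        printf("dropped slot at 0x%" PRIx64 "\n", start_addr);
        return;
    }
    if (!mem && flags == FLAG_RAM) {
        for (int i = 0; i < MAX_SLOTS; i++) {
            if (!slots[i].memory_size) {        /* pick a free slot */
                slots[i].start_addr = start_addr;
                slots[i].memory_size = size;
                printf("registered slot at 0x%" PRIx64 "\n", start_addr);
                return;
            }
        }
    }
}

int main(void)
{
    set_phys_mem(0xa0000, 0x20000, FLAG_RAM);        /* vga window as RAM */
    set_phys_mem(0xa0000, 0x20000, FLAG_UNASSIGNED); /* back to mmio      */
    return 0;
}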

What confused me most here - and still does - is that the TLB_* flags
make no semantic sense to me in the physical memory
registering/unregistering context. They should only show up in
CPUTLBEntry, no? But maybe I'm overlooking some strange relation.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux



