
Re: [Qemu-devel] [PATCH 05/11 v10] Add API to get memory mapping


From: Wen Congyang
Subject: Re: [Qemu-devel] [PATCH 05/11 v10] Add API to get memory mapping
Date: Mon, 26 Mar 2012 10:44:40 +0800
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100413 Fedora/3.0.4-2.fc13 Thunderbird/3.0.4

At 03/26/2012 10:31 AM, HATAYAMA Daisuke Wrote:
> From: Wen Congyang <address@hidden>
> Subject: Re: [PATCH 05/11 v10] Add API to get memory mapping
> Date: Mon, 26 Mar 2012 09:10:52 +0800
> 
>> At 03/23/2012 08:02 PM, HATAYAMA Daisuke Wrote:
>>> From: Wen Congyang <address@hidden>
>>> Subject: [PATCH 05/11 v10] Add API to get memory mapping
>>> Date: Tue, 20 Mar 2012 11:51:18 +0800
>>>
>>>> Add an API to get all virtual-to-physical address mappings.
>>>> If the guest doesn't use paging, the virtual address is equal to the
>>>> physical address. The virtual-to-physical mapping is for gdb's use,
>>>> and it does not include memory that is not referenced by the page
>>>> table. So if you want to use crash to analyze the vmcore, please do
>>>> not specify the -p option.
>>>> The reason why the -p option is not the default: a guest machine in a
>>>> catastrophic state can have corrupted memory, which we cannot trust.
>>>>
>>>> Signed-off-by: Wen Congyang <address@hidden>
>>>> ---
>>>>  memory_mapping.c |   34 ++++++++++++++++++++++++++++++++++
>>>>  memory_mapping.h |   15 +++++++++++++++
>>>>  2 files changed, 49 insertions(+), 0 deletions(-)
>>>>
>>>> diff --git a/memory_mapping.c b/memory_mapping.c
>>>> index 718f271..b92e2f6 100644
>>>> --- a/memory_mapping.c
>>>> +++ b/memory_mapping.c
>>>> @@ -164,3 +164,37 @@ void memory_mapping_list_init(MemoryMappingList *list)
>>>>      list->last_mapping = NULL;
>>>>      QTAILQ_INIT(&list->head);
>>>>  }
>>>> +
>>>> +#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
>>>> +int qemu_get_guest_memory_mapping(MemoryMappingList *list)
>>>> +{
>>>> +    CPUArchState *env;
>>>> +    RAMBlock *block;
>>>> +    ram_addr_t offset, length;
>>>> +    int ret;
>>>> +    bool paging_mode;
>>>> +
>>>> +    paging_mode = cpu_paging_enabled(first_cpu);
>>>> +    if (paging_mode) {
>>>
>>> On SMP with n CPUs, we can do this check at most n times.
>>>
>>> On Linux, user-mode tasks have different page tables. If referring to
>>> one page table, we can get the memory of one user-mode task only. To
>>> cover as much memory as possible, it's best to reference all CPUs
>>> with paging enabled and walk all the page tables.
>>>
>>> A problem is that linear addresses for user-mode tasks can inherently
>>> conflict: different user-mode tasks can have the same linear address.
>>> So, tools need to distinguish each PT_LOAD entry based on a pair of
>>> linear address and physical address, not the linear address only. I
>>> don't know whether gdb does this.
>>
>> gdb can only process kernel space. Jan's gdb-python script may be able
>> to process user-mode tasks, but we would need to get the user-mode
>> task's registers from the kernel or from a note, and convert the
>> virtual/linear address to a physical address.
>>
> 
> After I sent this, I came up with the problem of page table coherency:
> some page tables have not been updated yet, so we see older ones. So if we use

The page table is older? Do you mean the newest page table is in the TLB
and has not been flushed to memory?

> all the page tables referenced by all CPUs, we face inconsistency among
> some of the page tables. Essentially, we cannot avoid seeing a page
> table that is older than the actual one even if we use only one page
> table, but by restricting ourselves to just one page table, we can at
> least avoid inconsistency between multiple page tables. In other words,
> we can do the paging processing normally even though the table might be
> old.
> 
> So, I think
> - using page tables for all the CPUs at the same time is problematic.
> - using only one page table of the existing CPUs is still safe.
> 
> How about the code like this?
> 
>   cpu = find_cpu_paging_enabled(env);

If more than one CPU has paging enabled, which CPU should be chosen?
We cannot say that one is better than another.

>   if (cpu) {
>      /* paging processing based on the page table of the found cpu */
>   }
> 
> Note that I of course consider these on the assumption that there's no
> data corruption on the guest.

I know. If the data is corrupted, we should not trust the page table.

> 
>>>
>>>> +        for (env = first_cpu; env != NULL; env = env->next_cpu) {
>>>> +            ret = cpu_get_memory_mapping(list, env);
>>>> +            if (ret < 0) {
>>>> +                return -1;
>>>> +            }
>>>> +        }
>>>> +        return 0;
>>>> +    }
>>>> +
>>>> +    /*
>>>> +     * If the guest doesn't use paging, the virtual address is equal to 
>>>> physical
>>>> +     * address.
>>>> +     */
>>>
>>> IIRC, the ACPI sleep state goes into real mode. There might be other
>>> cases that can go into real mode. If execution enters this path in
>>> such a situation, linear addresses are meaningless. But this is a
>>> really rare case.
>>
>> I have not met such a case, and I do not know what I should do about
>> it in this patch now, so I am not changing it now.
>>
> 
> Yes, we cannot see this, because we are outside the guest kernel as
> long as the guest tells us so. But writing a memo about this somewhere
> would be necessary for the case where paging doesn't work well.

OK.

Thanks
Wen Congyang

> 
> Thanks.
> HATAYAMA, Daisuke
> 
> 



