From: Aneesh Kumar K.V
Subject: Re: [Qemu-ppc] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled
Date: Tue, 01 Oct 2013 06:57:51 +0530
User-agent: Notmuch/0.16+84~gcbbf53e (http://notmuchmail.org) Emacs/24.2.1 (x86_64-pc-linux-gnu)
Alexander Graf <address@hidden> writes:
> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <address@hidden>
>>
>> With kvm enabled, we store the hash page table information in the hypervisor.
>> Use ioctl to read the htab contents. Without this we get the below error when
>> trying to read the guest address
>>
>> (gdb) x/10 do_fork
>> 0xc000000000098660 <do_fork>: Cannot access memory at address 0xc000000000098660
>> (gdb)
>>
>> Signed-off-by: Aneesh Kumar K.V <address@hidden>
>> ---
>>  target-ppc/kvm.c        | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
>>  target-ppc/kvm_ppc.h    | 12 +++++++++-
>>  target-ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++-------------------
>>  3 files changed, 104 insertions(+), 24 deletions(-)
>>
>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>> index 1838465..05b066c 100644
>> --- a/target-ppc/kvm.c
>> +++ b/target-ppc/kvm.c
>> @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr)
>> void kvm_arch_init_irq_routing(KVMState *s)
>> {
>> }
>> +
>> +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
>> +                                 bool secondary, target_ulong ptem,
>> +                                 target_ulong *hpte0, target_ulong *hpte1)
>> +{
>> +    int htab_fd;
>> +    uint64_t index;
>> +    hwaddr pte_offset;
>> +    target_ulong pte0, pte1;
>> +    struct kvm_get_htab_fd ghf;
>> +    struct kvm_get_htab_buf {
>> +        struct kvm_get_htab_header header;
>> +        /*
>> +         * Older kernel required one extra byte.
>
> Older than what?
>
>> +         */
Since we decided to drop that kernel patch, the comment should be updated
to read "kernel requires one extra byte".
>> +        unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
>> +    } hpte_buf;
>> +
>> +    index = (hash * HPTES_PER_GROUP) & cpu->env.htab_mask;
>> +    *hpte0 = 0;
>> +    *hpte1 = 0;
>> +    if (!cap_htab_fd) {
>> +        return 0;
>> +    }
>> +
.....
>>
>> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off,
>> +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
>>                                       bool secondary, target_ulong ptem,
>>                                       ppc_hash_pte64_t *pte)
>>  {
>> -    hwaddr pte_offset = pteg_off;
>> +    hwaddr pte_offset;
>>      target_ulong pte0, pte1;
>> -    int i;
>> -
>> -    for (i = 0; i < HPTES_PER_GROUP; i++) {
>> -        pte0 = ppc_hash64_load_hpte0(env, pte_offset);
>> -        pte1 = ppc_hash64_load_hpte1(env, pte_offset);
>> -
>> -        if ((pte0 & HPTE64_V_VALID)
>> -            && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
>> -            && HPTE64_V_COMPARE(pte0, ptem)) {
>> -            pte->pte0 = pte0;
>> -            pte->pte1 = pte1;
>> -            return pte_offset;
>> +    int i, ret = 0;
>> +
>> +    if (kvm_enabled()) {
>> +        ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash,
>> +                                        secondary, ptem,
>> +                                        &pte->pte0, &pte->pte1);
>
> Instead of duplicating the search, couldn't you just hook yourself into
> ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just
> cache the current pteg to ensure things don't become dog slow.
>
Can you explain this in more detail?
-aneesh