From: Alexander Graf
Subject: [Qemu-devel] Re: [PATCH] PPC: Get MMU state on register sync
Date: Tue, 24 Nov 2009 20:10:41 +0100

On 24.11.2009, at 20:03, Jan Kiszka wrote:

> Alexander Graf wrote:
>> On 24.11.2009, at 19:49, Jan Kiszka wrote:
>> 
>>> Alexander Graf wrote:
>>>> On 24.11.2009, at 19:33, Jan Kiszka wrote:
>>>> 
>>>>> Alexander Graf wrote:
>>>>>> On 24.11.2009, at 19:12, Jan Kiszka wrote:
>>>>>> 
>>>>>>> Alexander Graf wrote:
>>>>>>>> On 24.11.2009, at 19:01, Jan Kiszka wrote:
>>>>>>>> 
>>>>>>>>> Alexander Graf wrote:
>>>>>>>>>> While x86 only needs to sync cr0-4 to know all about its MMU state 
>>>>>>>>>> and enable
>>>>>>>>>> qemu to resolve virtual to physical addresses, we need to sync all 
>>>>>>>>>> of the
>>>>>>>>>> segment registers on PPC to know which mapping we're in.
>>>>>>>>>> 
>>>>>>>>>> So let's grab the segment register contents to be able to use the 
>>>>>>>>>> "x" monitor
>>>>>>>>>> command and also enable the gdbstub to resolve virtual addresses.
>>>>>>>>>> 
>>>>>>>>>> I sent the corresponding KVM patch to the KVM ML some minutes ago.
>>>>>>>>>> 
>>>>>>>>>> Signed-off-by: Alexander Graf <address@hidden>
>>>>>>>>>> ---
>>>>>>>>>> target-ppc/kvm.c |   30 ++++++++++++++++++++++++++++++
>>>>>>>>>> 1 files changed, 30 insertions(+), 0 deletions(-)
>>>>>>>>>> 
>>>>>>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>>>>>>>> index 4e1c65f..566513f 100644
>>>>>>>>>> --- a/target-ppc/kvm.c
>>>>>>>>>> +++ b/target-ppc/kvm.c
>>>>>>>>>> @@ -98,12 +98,17 @@ int kvm_arch_put_registers(CPUState *env)
>>>>>>>>>>  int kvm_arch_get_registers(CPUState *env)
>>>>>>>>>>  {
>>>>>>>>>>      struct kvm_regs regs;
>>>>>>>>>> +    struct kvm_sregs sregs;
>>>>>>>>>>      uint32_t i, ret;
>>>>>>>>>> 
>>>>>>>>>>      ret = kvm_vcpu_ioctl(env, KVM_GET_REGS, &regs);
>>>>>>>>>>      if (ret < 0)
>>>>>>>>>>          return ret;
>>>>>>>>>> 
>>>>>>>>>> +    ret = kvm_vcpu_ioctl(env, KVM_GET_SREGS, &sregs);
>>>>>>>>>> +    if (ret < 0)
>>>>>>>>>> +        return ret;
>>>>>>>>>> +
>>>>>>>>>>      env->ctr = regs.ctr;
>>>>>>>>>>      env->lr = regs.lr;
>>>>>>>>>>      env->xer = regs.xer;
>>>>>>>>>> @@ -125,6 +130,31 @@ int kvm_arch_get_registers(CPUState *env)
>>>>>>>>>>      for (i = 0;i < 32; i++)
>>>>>>>>>>          env->gpr[i] = regs.gpr[i];
>>>>>>>>>> 
>>>>>>>>>> +#ifdef KVM_CAP_PPC_SEGSTATE
>>>>>>>>>> +    if (kvm_check_extension(env->kvm_state, KVM_CAP_PPC_SEGSTATE)) {
>>>>>>>>>> +        env->sdr1 = sregs.sdr1;
>>>>>>>>>> +    
>>>>>>>>>> +        /* Sync SLB */
>>>>>>>>>> +        for (i = 0; i < 64; i++) {
>>>>>>>>>> +            ppc_store_slb(env, sregs.ppc64.slb[i].slbe,
>>>>>>>>>> +                               sregs.ppc64.slb[i].slbv);
>>>>>>>>>> +        }
>>>>>>>>>> +    
>>>>>>>>>> +        /* Sync SRs */
>>>>>>>>>> +        for (i = 0; i < 16; i++) {
>>>>>>>>>> +            env->sr[i] = sregs.ppc32.sr[i];
>>>>>>>>>> +        }
>>>>>>>>>> +    
>>>>>>>>>> +        /* Sync BATs */
>>>>>>>>>> +        for (i = 0; i < 8; i++) {
>>>>>>>>>> +            env->DBAT[0][i] = sregs.ppc32.dbat[i] & 0xffffffff;
>>>>>>>>>> +            env->DBAT[1][i] = sregs.ppc32.dbat[i] >> 32;
>>>>>>>>>> +            env->IBAT[0][i] = sregs.ppc32.ibat[i] & 0xffffffff;
>>>>>>>>>> +            env->IBAT[1][i] = sregs.ppc32.ibat[i] >> 32;
>>>>>>>>>> +        }
>>>>>>>>>> +    }
>>>>>>>>>> +#endif
>>>>>>>>>> +
>>>>>>>>>>      return 0;
>>>>>>>>>>  }
>>>>>>>>>> 
>>>>>>>>> What about KVM_SET_SREGS in kvm_arch_put_registers? E.g. to play back
>>>>>>>>> potential changes to those special registers that someone made via gdb?
>>>>>>>> I don't think you can actually change the segment values. At least I 
>>>>>>>> can't imagine why.
>>>>>>> Dunno about PPC in this regard and how much value it has, but we have
>>>>>>> segment register access via gdb for x86.
>>>>>> The segments here are more like the PML4 on x86.
>>>>> Even that will be settable one day (gdb just does not yet know much about
>>>>> x86 system management registers).
>>>>> 
>>>>>>>> I definitely will implement SET_SREGS as soon as your sync split is 
>>>>>>>> in, as that's IMHO only really required on migration.
>>>>>>>> 
>>>>>>> Migration is, of course, the major use case.
>>>>>>> 
>>>>>>> Still, I wonder why not make this API symmetric while we're already
>>>>>>> touching it.
>>>>>> I was afraid to introduce performance regressions - setting the segments 
>>>>>> means flushing the complete shadow MMU.
>>>>>> 
>>>>> Unless it costs milliseconds, it's not really critical, given how often
>>>>> registers are synchronized.
>>>>> 
>>>>> BTW, I noticed that ppc only syncs the SREGS once on init, not on reset
>>>>> - are they static?
>>>> So far, SREGS are only used for setting the PVR (cpuid in x86 speak). 
>>>> There's no need to reset that on reset :-).
>>> Then I don't get why you need to re-read them during runtime - user
>>> space should know the state and should be able to push it into the CPUState
>>> on init.
>> 
>> Eeh. The SREGS contain:
>> 
>> - PVR
>> - Segment register contents
>> - BATs (another MMU thing for linear direct mapping)
>> 
>> On init we send SREGS to set the PVR. Later, on sync, we read SREGS to get the 
>> segment registers.
>> 
>> You think it would have been better to create a new ioctl?
> 
> No, but I think you might miss a proper reset of some SREGS elements
> when the VM goes through reset. If those states may change during guest
> runtime, a hard reset should send them back into their hard reset state.
> 
> You can do this by adding yet another extraordinary SET_SREGS to the
> reset callback or - that was my original point - by symmetrically adding
> GET_SREGS and SET_SREGS to the register state sync.

At least with the firmware we have now, we start in real mode anyway, then set 
all 16 SRs (or 16 fake SLB entries) and then go into paged mode.

So we're safe for now. As I said, I will implement SET_SREGS in the sync once 
your split is done and I can limit sregs writes to reset.
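
A rough sketch of what that symmetric put path could look like, mirroring the
get path above (untested; the SLB write-back is left out, it reuses the i/ret
locals of kvm_arch_put_registers, and the exact field usage is an assumption,
not the final patch):

#ifdef KVM_CAP_PPC_SEGSTATE
    if (kvm_check_extension(env->kvm_state, KVM_CAP_PPC_SEGSTATE)) {
        struct kvm_sregs sregs;

        memset(&sregs, 0, sizeof(sregs));

        /* Keep the PVR that was already pushed on vcpu init */
        sregs.pvr = env->spr[SPR_PVR];
        sregs.sdr1 = env->sdr1;

        /* Sync SRs */
        for (i = 0; i < 16; i++) {
            sregs.ppc32.sr[i] = env->sr[i];
        }

        /* Sync BATs: recombine the halves exactly as they were split on get */
        for (i = 0; i < 8; i++) {
            sregs.ppc32.dbat[i] = ((uint64_t)env->DBAT[1][i] << 32) | env->DBAT[0][i];
            sregs.ppc32.ibat[i] = ((uint64_t)env->IBAT[1][i] << 32) | env->IBAT[0][i];
        }

        ret = kvm_vcpu_ioctl(env, KVM_SET_SREGS, &sregs);
        if (ret < 0)
            return ret;
    }
#endif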

Alex
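
For illustration of what the patch description refers to: with the segment
state synced, the monitor's "x" command and the gdbstub resolve guest-virtual
addresses through the PPC MMU. A hypothetical session (the address is made up,
e.g. a kernel-virtual mapping):

    (qemu) x /4wx 0xc0000000

and the same address can be examined from gdb ("x/4xw 0xc0000000") once it is
attached to the gdbstub.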


