Re: [Qemu-devel] [PATCH] SMI handler should set the CPL to zero and save and restore it on rsm.


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH] SMI handler should set the CPL to zero and save and restore it on rsm.
Date: Sun, 27 Apr 2014 15:22:46 +0300

On Sat, 2014-04-26 at 11:06 +0200, Paolo Bonzini wrote:
> On 25/04/2014 19:17, Kevin O'Connor wrote:
> > The current SMI interrupt handler is being run with the same CPL as
> > the code it interrupts.  If the existing code is running with CPL=3,
> > then the SMI handler can cause spurious exceptions.  The System
> > Management Mode (SMM) should always run at the highest protection
> > level.
> 
> KVM computes the CPL as follows:
> 
> if (CR0.PE == 0)
>    return 0;
> 
> if (!EFER.LMA && EFLAGS.VM)
>    return 3;
> 
> return CS.selector & 3;

Hi Paolo,

The above algorithm is correct only for protected mode, right?
For real-address mode it is not correct (this comes from the Intel
Dev Manual, not from my limited knowledge).
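
For reference, that algorithm in QEMU terms would be something like
this (an untested sketch on my side; cpl_from_cs() is a made-up name,
the macros and fields are the ones from target-i386/cpu.h):

    /* Sketch of the CPL computation described above. */
    static int cpl_from_cs(CPUX86State *env)
    {
        if (!(env->cr[0] & CR0_PE_MASK)) {
            return 0;                        /* real-address mode */
        }
        if (!(env->efer & MSR_EFER_LMA) && (env->eflags & VM_MASK)) {
            return 3;                        /* virtual-8086 mode */
        }
        return env->segs[R_CS].selector & 3; /* RPL bits of CS */
    }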
Why don't we use the value of the DPL field from SS, which is always
equal to the logical processor's CPL?
Of course, the processor is only out of protected mode for a short
period of time, but during that time it is possible that the CS
segment selector changes, and the CPL with it...
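
Something like this instead (again an untested sketch; cpl_from_ss()
is a made-up name):

    /* SS.DPL always equals the CPL per the Intel SDM, even while the
     * selector registers are in flux during a mode switch. */
    static int cpl_from_ss(CPUX86State *env)
    {
        return (env->segs[R_SS].flags & DESC_DPL_MASK) >> DESC_DPL_SHIFT;
    }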

Any thoughts? Does it make sense to change the way KVM computes the CPL?

Thanks,
Marcel

> 
> Perhaps this can be used instead of save/restore of the CPL field.
> 
> Paolo
> 
> > Signed-off-by: Kevin O'Connor <address@hidden>
> > ---
> >
> > I'm not very familiar with the QEMU TCG code, so it is possible there
> > is a better way to accomplish this.  I can confirm that without this
> > patch an extended SeaBIOS SMI handler can cause spurious faults, but
> > it works correctly with this patch.
> >
> > ---
> >  target-i386/smm_helper.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/target-i386/smm_helper.c b/target-i386/smm_helper.c
> > index 35901c9..ad5abf2 100644
> > --- a/target-i386/smm_helper.c
> > +++ b/target-i386/smm_helper.c
> > @@ -66,6 +66,7 @@ void do_smm_enter(X86CPU *cpu)
> >          stq_phys(cs->as, sm_state + offset + 8, dt->base);
> >      }
> >
> > +    stw_phys(cs->as, sm_state + 0x7e62, env->hflags & HF_CPL_MASK);
> >      stq_phys(cs->as, sm_state + 0x7e68, env->gdt.base);
> >      stl_phys(cs->as, sm_state + 0x7e64, env->gdt.limit);
> >
> > @@ -134,6 +135,7 @@ void do_smm_enter(X86CPU *cpu)
> >
> >      stl_phys(cs->as, sm_state + 0x7f74, env->gdt.base);
> >      stl_phys(cs->as, sm_state + 0x7f70, env->gdt.limit);
> > +    stw_phys(cs->as, sm_state + 0x7f6e, env->hflags & HF_CPL_MASK);
> >
> >      stl_phys(cs->as, sm_state + 0x7f58, env->idt.base);
> >      stl_phys(cs->as, sm_state + 0x7f54, env->idt.limit);
> > @@ -163,6 +165,7 @@ void do_smm_enter(X86CPU *cpu)
> >      cpu_load_eflags(env, 0, ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C |
> >                                DF_MASK));
> >      env->eip = 0x00008000;
> > +    cpu_x86_set_cpl(env, 0);
> >      cpu_x86_load_seg_cache(env, R_CS, (env->smbase >> 4) & 0xffff, env->smbase,
> >                             0xffffffff, 0);
> >      cpu_x86_load_seg_cache(env, R_DS, 0, 0, 0xffffffff, 0);
> > @@ -201,6 +204,7 @@ void helper_rsm(CPUX86State *env)
> >                                  0xf0ff) << 8);
> >      }
> >
> > +    cpu_x86_set_cpl(env, lduw_phys(cs->as, sm_state + 0x7e62) & HF_CPL_MASK);
> >      env->gdt.base = ldq_phys(cs->as, sm_state + 0x7e68);
> >      env->gdt.limit = ldl_phys(cs->as, sm_state + 0x7e64);
> >
> > @@ -271,6 +275,7 @@ void helper_rsm(CPUX86State *env)
> >
> >      env->gdt.base = ldl_phys(cs->as, sm_state + 0x7f74);
> >      env->gdt.limit = ldl_phys(cs->as, sm_state + 0x7f70);
> > +    cpu_x86_set_cpl(env, lduw_phys(cs->as, sm_state + 0x7f6e) & HF_CPL_MASK);
> >
> >      env->idt.base = ldl_phys(cs->as, sm_state + 0x7f58);
> >      env->idt.limit = ldl_phys(cs->as, sm_state + 0x7f54);
> >
> 