From: Kevin O'Connor
Subject: Re: [Qemu-devel] [PATCH] SMI handler should set the CPL to zero and save and restore it on rsm.
Date: Sun, 27 Apr 2014 12:54:16 -0400
User-agent: Mutt/1.5.21 (2010-09-15)

On Sun, Apr 27, 2014 at 08:10:39AM +0200, Paolo Bonzini wrote:
> Il 26/04/2014 21:36, Kevin O'Connor ha scritto:
> >Yes, I was thinking of something like that as well.  If QEMU
> >internally observes the formula above, then something like the patch
> >below should work instead of my original patch.
> >
> >However, I'm not an expert on QEMU TCG and the patch below would
> >require much more testing.
> 
> Yeah, the patch is obviously more complex.  On the other hand as you
> point out the code to set hflags was already relying on correct
> eflags as a precondition.
> 
> >(As a side note, the patch below also
> >obviates the need to save/restore cpl in the svm code, but I left
> >saving of cpl in the "hsave page" in case that is part of some
> >standard.)
> 
> Yes, the hsave format is part of the AMD virtualization extensions,
> and KVM for one relies on the CPL field.

Thanks for reviewing.

I looked up the AMD hsave stuff, and it looks like the format is
processor dependent, so we could change the layout.  However, if KVM
uses this code and wants the CPL, then it's best to leave it there.

> >+#if HF_CPL_MASK != 3
> >+#error HF_CPL_MASK is hardcoded
> >+#endif
> >+            int cpl = selector & 3;
> > #ifdef TARGET_X86_64
> >             if ((env->hflags & HF_LMA_MASK) && (flags & DESC_L_MASK)) {
> >                 /* long mode */
> >-                env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK;
> >-                env->hflags &= ~(HF_ADDSEG_MASK);
> >+                env->hflags &= ~(HF_ADDSEG_MASK | HF_CPL_MASK);
> >+                env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK | cpl;
> >             } else
> > #endif
> >             {
> >                 /* legacy / compatibility case */
> >+                if (!(env->cr[0] & CR0_PE_MASK))
> >+                    cpl = 0;
> >+                else if (env->eflags & VM_MASK)
> >+                    cpl = 3;
> >                 new_hflags = (env->segs[R_CS].flags & DESC_B_MASK)
> >                     >> (DESC_B_SHIFT - HF_CS32_SHIFT);
> >-                env->hflags = (env->hflags & ~(HF_CS32_MASK | HF_CS64_MASK)) |
> >-                    new_hflags;
> >+                env->hflags &= ~(HF_CS32_MASK | HF_CS64_MASK | HF_CPL_MASK);
> >+                env->hflags |= new_hflags | cpl;
> 
> Shouldn't the "env->hflags = (env->hflags & ~HF_CPL_MASK) | cpl" be
> out of the "if"?  (The "#if" could be placed there too).

We could do that, but I think it would add extra runtime instructions.
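
For illustration, the two variants look roughly like this (a simplified
sketch of the hflags update, not the exact code):

    /* Patch variant: fold HF_CPL_MASK into the mask each branch already
     * clears, so no extra statements are executed at runtime. */
    env->hflags &= ~(HF_CS32_MASK | HF_CS64_MASK | HF_CPL_MASK);
    env->hflags |= new_hflags | cpl;

    /* Hoisted variant: update the CPL bits once, outside the #ifdef/if,
     * at the cost of an extra read-modify-write of env->hflags. */
    env->hflags = (env->hflags & ~HF_CPL_MASK) | cpl;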


> >@@ -703,7 +700,8 @@ void helper_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
> >     cpu_load_eflags(env, ldq_phys(cs->as,
> >                                   env->vm_hsave + offsetof(struct vmcb,
> >                                                            save.rflags)),
> >-                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK));
> >+                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK |
> >+                      VM_MASK));
> 
> Also, VM_MASK is already cleared below:
> 
>     /* Forces CR0.PE = 1, RFLAGS.VM = 0. */
>     env->cr[0] |= CR0_PE_MASK;
>     env->eflags &= ~VM_MASK;
> 
> You need to move both of them above, not just env->eflags.  For
> CR0.PE, CR0_PE_MASK should be ORed to cpu_x86_update_cr0's argument.

CR0_PE_MASK is already ORed into cpu_x86_update_cr0() in
helper_vmexit().  The code block you cite above looked to be just
paranoia, so I left it in, but I can also remove it.
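
For reference, the call in helper_vmexit() that already forces PE looks
roughly like this (paraphrased from the svm_helper.c of that time, so
treat the exact expression as an approximation):

    /* Reload CR0 from the host save area; PE is ORed in because a
     * VMEXIT always returns to the host in protected mode. */
    cpu_x86_update_cr0(env, ldq_phys(cs->as,
                                     env->vm_hsave + offsetof(struct vmcb,
                                                              save.cr0)) |
                       CR0_PE_MASK);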


> >     CC_OP = CC_OP_EFLAGS;
> 
> This CC_OP assignment should be moved to cpu_load_eflags (unrelated bug).

I'm not familiar with CC_OP, so I'd prefer not to tangle with that in
this change.
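
(For context, the suggestion would roughly mean moving the assignment
into the inline helper in cpu.h, along the lines of the sketch below;
the body shown here is an approximation, using the env fields directly
rather than the CC_* shorthand macros.)

    static inline void cpu_load_eflags(CPUX86State *env, int eflags,
                                       int update_mask)
    {
        env->cc_src = eflags & (CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C);
        env->cc_op = CC_OP_EFLAGS;  /* proposed: set CC_OP here so every
                                       caller gets a consistent flags state */
        env->df = 1 - (2 * ((eflags >> 10) & 1));
        env->eflags = (env->eflags & ~update_mask) |
            (eflags & update_mask) | 0x2;
    }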


> Can you send v2 of the patch with the movements split across
> multiple patches (followed by a last patch to kill cpu_x86_set_cpl)?

Makes sense.

Thanks again,
-Kevin


