From: Kevin O'Connor
Subject: Re: [Qemu-devel] [PATCH] SMI handler should set the CPL to zero and save and restore it on rsm.
Date: Sat, 26 Apr 2014 15:36:27 -0400
User-agent: Mutt/1.5.21 (2010-09-15)

On Sat, Apr 26, 2014 at 11:06:53AM +0200, Paolo Bonzini wrote:
> On 25/04/2014 19:17, Kevin O'Connor wrote:
> >The current SMI interrupt handler is being run with the same CPL as
> >the code it interrupts.  If the existing code is running with CPL=3,
> >then the SMI handler can cause spurious exceptions.  The System
> >Management Mode (SMM) should always run at the highest protection
> >level.
> 
> KVM computes the CPL as follows:
> 
> if (CR0.PE == 0)
>   return 0;
> 
> if (!EFER.LMA && EFLAGS.VM)
>   return 3;
> 
> return CS.selector & 3;
> 
> Perhaps this can be used instead of save/restore of the CPL field.
[...]
> >I'm not very familiar with the QEMU TCG code, so it is possible there
> >is a better way to accomplish this.  I can confirm that without this
> >patch an extended SeaBIOS SMI handler can cause spurious faults, but
> >it works correctly with this patch.

Yes, I was thinking of something like that as well.  If QEMU
internally observes the formula above, then something like the patch
below should work instead of my original patch.
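
For reference, here is that formula as a standalone C sketch.  The
mask values below are the architectural bit positions; the names are
illustrative rather than QEMU's exact macros (QEMU spells them
CR0_PE_MASK, MSR_EFER_LMA and VM_MASK):

    #include <stdint.h>

    #define CR0_PE_MASK   (1u << 0)    /* CR0.PE: protection enable */
    #define EFER_LMA_MASK (1u << 10)   /* EFER.LMA: long mode active */
    #define VM_MASK       (1u << 17)   /* EFLAGS.VM: virtual-8086 mode */

    /* CPL as derived by the formula quoted above */
    static inline int compute_cpl(uint32_t cr0, uint64_t efer,
                                  uint32_t eflags, uint16_t cs_selector)
    {
        if (!(cr0 & CR0_PE_MASK)) {
            return 0;               /* real mode always runs at CPL 0 */
        }
        if (!(efer & EFER_LMA_MASK) && (eflags & VM_MASK)) {
            return 3;               /* vm86 mode always runs at CPL 3 */
        }
        return cs_selector & 3;     /* otherwise the RPL of CS */
    }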

However, I'm not an expert on QEMU TCG and the patch below would
require much more testing.  (As a side note, the patch below also
obviates the need to save/restore cpl in the svm code, but I left
saving of cpl in the "hsave page" in case that is part of some
standard.)
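
To make the ordering constraint concrete, here is a hypothetical
caller sketch (not code from the patch itself, though it mirrors the
reordering done in do_interrupt_protected() below): eflags/cr0 must
be updated before the CS load, because the CS load now recomputes the
CPL and the other hflags from them:

    /* 1: update eflags first ... */
    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
    /* 2: ... then load CS, which recomputes CPL from eflags/cr0 */
    cpu_x86_load_seg_cache(env, R_CS, selector,
                           get_seg_base(e1, e2),
                           get_seg_limit(e1, e2),
                           e2);
    /* env->hflags now holds the CPL implied by the new state */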

-Kevin


From e1f9ebc7154f82a9d7507cc0bd09aa02f209b7ed Mon Sep 17 00:00:00 2001
From: Kevin O'Connor <address@hidden>
Date: Sat, 26 Apr 2014 15:20:38 -0400
Subject: [PATCH] The x86 CPL is stored in CS.selector - auto update hflags
 accordingly.

Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS).  Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.

This fixes the SMM handler, which was not manually setting and
restoring the CPL.

A few places in the code have been reordered so that eflags/cr0
changes occur before calling cpu_x86_load_seg_cache(R_CS).  This is
done because the CPL depends on eflags/cr0.  However, this change
should be done anyway, because cpu_x86_load_seg_cache() already uses
eflags/cr0 to determine other hflags.

Signed-off-by: Kevin O'Connor <address@hidden>
---
 bsd-user/main.c          |  2 --
 linux-user/main.c        |  2 --
 target-i386/cpu.h        | 32 ++++++++++++++---------------
 target-i386/seg_helper.c | 53 +++++++++++++++++-------------------------------
 target-i386/smm_helper.c | 20 +++++++++---------
 target-i386/svm_helper.c |  7 ++-----
 6 files changed, 46 insertions(+), 70 deletions(-)

diff --git a/bsd-user/main.c b/bsd-user/main.c
index f81ba55..9f895b4 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -1003,8 +1003,6 @@ int main(int argc, char **argv)
     cpu->opaque = ts;
 
 #if defined(TARGET_I386)
-    cpu_x86_set_cpl(env, 3);
-
     env->cr[0] = CR0_PG_MASK | CR0_WP_MASK | CR0_PE_MASK;
     env->hflags |= HF_PE_MASK;
     if (env->features[FEAT_1_EDX] & CPUID_SSE) {
diff --git a/linux-user/main.c b/linux-user/main.c
index 947358a..9f7cfe4 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -4051,8 +4051,6 @@ int main(int argc, char **argv, char **envp)
 #endif
 
 #if defined(TARGET_I386)
-    cpu_x86_set_cpl(env, 3);
-
     env->cr[0] = CR0_PG_MASK | CR0_WP_MASK | CR0_PE_MASK;
     env->hflags |= HF_PE_MASK;
     if (env->features[FEAT_1_EDX] & CPUID_SSE) {
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 2a22a7d..6360794 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -124,9 +124,9 @@
 #define ID_MASK                 0x00200000
 
 /* hidden flags - used internally by qemu to represent additional cpu
-   states. Only the CPL, INHIBIT_IRQ, SMM and SVMI are not
-   redundant. We avoid using the IOPL_MASK, TF_MASK, VM_MASK and AC_MASK
-   bit positions to ease oring with eflags. */
+   states. Only the INHIBIT_IRQ, SMM and SVMI are not redundant. We
+   avoid using the IOPL_MASK, TF_MASK, VM_MASK and AC_MASK bit
+   positions to ease oring with eflags. */
 /* current cpl */
 #define HF_CPL_SHIFT         0
 /* true if soft mmu is being used */
@@ -974,19 +974,27 @@ static inline void cpu_x86_load_seg_cache(CPUX86State *env,
     /* update the hidden flags */
     {
         if (seg_reg == R_CS) {
+#if HF_CPL_MASK != 3
+#error HF_CPL_MASK is hardcoded
+#endif
+            int cpl = selector & 3;
 #ifdef TARGET_X86_64
             if ((env->hflags & HF_LMA_MASK) && (flags & DESC_L_MASK)) {
                 /* long mode */
-                env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK;
-                env->hflags &= ~(HF_ADDSEG_MASK);
+                env->hflags &= ~(HF_ADDSEG_MASK | HF_CPL_MASK);
+                env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK | cpl;
             } else
 #endif
             {
                 /* legacy / compatibility case */
+                if (!(env->cr[0] & CR0_PE_MASK))
+                    cpl = 0;
+                else if (env->eflags & VM_MASK)
+                    cpl = 3;
                 new_hflags = (env->segs[R_CS].flags & DESC_B_MASK)
                     >> (DESC_B_SHIFT - HF_CS32_SHIFT);
-                env->hflags = (env->hflags & ~(HF_CS32_MASK | HF_CS64_MASK)) |
-                    new_hflags;
+                env->hflags &= ~(HF_CS32_MASK | HF_CS64_MASK | HF_CPL_MASK);
+                env->hflags |= new_hflags | cpl;
             }
         }
         new_hflags = (env->segs[R_SS].flags & DESC_B_MASK)
@@ -1031,16 +1039,6 @@ int cpu_x86_get_descr_debug(CPUX86State *env, unsigned int selector,
                             target_ulong *base, unsigned int *limit,
                             unsigned int *flags);
 
-/* wrapper, just in case memory mappings must be changed */
-static inline void cpu_x86_set_cpl(CPUX86State *s, int cpl)
-{
-#if HF_CPL_MASK == 3
-    s->hflags = (s->hflags & ~HF_CPL_MASK) | cpl;
-#else
-#error HF_CPL_MASK is hardcoded
-#endif
-}
-
 /* op_helper.c */
 /* used for debug or cpu save/restore */
 void cpu_get_fp80(uint64_t *pmant, uint16_t *pexp, floatx80 f);
diff --git a/target-i386/seg_helper.c b/target-i386/seg_helper.c
index 8c3f92c..4f11dc4 100644
--- a/target-i386/seg_helper.c
+++ b/target-i386/seg_helper.c
@@ -409,11 +409,7 @@ static void switch_tss(CPUX86State *env, int tss_selector,
         for (i = 0; i < 6; i++) {
             load_seg_vm(env, i, new_segs[i]);
         }
-        /* in vm86, CPL is always 3 */
-        cpu_x86_set_cpl(env, 3);
     } else {
-        /* CPL is set the RPL of CS */
-        cpu_x86_set_cpl(env, new_segs[R_CS] & 3);
         /* first just selectors as the rest may trigger exceptions */
         for (i = 0; i < 6; i++) {
             cpu_x86_load_seg_cache(env, i, new_segs[i], 0, 0, 0);
@@ -752,19 +748,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
     }
     SET_ESP(esp, sp_mask);
 
+    /* interrupt gate clear IF mask */
+    if ((type & 1) == 0) {
+        env->eflags &= ~IF_MASK;
+    }
+    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
+
     selector = (selector & ~3) | dpl;
     cpu_x86_load_seg_cache(env, R_CS, selector,
                    get_seg_base(e1, e2),
                    get_seg_limit(e1, e2),
                    e2);
-    cpu_x86_set_cpl(env, dpl);
     env->eip = offset;
-
-    /* interrupt gate clear IF mask */
-    if ((type & 1) == 0) {
-        env->eflags &= ~IF_MASK;
-    }
-    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
 }
 
 #ifdef TARGET_X86_64
@@ -917,19 +912,18 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
     }
     env->regs[R_ESP] = esp;
 
+    /* interrupt gate clear IF mask */
+    if ((type & 1) == 0) {
+        env->eflags &= ~IF_MASK;
+    }
+    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
+
     selector = (selector & ~3) | dpl;
     cpu_x86_load_seg_cache(env, R_CS, selector,
                    get_seg_base(e1, e2),
                    get_seg_limit(e1, e2),
                    e2);
-    cpu_x86_set_cpl(env, dpl);
     env->eip = offset;
-
-    /* interrupt gate clear IF mask */
-    if ((type & 1) == 0) {
-        env->eflags &= ~IF_MASK;
-    }
-    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
 }
 #endif
 
@@ -960,7 +954,8 @@ void helper_syscall(CPUX86State *env, int next_eip_addend)
 
         code64 = env->hflags & HF_CS64_MASK;
 
-        cpu_x86_set_cpl(env, 0);
+        env->eflags &= ~env->fmask;
+        cpu_load_eflags(env, env->eflags, 0);
         cpu_x86_load_seg_cache(env, R_CS, selector & 0xfffc,
                            0, 0xffffffff,
                                DESC_G_MASK | DESC_P_MASK |
@@ -972,8 +967,6 @@ void helper_syscall(CPUX86State *env, int next_eip_addend)
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK |
                                DESC_W_MASK | DESC_A_MASK);
-        env->eflags &= ~env->fmask;
-        cpu_load_eflags(env, env->eflags, 0);
         if (code64) {
             env->eip = env->lstar;
         } else {
@@ -982,7 +975,7 @@ void helper_syscall(CPUX86State *env, int next_eip_addend)
     } else {
         env->regs[R_ECX] = (uint32_t)(env->eip + next_eip_addend);
 
-        cpu_x86_set_cpl(env, 0);
+        env->eflags &= ~(IF_MASK | RF_MASK | VM_MASK);
         cpu_x86_load_seg_cache(env, R_CS, selector & 0xfffc,
                            0, 0xffffffff,
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
@@ -993,7 +986,6 @@ void helper_syscall(CPUX86State *env, int next_eip_addend)
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK |
                                DESC_W_MASK | DESC_A_MASK);
-        env->eflags &= ~(IF_MASK | RF_MASK | VM_MASK);
         env->eip = (uint32_t)env->star;
     }
 }
@@ -1014,6 +1006,9 @@ void helper_sysret(CPUX86State *env, int dflag)
     }
     selector = (env->star >> 48) & 0xffff;
     if (env->hflags & HF_LMA_MASK) {
+        cpu_load_eflags(env, (uint32_t)(env->regs[11]), TF_MASK | AC_MASK
+                        | ID_MASK | IF_MASK | IOPL_MASK | VM_MASK | RF_MASK |
+                        NT_MASK);
         if (dflag == 2) {
             cpu_x86_load_seg_cache(env, R_CS, (selector + 16) | 3,
                                    0, 0xffffffff,
@@ -1035,11 +1030,8 @@ void helper_sysret(CPUX86State *env, int dflag)
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK | (3 << DESC_DPL_SHIFT) |
                                DESC_W_MASK | DESC_A_MASK);
-        cpu_load_eflags(env, (uint32_t)(env->regs[11]), TF_MASK | AC_MASK
-                        | ID_MASK | IF_MASK | IOPL_MASK | VM_MASK | RF_MASK |
-                        NT_MASK);
-        cpu_x86_set_cpl(env, 3);
     } else {
+        env->eflags |= IF_MASK;
         cpu_x86_load_seg_cache(env, R_CS, selector | 3,
                                0, 0xffffffff,
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
@@ -1051,8 +1043,6 @@ void helper_sysret(CPUX86State *env, int dflag)
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK | (3 << DESC_DPL_SHIFT) |
                                DESC_W_MASK | DESC_A_MASK);
-        env->eflags |= IF_MASK;
-        cpu_x86_set_cpl(env, 3);
     }
 }
 #endif
@@ -1905,7 +1895,6 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
                        get_seg_base(e1, e2),
                        get_seg_limit(e1, e2),
                        e2);
-        cpu_x86_set_cpl(env, dpl);
         SET_ESP(sp, sp_mask);
         env->eip = offset;
     }
@@ -2134,7 +2123,6 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
                        get_seg_base(e1, e2),
                        get_seg_limit(e1, e2),
                        e2);
-        cpu_x86_set_cpl(env, rpl);
         sp = new_esp;
 #ifdef TARGET_X86_64
         if (env->hflags & HF_CS64_MASK) {
@@ -2185,7 +2173,6 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
                     IF_MASK | IOPL_MASK | VM_MASK | NT_MASK | VIF_MASK |
                     VIP_MASK);
     load_seg_vm(env, R_CS, new_cs & 0xffff);
-    cpu_x86_set_cpl(env, 3);
     load_seg_vm(env, R_SS, new_ss & 0xffff);
     load_seg_vm(env, R_ES, new_es & 0xffff);
     load_seg_vm(env, R_DS, new_ds & 0xffff);
@@ -2238,7 +2225,6 @@ void helper_sysenter(CPUX86State *env)
         raise_exception_err(env, EXCP0D_GPF, 0);
     }
     env->eflags &= ~(VM_MASK | IF_MASK | RF_MASK);
-    cpu_x86_set_cpl(env, 0);
 
 #ifdef TARGET_X86_64
     if (env->hflags & HF_LMA_MASK) {
@@ -2274,7 +2260,6 @@ void helper_sysexit(CPUX86State *env, int dflag)
     if (env->sysenter_cs == 0 || cpl != 0) {
         raise_exception_err(env, EXCP0D_GPF, 0);
     }
-    cpu_x86_set_cpl(env, 3);
 #ifdef TARGET_X86_64
     if (dflag == 2) {
         cpu_x86_load_seg_cache(env, R_CS, ((env->sysenter_cs + 32) & 0xfffc) |
diff --git a/target-i386/smm_helper.c b/target-i386/smm_helper.c
index 35901c9..3e94391 100644
--- a/target-i386/smm_helper.c
+++ b/target-i386/smm_helper.c
@@ -191,16 +191,6 @@ void helper_rsm(CPUX86State *env)
 #ifdef TARGET_X86_64
     cpu_load_efer(env, ldq_phys(cs->as, sm_state + 0x7ed0));
 
-    for (i = 0; i < 6; i++) {
-        offset = 0x7e00 + i * 16;
-        cpu_x86_load_seg_cache(env, i,
-                               lduw_phys(cs->as, sm_state + offset),
-                               ldq_phys(cs->as, sm_state + offset + 8),
-                               ldl_phys(cs->as, sm_state + offset + 4),
-                               (lduw_phys(cs->as, sm_state + offset + 2) &
-                                0xf0ff) << 8);
-    }
-
     env->gdt.base = ldq_phys(cs->as, sm_state + 0x7e68);
     env->gdt.limit = ldl_phys(cs->as, sm_state + 0x7e64);
 
@@ -238,6 +228,16 @@ void helper_rsm(CPUX86State *env)
     cpu_x86_update_cr3(env, ldl_phys(cs->as, sm_state + 0x7f50));
     cpu_x86_update_cr0(env, ldl_phys(cs->as, sm_state + 0x7f58));
 
+    for (i = 0; i < 6; i++) {
+        offset = 0x7e00 + i * 16;
+        cpu_x86_load_seg_cache(env, i,
+                               lduw_phys(cs->as, sm_state + offset),
+                               ldq_phys(cs->as, sm_state + offset + 8),
+                               ldl_phys(cs->as, sm_state + offset + 4),
+                               (lduw_phys(cs->as, sm_state + offset + 2) &
+                                0xf0ff) << 8);
+    }
+
     val = ldl_phys(cs->as, sm_state + 0x7efc); /* revision ID */
     if (val & 0x20000) {
         env->smbase = ldl_phys(cs->as, sm_state + 0x7f00) & ~0x7fff;
diff --git a/target-i386/svm_helper.c b/target-i386/svm_helper.c
index aa17ecd..02f6b34 100644
--- a/target-i386/svm_helper.c
+++ b/target-i386/svm_helper.c
@@ -282,9 +282,6 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
                           env->vm_vmcb + offsetof(struct vmcb, save.dr7));
     env->dr[6] = ldq_phys(cs->as,
                           env->vm_vmcb + offsetof(struct vmcb, save.dr6));
-    cpu_x86_set_cpl(env, ldub_phys(cs->as,
-                                   env->vm_vmcb + offsetof(struct vmcb,
-                                                           save.cpl)));
 
     /* FIXME: guest state consistency checks */
 
@@ -703,7 +700,8 @@ void helper_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
     cpu_load_eflags(env, ldq_phys(cs->as,
                                   env->vm_hsave + offsetof(struct vmcb,
                                                            save.rflags)),
-                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK));
+                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK |
+                      VM_MASK));
     CC_OP = CC_OP_EFLAGS;
 
     svm_load_seg_cache(env, env->vm_hsave + offsetof(struct vmcb, save.es),
@@ -728,7 +726,6 @@ void helper_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
                           env->vm_hsave + offsetof(struct vmcb, save.dr7));
 
     /* other setups */
-    cpu_x86_set_cpl(env, 0);
     stq_phys(cs->as, env->vm_vmcb + offsetof(struct vmcb, control.exit_code),
              exit_code);
     stq_phys(cs->as, env->vm_vmcb + offsetof(struct vmcb, control.exit_info_1),
-- 
1.9.0
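
With cpu_x86_set_cpl() gone, code that needs the CPL simply reads it
back out of hflags, along these lines (a sketch, not a helper added
by the patch):

    /* HF_CPL_SHIFT is 0, so the CPL is the low two bits of hflags */
    int cpl = env->hflags & HF_CPL_MASK;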



