Re: [PATCH 09/10] target/i386/tcg: use X86Access for TSS access
From: Paolo Bonzini
Subject: Re: [PATCH 09/10] target/i386/tcg: use X86Access for TSS access
Date: Thu, 11 Jul 2024 08:28:39 +0200
User-agent: Mozilla Thunderbird
On 7/10/24 20:40, Paolo Bonzini wrote:
On Wed, Jul 10, 2024, 18:47 Richard Henderson <richard.henderson@linaro.org> wrote:
On 7/9/24 23:29, Paolo Bonzini wrote:
> This takes care of probing the vaddr range in advance, and is also faster
> because it avoids repeated TLB lookups. It also matches the Intel manual
> better, as it says "Checks that the current (old) TSS, new TSS, and all
> segment descriptors used in the task switch are paged into system memory";
> note however that it's not clear how the processor checks for segment
> descriptors, and this check is not included in the AMD manual.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  target/i386/tcg/seg_helper.c | 101 ++++++++++++++++++-----------------
>  1 file changed, 51 insertions(+), 50 deletions(-)
>
> diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
> index 25af9d4a4ec..77f2c65c3cf 100644
> --- a/target/i386/tcg/seg_helper.c
> +++ b/target/i386/tcg/seg_helper.c
> @@ -311,35 +313,44 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector,
>          raise_exception_err_ra(env, EXCP0A_TSS, tss_selector & 0xfffc, retaddr);
>      }
>
> +    /* X86Access avoids memory exceptions during the task switch */
> +    access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max,
> +                       MMU_DATA_STORE, cpu_mmu_index_kernel(env), retaddr);
> +
> +    if (source == SWITCH_TSS_CALL) {
> +        /* Probe for future write of parent task */
> +        probe_access(env, tss_base, 2, MMU_DATA_STORE,
> +                     cpu_mmu_index_kernel(env), retaddr);
> +    }
> +    access_prepare_mmu(&new, env, tss_base, tss_limit,
> +                       MMU_DATA_LOAD, cpu_mmu_index_kernel(env), retaddr);
You're computing cpu_mmu_index_kernel 3 times.
Squashing this in (easier to review than the whole thing):
diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index 4123ff1245e..4edfd26135f 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -321,7 +321,7 @@ static void switch_tss_ra(CPUX86State *env, int tss_selector,
     uint32_t new_eflags, new_eip, new_cr3, new_ldt, new_trap;
     uint32_t old_eflags, eflags_mask;
     SegmentCache *dt;
-    int index;
+    int mmu_index, index;
     target_ulong ptr;
     X86Access old, new;

@@ -378,16 +378,17 @@ static void switch_tss_ra(CPUX86State *env, int tss_selector,
     }

     /* X86Access avoids memory exceptions during the task switch */
+    mmu_index = cpu_mmu_index_kernel(env);
     access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max,
-                       MMU_DATA_STORE, cpu_mmu_index_kernel(env), retaddr);
+                       MMU_DATA_STORE, mmu_index, retaddr);

     if (source == SWITCH_TSS_CALL) {
         /* Probe for future write of parent task */
         probe_access(env, tss_base, 2, MMU_DATA_STORE,
-                     cpu_mmu_index_kernel(env), retaddr);
+                     mmu_index, retaddr);
     }
     access_prepare_mmu(&new, env, tss_base, tss_limit,
-                       MMU_DATA_LOAD, cpu_mmu_index_kernel(env), retaddr);
+                       MMU_DATA_LOAD, mmu_index, retaddr);

     /* read all the registers from the new TSS */
     if (type & 8) {

@@ -468,7 +469,11 @@ static void switch_tss_ra(CPUX86State *env, int tss_selector,
        context */
     if (source == SWITCH_TSS_CALL) {
-        cpu_stw_kernel_ra(env, tss_base, env->tr.selector, retaddr);
+        /*
+         * Thanks to the probe_access above, we know the first two
+         * bytes addressed by &new are writable too.
+         */
+        access_stw(&new, tss_base, env->tr.selector);
         new_eflags |= NT_MASK;
     }
Paolo