On 7/9/24 23:29, Paolo Bonzini wrote:
> This takes care of probing the vaddr range in advance, and is also faster
> because it avoids repeated TLB lookups. It also matches the Intel manual
> better, as it says "Checks that the current (old) TSS, new TSS, and all
> segment descriptors used in the task switch are paged into system memory";
> note however that it's not clear how the processor checks for segment
> descriptors, and this check is not included in the AMD manual.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> target/i386/tcg/seg_helper.c | 101 ++++++++++++++++++-----------------
> 1 file changed, 51 insertions(+), 50 deletions(-)
>
> diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
> index 25af9d4a4ec..77f2c65c3cf 100644
> --- a/target/i386/tcg/seg_helper.c
> +++ b/target/i386/tcg/seg_helper.c
> @@ -311,35 +313,44 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector,
> raise_exception_err_ra(env, EXCP0A_TSS, tss_selector & 0xfffc, retaddr);
> }
>
> + /* X86Access avoids memory exceptions during the task switch */
> + access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max,
> + MMU_DATA_STORE, cpu_mmu_index_kernel(env), retaddr);
> +
> + if (source == SWITCH_TSS_CALL) {
> + /* Probe for future write of parent task */
> + probe_access(env, tss_base, 2, MMU_DATA_STORE,
> + cpu_mmu_index_kernel(env), retaddr);
> + }
> + access_prepare_mmu(&new, env, tss_base, tss_limit,
> + MMU_DATA_LOAD, cpu_mmu_index_kernel(env), retaddr);
You're computing cpu_mmu_index_kernel 3 times.

Oh, indeed. Better than 30. :)
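For reference, hoisting the lookup into a local might look like this (a sketch
against the hunk above; the actual respin may differ):

    /* Compute the kernel MMU index once instead of three times. */
    int mmu_index = cpu_mmu_index_kernel(env);

    /* X86Access avoids memory exceptions during the task switch */
    access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max,
                       MMU_DATA_STORE, mmu_index, retaddr);

    if (source == SWITCH_TSS_CALL) {
        /* Probe for future write of parent task */
        probe_access(env, tss_base, 2, MMU_DATA_STORE, mmu_index, retaddr);
    }
    access_prepare_mmu(&new, env, tss_base, tss_limit,
                       MMU_DATA_LOAD, mmu_index, retaddr);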
This appears to be conservative in that you're requiring only 2 bytes (a minimum) of the
0x68-byte TSS to be writable. Is it legal to place the TSS at offset 0xffe of page 0,
with the balance on page 1, with page 0 writable and page 1 read-only? Otherwise I would
think you could just check the entire TSS for writability.

Yes, that layout is legal: paging is totally optional here. The only field that is
written is the link.
Anyway, after the MMU_DATA_STORE probe, you have proved that 'X86Access new' contains an
address range that may be stored. So you can change the SWITCH_TSS_CALL store below to
access_stw() too.

Nice.
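That is, the later back-link write could become something along these lines (a sketch
of the suggestion, not the code in this revision, which still goes through the
cpu_stw_kernel_ra() helper):

    if (source == SWITCH_TSS_CALL) {
        /* Write the back link (the old task's selector) into the new TSS;
           the 2-byte MMU_DATA_STORE probe above already validated this store. */
        access_stw(&new, tss_base, env->tr.selector);
    }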
> - /* NOTE: we must avoid memory exceptions during the task switch,
> - so we make dummy accesses before */
> - /* XXX: it can still fail in some cases, so a bigger hack is
> - necessary to valid the TLB after having done the accesses */
> -
> - v1 = cpu_ldub_kernel_ra(env, env->tr.base, retaddr);
> - v2 = cpu_ldub_kernel_ra(env, env->tr.base + old_tss_limit_max, retaddr);
> - cpu_stb_kernel_ra(env, env->tr.base, v1, retaddr);
> - cpu_stb_kernel_ra(env, env->tr.base + old_tss_limit_max, v2, retaddr);
OMG.

Haha, yeah X86Access is perfect here.

Paolo

Looks like a fantastic cleanup overall.

r~