From: Peter Maydell
Subject: Re: [PATCH v2 26/66] target/arm: Fold secure and non-secure a-profile mmu indexes
Date: Tue, 20 Sep 2022 16:44:27 +0100
On Mon, 22 Aug 2022 at 18:13, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> For a-profile, which does not bank system registers,
32-bit A-profile has a pile of banked system registers...
> it takes
> quite a lot of code to switch between security states. In the
> process, registers such as TCR_EL{1,2} must be swapped, which in
> itself requires the flushing of softmmu tlbs. Therefore it
> doesn't buy us anything to separate tlbs by security state.
>
> Retain the distinction between Stage2 and Stage2_S.
>
> This will be important as we implement FEAT_RME, and do not
> wish to add a third set of mmu indexes for Realm state.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/cpu-param.h | 2 +-
> target/arm/cpu.h | 69 +++++++-----------
> target/arm/internals.h | 31 +-------
> target/arm/helper.c | 144 +++++++++++++------------------------
> target/arm/ptw.c | 25 ++-----
> target/arm/translate-a64.c | 8 ---
> target/arm/translate.c | 6 +-
> 7 files changed, 83 insertions(+), 202 deletions(-)
>
> diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
> index 68ffb12427..08681828ac 100644
> --- a/target/arm/cpu-param.h
> +++ b/target/arm/cpu-param.h
> @@ -32,6 +32,6 @@
> # define TARGET_PAGE_BITS_MIN 10
> #endif
>
> -#define NB_MMU_MODES 15
> +#define NB_MMU_MODES 8
>
> #endif
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index ee94d8e653..cea2121f67 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -2873,26 +2873,26 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
> * table over and over.
> * 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
> * Never (PAN) bit within PSTATE.
> + * 7. we fold together the secure and non-secure regimes for A-profile,
> + * because there are no banked system registers,
ditto
> so the process of
> + * switching between secure and non-secure is already heavyweight.
> *
> * This gives us the following list of cases:
> *
> - * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
> - * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
> - * NS EL1 EL1&0 stage 1+2 +PAN
> - * NS EL0 EL2&0
> - * NS EL2 EL2&0
> - * NS EL2 EL2&0 +PAN
> - * NS EL2 (aka NS PL2)
> - * S EL0 EL1&0 (aka S PL0)
> - * S EL1 EL1&0 (not used if EL3 is 32 bit)
> - * S EL1 EL1&0 +PAN
> - * S EL3 (aka S PL1)
> + * EL0 EL1&0 stage 1+2 (aka NS PL0)
> + * EL1 EL1&0 stage 1+2 (aka NS PL1)
> + * EL1 EL1&0 stage 1+2 +PAN
> + * EL0 EL2&0
> + * EL2 EL2&0
> + * EL2 EL2&0 +PAN
> + * EL2 (aka NS PL2)
> + * EL3 (aka S PL1)
> *
> * for a total of 11 different mmu_idx.
This number needs updating :-)
> *
> * R profile CPUs have an MPU, but can use the same set of MMU indexes
> - * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
> - * NS EL2 if we ever model a Cortex-R52).
> + * as A profile. They only need to distinguish EL0 and EL1 (and
> + * EL2 if we ever model a Cortex-R52).
> *
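(For anyone following the comment above: here is a minimal, self-contained
sketch of the folded core index set it describes. The names and values are
simplified and hypothetical, not QEMU's actual ARMMMUIdx definitions, which
encode additional profile bits and keep the stage-2 indexes separate.)

/*
 * Hypothetical, simplified sketch of the folded A-profile core mmu
 * index set described above -- not QEMU's real ARMMMUIdx enum.
 */
#include <stdio.h>

typedef enum {
    SKETCH_MMUIDX_E10_0,        /* EL0, EL1&0 stage 1+2      */
    SKETCH_MMUIDX_E10_1,        /* EL1, EL1&0 stage 1+2      */
    SKETCH_MMUIDX_E10_1_PAN,    /* EL1, EL1&0 stage 1+2, PAN */
    SKETCH_MMUIDX_E20_0,        /* EL0, EL2&0                */
    SKETCH_MMUIDX_E20_2,        /* EL2, EL2&0                */
    SKETCH_MMUIDX_E20_2_PAN,    /* EL2, EL2&0, PAN           */
    SKETCH_MMUIDX_E2,           /* EL2                       */
    SKETCH_MMUIDX_E3,           /* EL3                       */
    SKETCH_MMUIDX_COUNT         /* == 8, matching the new NB_MMU_MODES */
} SketchMMUIdx;

int main(void)
{
    /* After the fold, each index covers both Secure and Non-secure state. */
    printf("core mmu indexes after the fold: %d\n", SKETCH_MMUIDX_COUNT);
    return 0;
}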
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index 09990ca096..b9f1a3d826 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -1671,6 +1671,7 @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
> /* Begin with base v8.0 state. */
> uint32_t valid_mask = 0x3fff;
> ARMCPU *cpu = env_archcpu(env);
> + uint64_t changed;
>
> /*
> * Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always
> @@ -1730,7 +1731,22 @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
>
> /* Clear all-context RES0 bits. */
> value &= valid_mask;
> - raw_write(env, ri, value);
> + changed = env->cp15.scr_el3 ^ value;
> + env->cp15.scr_el3 = value;
> +
> + /*
> + * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then
> + * we must invalidate all TLBs below EL3.
> + */
> + if (changed & SCR_NS) {
> + tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
> + ARMMMUIdxBit_E20_0 |
> + ARMMMUIdxBit_E10_1 |
> + ARMMMUIdxBit_E20_2 |
> + ARMMMUIdxBit_E10_1_PAN |
> + ARMMMUIdxBit_E20_2_PAN |
> + ARMMMUIdxBit_E2));
In theory I suppose for a CPU without Secure EL2 support we could avoid
flushing the EL2 TLBs, but it's probably not worth worrying about.
> + }
> }
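(As an illustration of that aside about skipping the EL2 flushes: below is a
standalone sketch with mocked index bits and a hypothetical has_secure_el2
predicate standing in for the real feature check -- in QEMU that would
presumably be something along the lines of cpu_isar_feature(aa64_sel2, cpu),
but this is illustration only, not the actual helper code.)

/*
 * Standalone illustration only: mocked mmuidx bits and a hypothetical
 * feature predicate.  Shows how the SCR_EL3.NS flush mask could be
 * narrowed when Secure EL2 is not implemented.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define MOCK_MMUIDXBIT_E10_0      (1u << 0)
#define MOCK_MMUIDXBIT_E10_1      (1u << 1)
#define MOCK_MMUIDXBIT_E10_1_PAN  (1u << 2)
#define MOCK_MMUIDXBIT_E20_0      (1u << 3)
#define MOCK_MMUIDXBIT_E20_2      (1u << 4)
#define MOCK_MMUIDXBIT_E20_2_PAN  (1u << 5)
#define MOCK_MMUIDXBIT_E2         (1u << 6)

/* Mask of TLBs to flush when SCR_EL3.NS changes. */
static uint32_t ns_change_flush_mask(bool has_secure_el2)
{
    /* The EL1&0 regimes exist in both security states, so always flush. */
    uint32_t mask = MOCK_MMUIDXBIT_E10_0 |
                    MOCK_MMUIDXBIT_E10_1 |
                    MOCK_MMUIDXBIT_E10_1_PAN;

    /*
     * Without Secure EL2, the EL2 regimes only ever hold Non-secure
     * translations, so toggling NS cannot leave stale entries there.
     */
    if (has_secure_el2) {
        mask |= MOCK_MMUIDXBIT_E20_0 |
                MOCK_MMUIDXBIT_E20_2 |
                MOCK_MMUIDXBIT_E20_2_PAN |
                MOCK_MMUIDXBIT_E2;
    }
    return mask;
}

int main(void)
{
    printf("flush mask without Secure EL2: 0x%x\n", ns_change_flush_mask(false));
    printf("flush mask with Secure EL2:    0x%x\n", ns_change_flush_mask(true));
    return 0;
}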
Other than the comment/commit message nits,
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM