[PATCH v5 11/18] target/ppc: Streamline construction of VRMA SLB entry
From: David Gibson
Subject: [PATCH v5 11/18] target/ppc: Streamline construction of VRMA SLB entry
Date: Thu, 20 Feb 2020 14:23:09 +1100
When in VRMA mode (i.e. the guest thinks it has the MMU off, but the
hypervisor is still applying translation), we use a special SLB entry,
rather than looking up an SLBE by address as we do when guest
translation is on.

We build that special entry in ppc_hash64_update_vrma(), along with
logic for handling some non-VRMA cases.  Split the actual construction
of the VRMA SLBE into a separate helper and streamline it a bit.
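
For illustration, the core of that helper is a scan of the CPU's table
of supported segment page sizes, looking for the entry whose SLB
encoding matches the L|LP bits of the VSID just assembled.  Below is a
minimal, self-contained sketch of that matching step; the TOY_* masks
and the table contents are made-up placeholders, not QEMU's actual
definitions:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for QEMU's SLB_VSID_L / SLB_VSID_LP masks. */
    #define TOY_VSID_L        0x100ULL
    #define TOY_VSID_LP       0x030ULL
    #define TOY_VSID_LLP_MASK (TOY_VSID_L | TOY_VSID_LP)

    struct toy_sps {
        uint64_t slb_enc;   /* L|LP encoding selecting this page size */
        int page_shift;     /* 0 terminates the table */
    };

    /* Toy table: a base page size plus one large-page encoding. */
    static const struct toy_sps toy_table[] = {
        { 0x000ULL,   12 },
        { TOY_VSID_L, 24 },
        { 0,           0 },
    };

    /* Return the matching entry, or NULL if the encoding is invalid. */
    static const struct toy_sps *toy_match(uint64_t vsid)
    {
        const struct toy_sps *s;

        for (s = toy_table; s->page_shift; s++) {
            if ((vsid & TOY_VSID_LLP_MASK) == s->slb_enc) {
                return s;
            }
        }
        return NULL;
    }

    int main(void)
    {
        const struct toy_sps *s = toy_match(TOY_VSID_L);

        printf("page_shift = %d\n", s ? s->page_shift : -1);
        return 0;
    }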
Signed-off-by: David Gibson <address@hidden>
---
target/ppc/mmu-hash64.c | 74 +++++++++++++++++++----------------------
1 file changed, 34 insertions(+), 40 deletions(-)
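
A note on the error convention the refactor adopts: the helper returns
0 on success and -1 on a bad LPCR[VRMASD] encoding, leaving the caller
to invalidate the entry on any failure path.  The sketch below restates
that calling shape with simplified stand-ins (toy_slb, toy_use_vrma and
toy_build_vrma_slbe are hypothetical, not QEMU's ppc_slb_t or its real
helpers):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct toy_slb {
        uint64_t esid;
        uint64_t vsid;
        const void *sps;
    };

    /* Stand-in for ppc_hash64_use_vrma(): pretend VRMA is enabled. */
    static bool toy_use_vrma(void)
    {
        return true;
    }

    /* Stand-in for build_vrma_slbe(): pretend the encoding was bad. */
    static int toy_build_vrma_slbe(struct toy_slb *slb)
    {
        (void)slb;  /* would fill in esid/vsid/sps and return 0 */
        return -1;
    }

    static void toy_update_vrma(struct toy_slb *slb)
    {
        if (toy_use_vrma()) {
            if (toy_build_vrma_slbe(slb) == 0) {
                return;
            }
        }
        /* Otherwise, zero the entry so later lookups see it as invalid. */
        memset(slb, 0, sizeof(*slb));
    }

    int main(void)
    {
        struct toy_slb slb;

        toy_update_vrma(&slb);
        return slb.esid == 0 ? 0 : 1;
    }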
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 203a41cca1..ac21c14f68 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -791,6 +791,35 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
     }
 }
 
+static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
+{
+    CPUPPCState *env = &cpu->env;
+    target_ulong lpcr = env->spr[SPR_LPCR];
+    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
+    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
+    int i;
+
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
+
+        if (!sps->page_shift) {
+            break;
+        }
+
+        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
+            slb->esid = SLB_ESID_V;
+            slb->vsid = vsid;
+            slb->sps = sps;
+            return 0;
+        }
+    }
+
+    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
+                 TARGET_FMT_lx, lpcr);
+
+    return -1;
+}
+
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                                 int rwx, int mmu_idx)
 {
@@ -1046,53 +1075,18 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
 static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
 {
     CPUPPCState *env = &cpu->env;
-    const PPCHash64SegmentPageSizes *sps = NULL;
-    target_ulong esid, vsid, lpcr;
     ppc_slb_t *slb = &env->vrma_slb;
-    uint32_t vrmasd;
-    int i;
-
-    /* First clear it */
-    slb->esid = slb->vsid = 0;
-    slb->sps = NULL;
 
     /* Is VRMA enabled ? */
-    if (!ppc_hash64_use_vrma(env)) {
-        return;
-    }
-
-    /*
-     * Make one up. Mostly ignore the ESID which will not be needed
-     * for translation
-     */
-    lpcr = env->spr[SPR_LPCR];
-    vsid = SLB_VSID_VRMA;
-    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
-    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
-    esid = SLB_ESID_V;
-
-    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
-        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
-
-        if (!sps1->page_shift) {
-            break;
-        }
-
-        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
-            sps = sps1;
-            break;
+    if (ppc_hash64_use_vrma(env)) {
+        if (build_vrma_slbe(cpu, slb) == 0) {
+            return;
         }
     }
 
-    if (!sps) {
-        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
-                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
-        return;
-    }
-
-    slb->vsid = vsid;
-    slb->esid = esid;
-    slb->sps = sps;
+    /* Otherwise, clear it to indicate error */
+    slb->esid = slb->vsid = 0;
+    slb->sps = NULL;
 }
 
 void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
--
2.24.1