[PULL 28/30] spapr, spapr_numa: handle vcpu ibm,associativity
From: David Gibson
Subject: [PULL 28/30] spapr, spapr_numa: handle vcpu ibm,associativity
Date: Fri, 4 Sep 2020 13:47:17 +1000
From: Daniel Henrique Barboza <danielhb413@gmail.com>
Vcpus have an additional parameter to be appended, vcpu_id. This
also changes the size of the property itself, which is represented
in index 0 of numa_assoc_array[cpu->node_id] and defaults to
MAX_DISTANCE_REF_POINTS in all cases but vcpus.
All this logic makes more sense in spapr_numa.c, where we handle
everything NUMA and associativity related. A new helper,
spapr_numa_fixup_cpu_dt(), was added, and spapr.c uses it the same
way it used the former spapr_fixup_cpu_numa_dt().
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20200903220639.563090-3-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
hw/ppc/spapr.c | 17 +----------------
hw/ppc/spapr_numa.c | 27 +++++++++++++++++++++++++++
include/hw/ppc/spapr_numa.h | 2 ++
3 files changed, 30 insertions(+), 16 deletions(-)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 1ad6f59863..badfa86319 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -202,21 +202,6 @@ static int spapr_fixup_cpu_smt_dt(void *fdt, int offset, PowerPCCPU *cpu,
return ret;
}
-static int spapr_fixup_cpu_numa_dt(void *fdt, int offset, PowerPCCPU *cpu)
-{
- int index = spapr_get_vcpu_id(cpu);
- uint32_t associativity[] = {cpu_to_be32(0x5),
- cpu_to_be32(0x0),
- cpu_to_be32(0x0),
- cpu_to_be32(0x0),
- cpu_to_be32(cpu->node_id),
- cpu_to_be32(index)};
-
- /* Advertise NUMA via ibm,associativity */
- return fdt_setprop(fdt, offset, "ibm,associativity", associativity,
- sizeof(associativity));
-}
-
static void spapr_dt_pa_features(SpaprMachineState *spapr,
PowerPCCPU *cpu,
void *fdt, int offset)
@@ -785,7 +770,7 @@ static void spapr_dt_cpu(CPUState *cs, void *fdt, int offset,
pft_size_prop, sizeof(pft_size_prop))));
if (ms->numa_state->num_nodes > 1) {
- _FDT(spapr_fixup_cpu_numa_dt(fdt, offset, cpu));
+ _FDT(spapr_numa_fixup_cpu_dt(spapr, fdt, offset, cpu));
}
_FDT(spapr_fixup_cpu_smt_dt(fdt, offset, cpu, compat_smt));
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index f6b6fe648f..1a1ec8bcff 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -45,6 +45,33 @@ void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
sizeof(spapr->numa_assoc_array[nodeid]))));
}
+int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt,
+ int offset, PowerPCCPU *cpu)
+{
+ uint vcpu_assoc_size = NUMA_ASSOC_SIZE + 1;
+ uint32_t vcpu_assoc[vcpu_assoc_size];
+ int index = spapr_get_vcpu_id(cpu);
+ int i;
+
+ /*
+ * VCPUs have an extra 'cpu_id' value in ibm,associativity
+ * compared to other resources. Increment the size at index
+ * 0, copy all associativity domains already set, then put
+ * cpu_id last.
+ */
+ vcpu_assoc[0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS + 1);
+
+ for (i = 1; i <= MAX_DISTANCE_REF_POINTS; i++) {
+ vcpu_assoc[i] = spapr->numa_assoc_array[cpu->node_id][i];
+ }
+
+ vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
+
+ /* Advertise NUMA via ibm,associativity */
+ return fdt_setprop(fdt, offset, "ibm,associativity",
+ vcpu_assoc, sizeof(vcpu_assoc));
+}
+
/*
* Helper that writes ibm,associativity-reference-points and
* max-associativity-domains in the RTAS pointed by @rtas
diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
index a2a4df55f7..43c6a16fe3 100644
--- a/include/hw/ppc/spapr_numa.h
+++ b/include/hw/ppc/spapr_numa.h
@@ -27,5 +27,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
int offset, int nodeid);
+int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt,
+ int offset, PowerPCCPU *cpu);
#endif /* HW_SPAPR_NUMA_H */
--
2.26.2
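For readers outside the QEMU tree, the layout change can be sketched in standalone C. This is a minimal sketch, not QEMU code: the constants mirror hw/ppc/spapr.h at the time of this series (an assumption here), `build_vcpu_assoc` is a hypothetical helper name, and `be32()` stands in for QEMU's cpu_to_be32().

```c
#include <stdint.h>

/* Assumed to match hw/ppc/spapr.h at the time of this series. */
#define MAX_DISTANCE_REF_POINTS 4
#define NUMA_ASSOC_SIZE (MAX_DISTANCE_REF_POINTS + 1)

/* Stand-in for QEMU's cpu_to_be32(): byte-swap to big-endian order. */
static uint32_t be32(uint32_t x)
{
    return ((x & 0xff) << 24) | ((x & 0xff00) << 8) |
           ((x >> 8) & 0xff00) | (x >> 24);
}

/*
 * Build a vcpu's ibm,associativity from the node's base array.
 * vcpu_assoc must have room for NUMA_ASSOC_SIZE + 1 cells: index 0
 * advertises one extra associativity domain, the node's domains are
 * copied as-is, and vcpu_id is appended last.
 */
static void build_vcpu_assoc(const uint32_t *node_assoc, uint32_t vcpu_id,
                             uint32_t *vcpu_assoc)
{
    int i;

    vcpu_assoc[0] = be32(MAX_DISTANCE_REF_POINTS + 1);
    for (i = 1; i <= MAX_DISTANCE_REF_POINTS; i++) {
        vcpu_assoc[i] = node_assoc[i];
    }
    vcpu_assoc[NUMA_ASSOC_SIZE] = be32(vcpu_id);
}
```

With MAX_DISTANCE_REF_POINTS = 4 this reproduces the six-cell array the removed spapr_fixup_cpu_numa_dt() hardcoded: {5, 0, 0, 0, node_id, vcpu_id}, all big-endian.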