From: Igor Mammedov
Subject: Re: [PATCH] hmat acpi: Don't require initiator value in -numa when hmat=on
Date: Mon, 20 Jun 2022 15:27:57 +0200

On Wed, 6 Apr 2022 14:29:56 +0200
Brice Goglin <Brice.Goglin@inria.fr> wrote:
> From: Brice Goglin <Brice.Goglin@inria.fr>
>
> The "Memory Proximity Domain Attributes" structure of the ACPI HMAT
> has a "Processor Proximity Domain Valid" flag that is currently
> always set because QEMU's -numa option requires initiator=X when hmat=on.
>
> Unsetting this flag makes it possible to create more complex memory
> topologies, where a single memory target has multiple best initiators.
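
For context: in the ACPI 6.3 HMAT, "Processor Proximity Domain Valid" is
bit 0 of the Flags field of the Memory Proximity Domain Attributes
structure; when it is clear, the "Proximity Domain for the Attached
Initiator" field is to be ignored. A minimal illustrative sketch (not
QEMU code, names made up here) of how that flag can be derived for a
memory target:

  #include <stdbool.h>
  #include <stdint.h>

  /* Bit 0 of the Flags field in the HMAT "Memory Proximity Domain
   * Attributes" structure: "Processor Proximity Domain Valid". */
  #define HMAT_PROXIMITY_INITIATOR_VALID  0x1

  /* Illustrative only: compute the Flags value for one memory target.
   * With no single best initiator configured the flag stays clear, so
   * the initiator proximity-domain field is ignored by the OS, which
   * is what makes multi-initiator topologies expressible. */
  static uint16_t hmat_mpda_flags(bool has_initiator)
  {
      return has_initiator ? HMAT_PROXIMITY_INITIATOR_VALID : 0;
  }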
>
> This patch allows -numa without initiator=X when hmat=on, by keeping
> the default value MAX_NODES in numa_state->nodes[i].initiator.
> All places that read numa_state->nodes[i].initiator already check
> whether it differs from MAX_NODES before using it, and
> hmat_build_table_structs() already clears the Valid flag when needed.
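
A simplified sketch of the resulting flow, with hypothetical type and
field names standing in for the real NumaState/NodeInfo structures:
nodes without an explicit initiator are now skipped rather than
rejected, while nodes that do declare one are still validated:

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define MAX_NODES 128   /* sentinel meaning "no initiator configured" */

  struct node_info {
      uint16_t initiator;   /* MAX_NODES unless set via -numa node,initiator=X */
      bool present;
  };

  /* Sketch of the validation loop after this patch (names simplified):
   * a node whose initiator is still the MAX_NODES sentinel is skipped,
   * and the HMAT code then leaves the "Processor Proximity Domain Valid"
   * flag clear for that memory target. */
  static void validate_initiators(const struct node_info nodes[MAX_NODES],
                                  int num_nodes)
  {
      for (int i = 0; i < num_nodes; i++) {
          if (nodes[i].initiator == MAX_NODES) {
              continue;   /* no initiator declared: nothing to validate */
          }
          /* a declared initiator must refer to an existing NUMA node */
          assert(nodes[i].initiator < MAX_NODES);
          assert(nodes[nodes[i].initiator].present);
      }
  }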
>
> Tested with
> qemu-system-x86_64 -accel kvm \
> -machine pc,hmat=on \
> -drive if=pflash,format=raw,file=./OVMF.fd \
> -drive media=disk,format=qcow2,file=efi.qcow2 \
> -smp 4 \
> -m 3G \
> -object memory-backend-ram,size=1G,id=ram0 \
> -object memory-backend-ram,size=1G,id=ram1 \
> -object memory-backend-ram,size=1G,id=ram2 \
> -numa node,nodeid=0,memdev=ram0,cpus=0-1 \
> -numa node,nodeid=1,memdev=ram1,cpus=2-3 \
> -numa node,nodeid=2,memdev=ram2 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=10 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=10485760 \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=20 \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=5242880 \
> -numa hmat-lb,initiator=0,target=2,hierarchy=memory,data-type=access-latency,latency=30 \
> -numa hmat-lb,initiator=0,target=2,hierarchy=memory,data-type=access-bandwidth,bandwidth=1048576 \
> -numa hmat-lb,initiator=1,target=0,hierarchy=memory,data-type=access-latency,latency=20 \
> -numa hmat-lb,initiator=1,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=5242880 \
> -numa hmat-lb,initiator=1,target=1,hierarchy=memory,data-type=access-latency,latency=10 \
> -numa hmat-lb,initiator=1,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=10485760 \
> -numa hmat-lb,initiator=1,target=2,hierarchy=memory,data-type=access-latency,latency=30 \
> -numa hmat-lb,initiator=1,target=2,hierarchy=memory,data-type=access-bandwidth,bandwidth=1048576
> 
> This exposes NUMA node2 at the same distance from both node0 and node1,
> as seen in lstopo:
>
> Machine (2966MB total) + Package P#0
>   NUMANode P#2 (979MB)
>   Group0
>     NUMANode P#0 (980MB)
>     Core P#0 + PU P#0
>     Core P#1 + PU P#1
>   Group0
>     NUMANode P#1 (1007MB)
>     Core P#2 + PU P#2
>     Core P#3 + PU P#3
Here there should also be a disassembled dump of the generated HMAT table,
plus a test case; see tests/qtest/bios-tables-test.c for the process
(documented at the top of that file) and for existing test examples.
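
For example, something along these lines (an untested sketch only; the
variant name, machine and command line below are placeholders, and the
expected-table update process documented at the top of
tests/qtest/bios-tables-test.c still has to be followed):

  /* untested sketch for tests/qtest/bios-tables-test.c; variant name,
   * machine and command line are placeholders */
  static void test_acpi_piix4_tcg_acpihmat_noinitiator(void)
  {
      test_data data;

      memset(&data, 0, sizeof(data));
      data.machine = MACHINE_PC;
      data.variant = ".acpihmat-noinitiator";
      test_acpi_one(" -machine hmat=on"
                    " -smp 4 -m 3G"
                    " -object memory-backend-ram,size=1G,id=ram0"
                    " -object memory-backend-ram,size=1G,id=ram1"
                    " -object memory-backend-ram,size=1G,id=ram2"
                    " -numa node,nodeid=0,memdev=ram0,cpus=0-1"
                    " -numa node,nodeid=1,memdev=ram1,cpus=2-3"
                    " -numa node,nodeid=2,memdev=ram2"
                    " -numa hmat-lb,initiator=0,target=2,hierarchy=memory,"
                    "data-type=access-latency,latency=30",
                    &data);
      free_test_data(&data);
  }

  /* plus registering it next to the other ACPI tests in main():
   *   qtest_add_func("acpi/piix4/acpihmat-noinitiator",
   *                  test_acpi_piix4_tcg_acpihmat_noinitiator);
   */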
>
> Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
> ---
> hw/core/machine.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index d856485cb4..9884ef7ac6 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -1012,9 +1012,7 @@ static void numa_validate_initiator(NumaState *numa_state)
>
> for (i = 0; i < numa_state->num_nodes; i++) {
> if (numa_info[i].initiator == MAX_NODES) {
> - error_report("The initiator of NUMA node %d is missing, use "
> - "'-numa node,initiator' option to declare it", i);
> - exit(1);
> + continue;
> }
>
> if (!numa_info[numa_info[i].initiator].present) {