From: Greg Kurz
Subject: Re: [PATCH 2/3] spapr_numa.c: create spapr_numa_initial_nvgpu_NUMA_id() helper
Date: Thu, 28 Jan 2021 16:50:15 +0100

On Thu, 28 Jan 2021 12:17:30 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> We'll need to check the initial value given to spapr->gpu_numa_id when
> building the rtas DT, so put it in a helper for easier access and to
> avoid repetition.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr.c              | 12 ++----------
>  hw/ppc/spapr_numa.c         | 14 ++++++++++++++
>  include/hw/ppc/spapr_numa.h |  1 +
>  3 files changed, 17 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 2d60c6f594..c2b74cbfdf 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2765,16 +2765,8 @@ static void spapr_machine_init(MachineState *machine)
>  
>      }
>  
> -    /*
> -     * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> -     * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> -     * called from vPHB reset handler so we initialize the counter here.
> -     * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> -     * must be equally distant from any other node.
> -     * The final value of spapr->gpu_numa_id is going to be written to
> -     * max-associativity-domains in spapr_build_fdt().
> -     */
> -    spapr->gpu_numa_id = MAX(1, machine->numa_state->num_nodes);
> +    /* Init gpu_numa_id */

The code is trivial enough that you don't really need to paraphrase it
with a comment.

> +    spapr->gpu_numa_id = spapr_numa_initial_nvgpu_NUMA_id(machine);
>  

The _NUMA_ in the name looks a bit aggressive and not especially
informative to me. Maybe just make it spapr_numa_initial_nvgpu_id()?
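
FWIW, a rough sketch of what I have in mind (purely illustrative, the
body and call site are the ones from your patch, just without the
extra _NUMA_ in the name):

    unsigned int spapr_numa_initial_nvgpu_id(MachineState *machine)
    {
        /* start from 1 when no NUMA nodes are configured on the QEMU side */
        return MAX(1, machine->numa_state->num_nodes);
    }

    /* ... and in spapr_machine_init() ... */
    spapr->gpu_numa_id = spapr_numa_initial_nvgpu_id(machine);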

With these fixed,

Reviewed-by: Greg Kurz <groug@kaod.org>

>      /* Init numa_assoc_array */
>      spapr_numa_associativity_init(spapr, machine);
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 261810525b..f71105c783 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -46,6 +46,20 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>      return true;
>  }
>  
> +/*
> + * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> + * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> + * called from vPHB reset handler so we initialize the counter here.
> + * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> + * must be equally distant from any other node.
> + * The final value of spapr->gpu_numa_id is going to be written to
> + * max-associativity-domains in spapr_build_fdt().
> + */
> +unsigned int spapr_numa_initial_nvgpu_NUMA_id(MachineState *machine)
> +{
> +    return MAX(1, machine->numa_state->num_nodes);
> +}
> +
>  /*
>   * This function will translate the user distances into
>   * what the kernel understand as possible values: 10
> diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
> index b3fd950634..6655bcf281 100644
> --- a/include/hw/ppc/spapr_numa.h
> +++ b/include/hw/ppc/spapr_numa.h
> @@ -31,5 +31,6 @@ int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt,
>                              int offset, PowerPCCPU *cpu);
>  int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>                                           int offset);
> +unsigned int spapr_numa_initial_nvgpu_NUMA_id(MachineState *machine);
>  
>  #endif /* HW_SPAPR_NUMA_H */



