From: Andrew Jones
Subject: Re: [Qemu-devel] [PATCH RFC 10/16] hw/ppc/spapr: don't use smp_cores, smp_threads
Date: Tue, 14 Jun 2016 08:23:08 +0200
User-agent: Mutt/1.5.23.1 (2014-03-12)

On Tue, Jun 14, 2016 at 01:03:41PM +1000, David Gibson wrote:
> On Fri, Jun 10, 2016 at 07:40:21PM +0200, Andrew Jones wrote:
> > Use CPUState nr_cores,nr_threads and MachineState
> > cores,threads instead.
> > 
> > Signed-off-by: Andrew Jones <address@hidden>
> > ---
> >  hw/ppc/spapr.c      | 9 +++++----
> >  hw/ppc/spapr_rtas.c | 2 +-
> >  2 files changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 063664234106e..f78276bb4b164 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -35,7 +35,6 @@
> >  #include "net/net.h"
> >  #include "sysemu/device_tree.h"
> >  #include "sysemu/block-backend.h"
> > -#include "sysemu/cpus.h"
> >  #include "sysemu/kvm.h"
> >  #include "sysemu/device_tree.h"
> >  #include "kvm_ppc.h"
> > @@ -603,7 +602,7 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
> >      uint32_t cpufreq = kvm_enabled() ? kvmppc_get_clockfreq() : 1000000000;
> >      uint32_t page_sizes_prop[64];
> >      size_t page_sizes_prop_size;
> > -    uint32_t vcpus_per_socket = smp_threads * smp_cores;
> > +    uint32_t vcpus_per_socket = cs->nr_cores * cs->nr_threads;
> >      uint32_t pft_size_prop[] = {0, cpu_to_be32(spapr->htab_shift)};
> >  
> >      /* Note: we keep CI large pages off for now because a 64K capable guest
> > @@ -1774,7 +1773,7 @@ static void ppc_spapr_init(MachineState *machine)
> >      /* Set up Interrupt Controller before we create the VCPUs */
> >      spapr->icp = xics_system_init(machine,
> >                                    DIV_ROUND_UP(max_cpus * kvmppc_smt_threads(),
> > -                                               smp_threads),
> > +                                               machine->threads),
> >                                    XICS_IRQS, &error_fatal);
> >  
> >      if (smc->dr_lmb_enabled) {
> > @@ -2268,9 +2267,11 @@ static HotplugHandler *spapr_get_hotpug_handler(MachineState *machine,
> >  
> >  static unsigned spapr_cpu_index_to_socket_id(unsigned cpu_index)
> >  {
> > +    CPUState *cs = first_cpu;
> > +
> >      /* Allocate to NUMA nodes on a "socket" basis (not that concept of
> >       * socket means much for the paravirtualized PAPR platform) */
> > -    return cpu_index / smp_threads / smp_cores;
> > +    return cpu_index / cs->nr_cores / cs->nr_threads;
> >  }
> >  
> >  static void spapr_machine_class_init(ObjectClass *oc, void *data)
> > diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
> > index 43e2c684fda8d..3fdfbb01a20dd 100644
> > --- a/hw/ppc/spapr_rtas.c
> > +++ b/hw/ppc/spapr_rtas.c
> > @@ -742,7 +742,7 @@ int spapr_rtas_device_tree_setup(void *fdt, hwaddr rtas_addr,
> >      lrdr_capacity[1] = cpu_to_be32(max_hotplug_addr & 0xffffffff);
> >      lrdr_capacity[2] = 0;
> >      lrdr_capacity[3] = cpu_to_be32(SPAPR_MEMORY_BLOCK_SIZE);
> > -    lrdr_capacity[4] = cpu_to_be32(max_cpus/smp_threads);
> > +    lrdr_capacity[4] = cpu_to_be32(max_cpus / machine->threads);
> >      ret = qemu_fdt_setprop(fdt, "/rtas", "ibm,lrdr-capacity", lrdr_capacity,
> >                       sizeof(lrdr_capacity));
> >      if (ret < 0) {
> 
> I think in all the places that use cs->nr_* here it actually makes more
> sense to use the value in the machine state.

I think I used the machine state wherever I (easily) could. How do I get to
the machine state from a CPU method? If I can do that for all machines, I'll
gladly kill CPUState->nr_cores/nr_threads.
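
(For illustration only, not part of this patch: one option might be
qdev_get_machine(), which returns the canonical /machine object and can be
cast with MACHINE(); the cores/threads fields used below are assumed to be
the ones this RFC adds to MachineState, so treat this as a sketch.)

  #include "hw/boards.h"     /* MachineState, MACHINE() */
  #include "hw/qdev-core.h"  /* qdev_get_machine() */

  static unsigned spapr_cpu_index_to_socket_id(unsigned cpu_index)
  {
      /* Reach the machine state via the canonical /machine object
       * instead of going through CPUState. */
      MachineState *machine = MACHINE(qdev_get_machine());

      /* Same "socket" allocation as before, but the topology is read
       * from the machine state (fields assumed from this RFC series). */
      return cpu_index / (machine->cores * machine->threads);
  }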

Thanks,
drew

> 
> -- 
> David Gibson                  | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au        | minimalist, thank you.  NOT _the_ _other_
>                               | _way_ _around_!
> http://www.ozlabs.org/~dgibson
