Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device


From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
Date: Thu, 10 Dec 2015 13:35:05 +0100

On Thu, 10 Dec 2015 11:45:35 +0530
Bharata B Rao <address@hidden> wrote:

> Hi,
> 
> This is an attempt to define a generic CPU device that serves as a
> container for the underlying arch-specific CPU devices. The
> motivation for this is to have an arch-neutral way to specify CPUs,
> mainly during hotplug.
> 
> Instead of individual archs having their own semantics to specify the
> CPU, like:
> 
> -device POWER8-powerpc64-cpu (pseries)
> -device qemu64-x86_64-cpu (pc)
> -device s390-cpu (s390)
> 
> this patch introduces a new device named cpu-core that could be
> used for all target archs as
> 
> -device cpu-core,socket="sid"
> 
> This adds a CPU core, with all its associated threads, to the
> socket with id "sid". The number of target-architecture-specific
> CPU threads created by this operation is based on the CPU topology
> specified with the -smp sockets=S,cores=C,threads=T option, and the
> number of cores that can be accommodated in one socket is dictated
> by the cores= parameter of the same option.
> 
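> For illustration (my own example, not output from the patchset): with
> 
>   -smp 4,sockets=2,cores=2,threads=2,maxcpus=8
> 
> each "-device cpu-core,socket=cpu-socket0" creates threads=2 CPU
> thread devices inside the new core, and cpu-socket0 can hold at most
> cores=2 such cores.
> 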
> CPU sockets are represented by QOM objects, and the number of sockets
> required to fit max_cpus is created at boot time. As cpu-core
> devices are created, they are linked to the socket object specified
> by the socket="sid" device property.
> 
> Thus the model consists of backend socket objects, each of which can
> be considered a container of one or more cpu-core devices. Each
> cpu-core object is linked to the appropriate backend socket object,
> and each CPU thread device appears as a child object of its cpu-core
> device.
> 
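> As a rough sketch (the CPUCore type and field names below are guesses
> at this series' structures; QOM link API as of QEMU 2.5), the
> cpu-core device could expose its socket as a link property:
> 
>   static void cpu_core_instance_init(Object *obj)
>   {
>       CPUCore *core = CPU_CORE(obj);
> 
>       /* "socket" link property, settable as -device cpu-core,socket=... */
>       object_property_add_link(obj, "socket", "cpu-socket",
>                                (Object **)&core->socket,
>                                object_property_allow_set_link,
>                                OBJ_PROP_LINK_UNREF_ON_RELEASE,
>                                &error_abort);
>   }
> 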
> All the required socket objects are created upfront and they can't be
> deleted. Though currently socket objects can be created using the
> object_add monitor command, I am planning to prevent that, so that a
> guest boots with the required number of sockets and only CPU cores
> can be hotplugged into them.
> 
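> A minimal sketch of that boot-time socket creation (error handling
> mostly elided; "cpu-socket" is the type name this series introduces):
> 
>   static void cpu_sockets_init(int max_cpus, int cores, int threads)
>   {
>       int sockets = max_cpus / (cores * threads);
>       int i;
> 
>       for (i = 0; i < sockets; i++) {
>           Object *socket = object_new("cpu-socket");
>           char *id = g_strdup_printf("cpu-socket%d", i);
> 
>           /* Parent the socket under the QOM root so it gets a stable
>            * id that cpu-core devices can link to. */
>           object_property_add_child(object_get_root(), id, socket,
>                                     &error_abort);
>           object_unref(socket);
>           g_free(id);
>       }
>   }
> 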
> CPU hotplug granularity
> -----------------------
> CPU hotplug will now be done at cpu-core device granularity.
> 
> This patchset includes a patch to prevent topologies that result in
> partially filled cores. Hence with this patchset, we will always
> have fully filled cpu-core devices, both at boot time and during
> hotplug.
> 
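> The check itself can be simple; a sketch of what it could look like
> in vl.c:
> 
>   if (smp_cpus % smp_threads || max_cpus % smp_threads) {
>       error_report("cpu topology: "
>                    "smp_cpus and maxcpus must be multiples of threads");
>       exit(1);
>   }
> 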
> For archs like PowerPC, where there is no requirement to closely
> mirror the physical system, hotplugging CPUs at core granularity is
> common. While core-level hotplug fits naturally for such archs, for
> others which want socket-level hotplug, could higher-level tools like
> libvirt perform multiple core hotplugs in response to one socket
> hotplug request?
> 
> Are there archs that would need thread-level CPU addition?
there are:
currently the x86 target allows starting QEMU with 1 thread even if the
topology specifies more threads per core. The same applies to hotplug.

On top of that, I think the ACPI spec also treats CPU devices at the
per-thread level.

> 
> Boot time CPUs as cpu-core devices
> ----------------------------------
> In this patchset, I am converting the boot time CPU initialization
> (from the -smp option) to initialize the required number of cpu-core
> devices and link them with the appropriate socket objects.
> 
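> A sketch of what creating one boot-time core could look like
> (hypothetical helper; the id format and "socket" link property are as
> described above, QOM APIs as of QEMU 2.5):
> 
>   static void boot_core_init(int core_index, int cores_per_socket)
>   {
>       DeviceState *core = DEVICE(object_new("cpu-core"));
>       char *sid = g_strdup_printf("cpu-socket%d",
>                                   core_index / cores_per_socket);
>       Object *socket = object_resolve_path_component(object_get_root(),
>                                                      sid);
> 
>       /* Link the core to its socket, then realize it; realizing the
>        * core is what creates its threads=T child CPU thread devices. */
>       object_property_set_link(OBJECT(core), socket, "socket",
>                                &error_abort);
>       object_property_set_bool(OBJECT(core), true, "realized",
>                                &error_abort);
>       g_free(sid);
>   }
> 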
> Initially I thought we should be able to completely replace -smp with
> -device cpu-core, but then I realized that at least the x86 and
> pseries machines' init code depends on the first CPU being available
> for it to work correctly.
> 
> Currently I have converted boot CPUs to cpu-core devices only for the
> PowerPC sPAPR and i386 PC targets. I am not really sure about the
> i386 changes; the intention in this iteration was to check whether it
> is indeed possible to fit i386 into the cpu-core model. Having said
> that, I am able to boot an x86 guest with this patchset.
> 
> NUMA
> ----
> TODO: In this patchset, I haven't explicitly done anything for NUMA
> yet. I am wondering if we could add a node=N option to the cpu-core
> device, specifying the NUMA node to which the CPU core belongs:
> 
> -device cpu-core,socket="sid",node=N
> 
> QOM composition tree
> ---------------------
> The QOM composition tree for x86, where I don't have CPU hotplug
> enabled but just initialize boot CPUs as cpu-core devices, looks like
> this:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16
with this series it would regress the following CLI:
  -smp 1,sockets=4,cores=2,threads=2,maxcpus=16


wrt the CLI, can't we do something like this?

-device some-cpu-model,socket=x[,core=y[,thread=z]]

for NUMA configs, individual socket IDs could be bound to
nodes via the -numa ... option,

and individual targets allowed to use their own way to build CPUs?

For the initial conversion of x86 CPUs to device_add we could do pretty
much the same as we do now, where cpu devices will appear under:
/machine (pc-i440fx-2.5-machine)
  /unattached (container)
    /device[x] (qemu64-x86_64-cpu)

since we don't have to maintain/model dummy socket/core objects.

PowerPC could do the same, only at core level, since it does have a
need for modeling core objects.

It doesn't change anything wrt the current introspection state, since
CPUs can still be found by mgmt tools that parse the QOM tree.

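E.g. something like this QMP query:

  { "execute": "qom-list",
    "arguments": { "path": "/machine/unattached" } }

still lists the device[x] children, and their types tell mgmt which of
them are CPUs.
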
We should probably split the 2 conflicting goals we are trying to meet
here:

 1. make device_add/device_del work with CPUs /
    drop support for cpu-add in favor of device_add

 2. model the QOM tree view for CPUs in an arch-independent manner
    to make the mgmt layer's life easier,

and work on them independently instead of arguing for years; that
would allow us to make progress on #1 while still thinking about how
to do #2 the right way, if we really need it.

> 
> /machine (pc-i440fx-2.5-machine)
>   /unattached (container)
>     /device[0] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
>     /device[4] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
> 
> For PowerPC, where I have CPU hotplug enabled:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16 \
>   -device cpu-core,socket=cpu-socket1,id=core3
> 
> /machine (pseries-2.5-machine)
>   /unattached (container)
>     /device[1] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>     /device[2] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>   /peripheral (container)
>     /core3 (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
> 
> As can be seen, the boot-time and hotplugged CPUs come under separate
> parents. I guess I should work towards getting both boot-time and
> hotplugged CPUs under the same parent?
> 
> Socket ID generation
> ---------------------
> In the current approach the socket ID generation is somewhat implicit.
> All the socket objects are created with a fixed id format:
> cpu-socket0, cpu-socket1 etc. The machine init code of each arch is
> expected to use these same ids when creating cpu-core devices, to
> link each core to the right socket object. The user also needs to
> know these IDs at device_add time. Maybe I could add an
> "info cpu-sockets" command which gives information about all the
> existing sockets and their core-occupancy status.
> 
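> Purely as an illustration, such a command might print something like:
> 
>   (qemu) info cpu-sockets
>   cpu-socket0: cores 2/2
>   cpu-socket1: cores 1/2
>   cpu-socket2: cores 0/2
>   cpu-socket3: cores 0/2
> 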
> Finally, I understand that this is a simplistic model and it probably
> wouldn't support all the notions around CPU topology and hotplug that
> we would like to support for all archs. The intention of this RFC is
> to start somewhere and seek input from the community.
> 
> Bharata B Rao (9):
>   vl: Don't allow CPU toplogies with partially filled cores
>   cpu: Store CPU typename in MachineState
>   cpu: Don't realize CPU from cpu_generic_init()
>   cpu: CPU socket backend
>   vl: Create CPU socket backend objects
>   cpu: Introduce CPU core device
>   spapr: Convert boot CPUs into CPU core device initialization
>   target-i386: Set apic_id during CPU initfn
>   pc: Convert boot CPUs into CPU core device initialization
> 
>  hw/cpu/Makefile.objs        |  1 +
>  hw/cpu/core.c               | 98 +++++++++++++++++++++++++++++++++++++++++++++
>  hw/cpu/socket.c             | 48 ++++++++++++++++++++++
>  hw/i386/pc.c                | 64 +++++++++--------------------
>  hw/ppc/spapr.c              | 32 ++++++++++-----
>  include/hw/boards.h         |  1 +
>  include/hw/cpu/core.h       | 28 ++++++++++++++
>  include/hw/cpu/socket.h     | 26 ++++++++++++
>  qom/cpu.c                   |  6 ---
>  target-arm/helper.c         | 16 +++++++-
>  target-cris/cpu.c           | 16 +++++++-
>  target-i386/cpu.c           | 37 ++++++++++++++++-
>  target-i386/cpu.h           |  1 +
>  target-lm32/helper.c        | 16 +++++++-
>  target-moxie/cpu.c          | 16 +++++++-
>  target-openrisc/cpu.c       | 16 +++++++-
>  target-ppc/translate_init.c | 16 +++++++-
>  target-sh4/cpu.c            | 16 +++++++-
>  target-tricore/helper.c     | 16 +++++++-
>  target-unicore32/helper.c   | 16 +++++++-
>  vl.c                        | 26 ++++++++++++
>  21 files changed, 439 insertions(+), 73 deletions(-)
>  create mode 100644 hw/cpu/core.c
>  create mode 100644 hw/cpu/socket.c
>  create mode 100644 include/hw/cpu/core.h
>  create mode 100644 include/hw/cpu/socket.h
> 