Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device


From: Bharata B Rao
Subject: Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
Date: Fri, 11 Dec 2015 09:27:57 +0530
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Dec 10, 2015 at 01:35:05PM +0100, Igor Mammedov wrote:
> On Thu, 10 Dec 2015 11:45:35 +0530
> Bharata B Rao <address@hidden> wrote:
> 
> > Hi,
> > 
> > This is an attempt to define a generic CPU device that serves as a
> > container for the underlying arch-specific CPU devices. The
> > motivation for this is to have an arch-neutral way to specify CPUs,
> > mainly during hotplug.
> > 
> > Instead of each arch having its own semantics for specifying a CPU,
> > such as
> > 
> > -device POWER8-powerpc64-cpu (pseries)
> > -device qemu64-x86_64-cpu (pc)
> > -device s390-cpu (s390)
> > 
> > this patch introduces a new device named cpu-core that could be
> > used for all target archs as
> > 
> > -device cpu-core,socket="sid"
> > 
> > This adds a CPU core, with all its associated threads, into the
> > specified socket with id "sid". The number of arch-specific CPU
> > threads created by this operation is based on the CPU topology
> > specified with the -smp sockets=S,cores=C,threads=T option.
> > Likewise, the number of cores that can be accommodated in one
> > socket is dictated by the cores= parameter of the same -smp option.
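> > 
> > For example (the socket id below is invented for illustration), with
> > 
> >   -smp 16,sockets=4,cores=2,threads=2
> > 
> > a single
> > 
> >   -device cpu-core,socket=sid2
> > 
> > would add one core, i.e. two CPU threads, to the socket with id
> > "sid2", and at most two such cores would fit in each socket.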
> > 
> > CPU sockets are represented by QOM objects, and the number of
> > sockets required to fit max_cpus is created at boot time. As
> > cpu-core devices are created, each is linked to the socket object
> > specified by its socket="sid" device property.
> > 
> > Thus the model consists of backend socket objects, each of which can
> > be considered a container of one or more cpu-core devices. Each
> > cpu-core object is linked to the appropriate backend socket object,
> > and each CPU thread device appears as a child object of its cpu-core
> > device.
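> > 
> > Roughly (the names and paths below are illustrative, not the exact
> > QOM paths), the resulting composition looks like:
> > 
> >   /objects/cpu-socket0            (backend socket object)
> >   /machine/peripheral/core0       (cpu-core device; its socket
> >                                    property links to cpu-socket0)
> >     /thread[0]                    (POWER8-powerpc64-cpu child)
> >     /thread[1]                    (POWER8-powerpc64-cpu child)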
> > 
> > All the required socket objects are created upfront, and they can't
> > be deleted. Though socket objects can currently be created with the
> > object_add monitor command, I am planning to prevent that, so that a
> > guest boots with the required number of sockets and only CPU cores
> > can be hotplugged into them.
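> > 
> > (Today something like
> > 
> >   (qemu) object_add cpu-socket,id=sid4
> > 
> > -- with whatever the actual socket type name turns out to be -- would
> > still succeed; that is what I intend to disallow.)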
> > 
> > CPU hotplug granularity
> > -----------------------
> > CPU hotplug will now be done at cpu-core device granularity.
> > 
> > This patchset includes a patch to prevent topologies that result in
> > partially filled cores. Hence, with this patchset, we will always
> > have fully filled cpu-core devices, both at boot time and during
> > hotplug.
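> > 
> > In other words, a hotplug request (socket id again illustrative)
> > becomes simply
> > 
> >   (qemu) device_add cpu-core,socket=sid1
> > 
> > which brings in the core together with all of its threads at once.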
> > 
> > For archs like PowerPC, where there is no requirement to mirror the
> > physical system exactly, hotplugging CPUs at core granularity is
> > common. While core-level hotplug fits naturally for such archs, for
> > others that want socket-level hotplug, could higher-level tools like
> > libvirt perform multiple core hotplugs in response to one socket
> > hotplug request?
> > 
> > Are there archs that would need thread-level CPU addition?
> there are;
> currently the x86 target allows starting QEMU with 1 thread even if the
> topology specifies more threads per core. The same applies to hotplug.
> 
> On top of that, I think the ACPI spec also treats CPU devices at the
> per-thread level.
<snip>
> with this series it would regress the following CLI:
>   -smp 1,sockets=4,cores=2,threads=2,maxcpus=16

Yes, the first patch in this series explicitly prevents such topologies.

Though QEMU currently allows such topologies, as discussed at
http://lists.gnu.org/archive/html/qemu-devel/2015-12/msg00396.html,
is there a need to continue supporting them?
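
For reference, the legacy flow being ruled out here starts such a guest
with one thread and then adds threads one at a time:

  -smp 1,sockets=4,cores=2,threads=2,maxcpus=16
  (qemu) cpu-add 1

It is exactly this thread-level granularity that the first patch rejects.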
 
> 
> wrt CLI can't we do something like this?
> 
> -device some-cpu-model,socket=x[,core=y[,thread=z]]

We can; I just started with a simple homogeneous setup. As David Gibson
pointed out elsewhere, instead of taking the topology from globals,
making it part of each -device command line as you show above would pave
the way for heterogeneous setups, which will probably be needed in the
future. In that case, we wouldn't have to debate supporting topologies
with partially filled cores and sockets at all. Also, supporting the
legacy x86 cpu-add via the device_add method would probably be easier
with the semantics you showed.
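
As a sketch (the CPU models and ids below are purely illustrative), a
heterogeneous guest might one day be described as:

  -device POWER8-powerpc64-cpu,socket=0,core=0,thread=0
  -device POWER8E-powerpc64-cpu,socket=1,core=0,thread=0

with the topology fully explicit per device rather than derived from
the -smp globals.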

> 
> for NUMA configs, individual socket IDs could be bound to
> nodes via the -numa ... option

For PowerPC, a socket is not always a NUMA boundary: a socket can
contain two CPU packages/chips (a dual-chip module, DCM) and hence
two NUMA nodes within one socket.
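
With the current CLI, CPU-to-node binding is expressed by CPU index,
e.g.

  -numa node,nodeid=0,cpus=0-3 -numa node,nodeid=1,cpus=4-7

so binding whole socket IDs to nodes as you suggest would be a new
extension, and for DCM it would need to allow a finer granularity than
the socket.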

> 
> and allow individual targets to use their own way to build CPUs?
> 
> For the initial conversion of x86 CPUs to device_add we could do pretty
> much the same as we do now, where cpu devices will appear under:
> /machine (pc-i440fx-2.5-machine)
>   /unattached (container)
>     /device[x] (qemu64-x86_64-cpu)
> 
> since we don't have to maintain/model dummy socket/core objects.
> 
> PowerPC could do similarly, only at the core level, since it needs
> to model core objects.
> 
> It doesn't change anything wrt the current introspection state, since
> CPUs can still be found by mgmt tools that parse the QOM tree.
> 
> We should probably split the 2 conflicting goals we are trying to meet here:
> 
>  1. make device_add/device_del work with CPUs /
>      drop support for cpu-add in favor of device_add
> 
>  2. model the QOM tree view for CPUs in an arch-independent manner
>     to make the mgmt layer's life easier,
> 
> and work on them independently instead of arguing for years;
> that would allow us to make progress on #1 while still thinking about
> how to do #2 the right way, if we really need it.

Makes sense; an s390 developer also recommended the same. Given that
the pending CPU hotplug patchsets from x86, PowerPC and s390 all
implement device_add semantics, can we hope to get them merged for
QEMU 2.6?

So, as seen below, the device name is either "cpu_model-cpu_type" or just "cpu_type":

-device POWER8-powerpc64-cpu (pseries)
-device qemu64-x86_64-cpu (pc)
-device s390-cpu (s390)
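
Concretely (the ids below are invented for illustration), hotplug via
the monitor would then differ per target:

  (qemu) device_add POWER8-powerpc64-cpu,id=core1    (pseries)
  (qemu) device_add qemu64-x86_64-cpu,id=cpu4        (pc)
  (qemu) device_add s390-cpu,id=cpu2                 (s390)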

Are these going to be the final, acceptable semantics? Would libvirt be
able to work with these different CPU device names for different guests?

Regards,
Bharata.



