
Re: [Qemu-devel] [RFC] QMP: add query-hotpluggable-cpus


From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC] QMP: add query-hotpluggable-cpus
Date: Fri, 19 Feb 2016 16:49:11 +0100

On Fri, 19 Feb 2016 15:38:48 +1100
David Gibson <address@hidden> wrote:

CCing a couple of libvirt guys on this thread.

> On Thu, Feb 18, 2016 at 11:37:39AM +0100, Igor Mammedov wrote:
> > On Thu, 18 Feb 2016 14:39:52 +1100
> > David Gibson <address@hidden> wrote:
> >   
> > > On Tue, Feb 16, 2016 at 11:36:55AM +0100, Igor Mammedov wrote:  
> > > > On Mon, 15 Feb 2016 20:43:41 +0100
> > > > Markus Armbruster <address@hidden> wrote:
> > > >     
> > > > > Igor Mammedov <address@hidden> writes:
> > > > >     
> > > > > > it will allow mgmt to query present and possible-to-hotplug CPUs;
> > > > > > a target platform that wishes to support the command is required
> > > > > > to set a board-specific MachineClass.possible_cpus() hook,
> > > > > > which will return a list of possible CPUs with the options
> > > > > > that would be needed for hotplugging them.
> > > > > >
> > > > > > For the RFC there are:
> > > > > >    'arch_id': 'int' - mandatory unique CPU number;
> > > > > >                       for x86 it's the APIC ID, for ARM the MPIDR
> > > > > >    'type': 'str' - CPU object type for use with device_add
> > > > > >
> > > > > > and a set of optional fields that would allow mgmt tools
> > > > > > to know at what granularity and where a new CPU could be
> > > > > > hotplugged:
> > > > > > [node],[socket],[core],[thread]
> > > > > > Hopefully that should cover the needs of CPU hotplug for
> > > > > > major targets, and we can extend the structure in the future,
> > > > > > adding more fields if needed.
> > > > > >
> > > > > > also for present CPUs there is a 'cpu_link' field which
> > > > > > would allow mgmt to inspect whatever object/abstraction
> > > > > > the target platform considers a CPU object.
> > > > > >
> > > > > > For RFC purposes it is implemented only for the x86 target so far.
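
(For illustration, a hypothetical exchange; the command name and field
names are from this RFC, while the concrete values and the CPU type are
made up:

  -> { "execute": "query-hotpluggable-cpus" }
  <- { "return": [
         { "arch_id": 0, "type": "qemu64-x86_64-cpu",
           "node": 0, "socket": 0, "core": 0, "thread": 0,
           "cpu_link": "/machine/unattached/device[0]" },
         { "arch_id": 1, "type": "qemu64-x86_64-cpu",
           "node": 0, "socket": 0, "core": 1, "thread": 0 } ] }

arch_id 0 is a present CPU, so it carries 'cpu_link'; arch_id 1 is only
possible, so mgmt knows it may plug a CPU at socket 0 / core 1.)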
> > > > > 
> > > > > Adding ad hoc queries as we go won't scale.  Could this be solved by a
> > > > > generic introspection interface?    
> > > > Do you mean generic QOM introspection?
> > > > 
> > > > Using QOM we could have a '/cpus' container and create QOM links
> > > > for existing (populated links) and possible (empty links) CPUs.
> > > > However, in that case a link's name would need to have a special
> > > > format that conveys the information necessary for mgmt to hotplug
> > > > a CPU object, at least:
> > > >   - where: [node],[socket],[core],[thread] options
> > > >   - optionally, what CPU object to use with the device_add command
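
(A sketch of what that could look like; the '/cpus' container and the
link-name format are hypothetical, only 'qom-list' itself is an existing
QMP command:

  -> { "execute": "qom-list", "arguments": { "path": "/cpus" } }
  <- { "return": [
         { "name": "node0-socket0-core0-thread0", "type": "link<X86CPU>" },
         { "name": "node0-socket0-core1-thread0", "type": "link<X86CPU>" } ] }

mgmt would have to parse the placement out of each link name, and an
empty link carries no other information it could inspect.)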
> > > 
> > > Hmm.. is it not enough to follow the link and get the topology
> > > information by examining the target?  
> > One can't follow a link if it's an empty one, hence
> > CPU placement information should be provided somehow,
> > either:  
> 
> Ah, right, so the issue is determining the socket/core/thread
> addresses that cpus which aren't yet present will have.
> 
> >  * by precreating cpu-package objects with properties that
> >    would describe it /could be inspected via QOM/
> 
> So, we could do this, but I think the natural way would be to have the
> information for each potential thread in the package.  Just putting,
> say, "core number" in the package itself assumes more than I'd like
> about how packages sit in the hierarchy.  Plus, it means that
> management has a bunch of cases to deal with: package has all the
> information, package has just a core id, package has just a socket id,
> and so forth.
> 
> It is a bit clunky that when the package is plugged, this information
> will have to sit parallel to the array of actual thread links.
>
> Markus or Andreas, is there a natural way to present a list of (node,
> socket, core, thread) tuples in the package object?  Preferably
> without having to create a whole bunch of "potential thread" objects
> just for the purpose.
I'm sorry, but I couldn't parse the above 2 paragraphs. The way I see it,
whatever placement info QEMU provides to mgmt, mgmt will have
to deal with it in one way or another.
Perhaps rephrasing and adding some examples might help to explain
the suggestion a bit better?

> 
> > or
> >  * via a QMP/HMP command that would provide the same information,
> >    only without the need to precreate anything. The only difference
> >    is that it allows the use of -device/device_add for new CPUs.
> 
> I'd be ok with that option as well.  I'd be thinking it would be
> implemented via a class method on the package object which returns the
> addresses that its contained threads will have, whether or not they're
> present right now.  Does that make sense?
In this RFC it's the MachineClass.possible_cpus method, which is a bit more
flexible as it allows a board to describe possible CPU devices (whatever
they might be: sockets|cores|threads|some_chip_module) and their properties
without forcing the board to precreate cpu_package objects, which would have
to convey the same info one way or another.
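
(A sketch of how mgmt might act on the query result, assuming the board
exposes the [node]/[socket]/[core]/[thread] values as device properties;
the property names are illustrative, not a fixed interface:

  -> { "execute": "device_add",
       "arguments": { "driver": "qemu64-x86_64-cpu",
                      "socket": 0, "core": 1, "thread": 0 } }
  <- { "return": {} }

i.e. mgmt feeds back the 'type' and the placement options it got from
query-hotpluggable-cpus.)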


> > Considering that we would need to create an HMP command so the user could
> > inspect possible CPUs from the monitor, it would need to do the same as the
> > QMP command regardless of whether it's cpu-package objects or
> > just board-calculated info at runtime.
> >    
> > > In the design Eduardo and I have been discussing we're actually not
> > > planning to allow device_add to construct CPU packages - at least, not
> > > for the time being.  The idea is that the machine type will construct
> > > enough packages for maxcpus, and management just toggles them on and
> > > off.  
> > Another question: how would it work wrt migration?
> 
> I'm assuming the "present" bits would be added to the migration
> stream; seems straightforward enough to me.  Is there some
> consideration I'm missing?
It's hard to estimate how cpu-package objects might complicate
migration. It should not break migration for old machine types,
and if possible it should allow backwards migration to older
QEMU versions (to be downstream friendly).

If we go the typical '-device/device_add whatever_cpu_device,foo_options_list'
route then it would allow us to replicate older device models without
issues (I don't expect any in the x86 case), as that's what CPUs are now
under the hood.
This RFC doesn't force us to refactor device models in order to use
hotplug (where CPU objects are already self-sufficient, hotplug-capable devices).

It rather tries to completely split the interface aspect from how we
internally model CPU hotplug, and tries to solve the issue with

 -device/device_add, for which we need to provide
   'what type to plug' and 'where to plug, which options to set to what'

It's the 1st level per your proposal; later we can do the 2nd level on top
of it using cpu-packages (flip the 'present' property) to simplify mgmt's
job if it's still really needed (i.e. if mgmt can't cope with
-device, which it already has support for).
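
(If we ever add that 2nd level, flipping a package on could be a plain
QOM property write; the cpu-package path and the 'present' property are
from your proposal and don't exist yet:

  -> { "execute": "qom-set",
       "arguments": { "path": "/machine/cpu-package[1]",
                      "property": "present", "value": true } }
  <- { "return": {} }
)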

> 
> > > We can eventually allow construction of new packages with device_add,
> > > but for now that gets hidden inside the platform until we've worked
> > > out more details.
> > >   
> > > > Another approach to QOM introspection would be to model a hierarchy
> > > > of objects like node/socket/core...; that's what Andreas
> > > > worked on. Only it still suffers from the same issue as above
> > > > wrt introspection and hotplug: one can pre-create empty
> > > > [nodes][sockets][cores] containers at startup, but then the
> > > > leaf nodes that could be hotplugged would be links anyway,
> > > > and then again we'd need to give them specially formatted names
> > > > (not well documented, at that) so mgmt could make sense of them.
> > > > That hierarchy would need to become a stable ABI once
> > > > mgmt starts using it, and the QOM tree is currently too unstable
> > > > for that. For some targets it involves creating dummy
> > > > containers like node/socket/core, e.g. for x86, where just modeling
> > > > a thread is sufficient.
> > > 
> > > I'd prefer to avoid exposing the node/socket/core hierarchy through
> > > the QOM interfaces as much as possible.  Although all systems I know
> > > of have a hierarchy something like that, exactly what the levels are
> > > may vary, so I think it's better not to bake that into our interface.
> > > 
> > > Properties giving core/socket/node id values aren't too bad, but
> > > building a whole tree mirroring that hierarchy seems like asking for
> > > trouble.
> > It's ok to have a flat array of cpu-packages as well, only
> > they should provide mgmt with information that says where a
> > CPU could be plugged (meaning: node/socket/core/thread
> > and/or some other properties; I guess it's a target-dependent thing)
> > so that the user could select where a CPU goes and do other actions
> > after plugging it, like pinning VCPU threads to the correct host
> > node/cpu.
> 
> Right, that makes sense.  Again, it's basically about knowing where
> new cpu threads will end up before they're actually plugged in.
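
(For the pinning part, mgmt can already map VCPUs to host threads with the
existing query-cpus command and pin the returned thread_id via taskset or
cgroups; output trimmed to the relevant fields:

  -> { "execute": "query-cpus" }
  <- { "return": [ { "CPU": 0, "qom_path": "/machine/unattached/device[0]",
                     "thread_id": 12345, ... } ] }
)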
> 
> >   
> > >   
> > > > A similar but a bit more abstract approach was suggested
> > > > by David:
> > > > https://lists.gnu.org/archive/html/qemu-ppc/2016-02/msg00000.html
> > > > 
> > > > The benefit of a dedicated CPU-hotplug-focused QMP command is that
> > > > it can be quite abstract, to suit most targets, and not depend
> > > > on how a target models CPUs internally, while still providing the
> > > > information needed for hotplugging a CPU object.
> > > > That way we can split the efforts of how we model/refactor CPUs
> > > > internally from how mgmt would work with them using
> > > > -device/device_add.
> > > >     
> > >   
> >   
> 



