Re: [Qemu-devel] [RFC] QMP: add query-hotpluggable-cpus


From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC] QMP: add query-hotpluggable-cpus
Date: Fri, 19 Feb 2016 17:11:15 +0100

On Fri, 19 Feb 2016 10:51:11 +0100
Markus Armbruster <address@hidden> wrote:

> David Gibson <address@hidden> writes:
> 
> > On Thu, Feb 18, 2016 at 11:37:39AM +0100, Igor Mammedov wrote:  
> >> On Thu, 18 Feb 2016 14:39:52 +1100
> >> David Gibson <address@hidden> wrote:
> >>   
> >> > On Tue, Feb 16, 2016 at 11:36:55AM +0100, Igor Mammedov wrote:  
> >> > > On Mon, 15 Feb 2016 20:43:41 +0100
> >> > > Markus Armbruster <address@hidden> wrote:
> >> > >     
> >> > > > Igor Mammedov <address@hidden> writes:
> >> > > >     
> >> > > > > It will allow mgmt to query present CPUs and CPUs that are
> >> > > > > possible to hotplug. A target platform that wishes to support
> >> > > > > the command is required to set the board-specific
> >> > > > > MachineClass.possible_cpus() hook, which returns a list of
> >> > > > > possible CPUs with the options that would be needed for
> >> > > > > hotplugging them.
> >> > > > >
> >> > > > > For RFC there are:
> >> > > > >    'arch_id': 'int' - mandatory unique CPU number;
> >> > > > >                       for x86 it's the APIC ID, for ARM the MPIDR
> >> > > > >    'type': 'str' - CPU object type for usage with device_add
> >> > > > >
> >> > > > > and a set of optional fields that would allow mgmt tools
> >> > > > > to know at what granularity and where a new CPU could be
> >> > > > > hotplugged:
> >> > > > > [node],[socket],[core],[thread]
> >> > > > > Hopefully that should cover the needs of CPU hotplug for
> >> > > > > major targets, and we can extend the structure in the future,
> >> > > > > adding more fields if needed.
> >> > > > >
> >> > > > > Also, for present CPUs there is a 'cpu_link' field which
> >> > > > > would allow mgmt to inspect whatever object/abstraction
> >> > > > > the target platform considers as the CPU object.
> >> > > > >
> >> > > > > For RFC purposes it is implemented only for the x86 target so far.
> >> > > > 
> >> > > > Adding ad hoc queries as we go won't scale.  Could this be
> >> > > > solved by a generic introspection interface?
> >> > > Do you mean generic QOM introspection?
> >> > > 
> >> > > Using QOM we could have a '/cpus' container and create QOM links
> >> > > for existing (populated links) and possible (empty links) CPUs.
> >> > > However, in that case each link's name would need a special format
> >> > > that conveys the information necessary for mgmt to hotplug
> >> > > a CPU object, at least:
> >> > >   - where: [node],[socket],[core],[thread] options
> >> > >   - optionally, what CPU object to use with the device_add command    
> >> > 
> >> > Hmm.. is it not enough to follow the link and get the topology
> >> > information by examining the target?  
> >> One can't follow a link if it's an empty one, hence
> >> CPU placement information should be provided somehow,
> >> either:  
> >
> > Ah, right, so the issue is determining the socket/core/thread
> > addresses that cpus which aren't yet present will have.
> >  
> >>  * by precreating cpu-package objects with properties that
> >>    would describe it /could be inspected via OQM/  
> >
> > So, we could do this, but I think the natural way would be to have the
> > information for each potential thread in the package.  Just putting,
> > say, "core number" in the package itself assumes more than I'd like
> > about how packages sit in the hierarchy.  Plus, it means that
> > management has a bunch of cases to deal with: package has all the
> > information, package has just a core id, package has just a socket id,
> > and so forth.
> >
> > It is a bit clunky that, when the package is plugged, this information
> > will have to sit parallel to the array of actual thread links.
> >
> > Markus or Andreas, is there a natural way to present a list of (node,
> > socket, core, thread) tuples in the package object?  Preferably
> > without having to create a whole bunch of "potential thread" objects
> > just for the purpose.  
> 
> I'm just a dabbler when it comes to QOM, but I can try.
> 
> I view a concrete cpu-package device (subtype of the abstract
> cpu-package device) as a composite device containing stuff like actual
> cores.
> 
> To create a composite device, you start with the outer shell, then plug
> in components one by one.  Components can be nested arbitrarily deep.
> 
> Perhaps you can define the concrete cpu-package shell in a way that lets
> you query what you need to know from a mere shell (no components
> plugged).
> 
> >> or
> >>  * via QMP/HMP command that would provide the same information
> >>    only without need to precreate anything. The only difference
> >>    is that it allows to use -device/device_add for new CPUs.  
> >
> > I'd be ok with that option as well.  I'd be thinking it would be
> > implemented via a class method on the package object which returns the
> > addresses that its contained threads will have, whether or not they're
> > present right now.  Does that make sense?  
> 
> If you model CPU packages as composite cpu-package devices, then you
> should be able to plug and unplug these with device_add, unless plugging
> them requires complex wiring that can't be done in qdev / device_add,
> yet.
If a cpu-package were a device, it would suffer from the same issues:
'what type name does the package have' & 'what set of properties says
where it is being plugged'. This RFC tries to answer the above questions
for CPU devices directly, letting the board decide what those CPU devices
are (sockets|cores|threads|...) without intermediate cpu-packages.
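
Roughly, the kind of exchange the proposed command aims at (field
names follow the description above; the concrete values, type name
and reply shape are illustrative only, not a final interface):

  -> { "execute": "query-hotpluggable-cpus" }
  <- { "return": [
         { "arch_id": 1, "type": "qemu64-x86_64-cpu",
           "node": 0, "socket": 0, "core": 0, "thread": 1 },
         { "arch_id": 0, "type": "qemu64-x86_64-cpu",
           "node": 0, "socket": 0, "core": 0, "thread": 0,
           "cpu_link": "/machine/unattached/device[3]" }
       ] }

mgmt would then plug the first (possible but not yet present) entry
with device_add, passing the returned placement back as properties
(again, the property names are illustrative):

  -> { "execute": "device_add",
       "arguments": { "driver": "qemu64-x86_64-cpu",
                      "socket": 0, "core": 0, "thread": 1 } }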

Possible cpu-packages would be precreated at machine startup time,
so that mgmt could later flip a 'present' property there to create
the actual CPU objects. At least that's how I've understood David's
interface proposal 'Layer 2: Higher-level'
https://lists.gnu.org/archive/html/qemu-ppc/2016-02/msg00000.html
wrt hotplug.
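
I.e., as I read it, mgmt would inspect a precreated package and then
flip it on with the usual QOM commands, something like this (the
'/machine/cpu-package[2]' path and the 'present' property come from
that proposal and are purely hypothetical, nothing implements them
yet):

  -> { "execute": "qom-set",
       "arguments": { "path": "/machine/cpu-package[2]",
                      "property": "present", "value": true } }
  <- { "return": {} }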

> 
> If that's the case, a general solution for "device needs complex wiring"
> would be more useful than a one-off for CPU packages.
> 
> [...]
> 



