From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC 0/6] enable numa configuration before machine_init() from HMP/QMP
Date: Wed, 18 Oct 2017 17:24:12 +0200

On Wed, 18 Oct 2017 15:49:36 +0100
"Daniel P. Berrange" <address@hidden> wrote:

> On Wed, Oct 18, 2017 at 04:44:35PM +0200, Igor Mammedov wrote:
> > On Wed, 18 Oct 2017 10:59:11 -0200
> > Eduardo Habkost <address@hidden> wrote:
> >   
> > > On Tue, Oct 17, 2017 at 06:18:59PM +0200, Igor Mammedov wrote:  
> > > > On Tue, 17 Oct 2017 17:09:26 +0100
> > > > "Daniel P. Berrange" <address@hidden> wrote:
> > > >     
> > > > > On Tue, Oct 17, 2017 at 06:06:35PM +0200, Igor Mammedov wrote:    
> > > > > > On Tue, 17 Oct 2017 16:07:59 +0100
> > > > > > "Daniel P. Berrange" <address@hidden> wrote:
> > > > > >       
> > > > > > > On Tue, Oct 17, 2017 at 09:27:02AM +0200, Igor Mammedov wrote:    
> > > > > > >   
> > > > > > > > On Mon, 16 Oct 2017 17:36:36 +0100
> > > > > > > > "Daniel P. Berrange" <address@hidden> wrote:
> > > > > > > >         
> > > > > > > > > On Mon, Oct 16, 2017 at 06:22:50PM +0200, Igor Mammedov 
> > > > > > > > > wrote:        
> > > > > > > > > > The series allows configuring the NUMA mapping at runtime
> > > > > > > > > > via the QMP/HMP interface. For that to happen it introduces
> > > > > > > > > > a new '-paused' CLI option, which pauses QEMU before
> > > > > > > > > > machine_init() is run, and adds new set-numa-node HMP/QMP
> > > > > > > > > > commands which, in conjunction with
> > > > > > > > > > info hotpluggable-cpus/query-hotpluggable-cpus, allow
> > > > > > > > > > configuring the NUMA mapping for cpus.
> > > > > > > > > 
> > > > > > > > > What's the problem we're seeking to solve here compared to
> > > > > > > > > what we currently do for NUMA configuration?
> > > > > > > > From RHBZ1382425
> > > > > > > > "
> > > > > > > > The current -numa CLI interface is quite limited in how it
> > > > > > > > maps CPUs to NUMA nodes, as it requires providing cpu_index
> > > > > > > > values which are non-obvious and depend on machine/arch. As a
> > > > > > > > result libvirt has to assume/re-implement the cpu_index
> > > > > > > > allocation logic to provide valid values for the
> > > > > > > > -numa cpus=... QEMU CLI option.
> > > > > > > > "
> > > > > > > 
> > > > > > > In broad terms, this problem applies to every device / object
> > > > > > > libvirt asks QEMU to create. For everything else libvirt is able
> > > > > > > to assign an "id" string, which it can then use to identify the
> > > > > > > thing later. The CPU stuff is different because libvirt isn't
> > > > > > > able to provide 'id' strings for each CPU - QEMU generates a
> > > > > > > pseudo-id internally which libvirt has to infer. The latter is
> > > > > > > the same problem we had with devices before '-device' was
> > > > > > > introduced allowing 'id' naming.
> > > > > > > 
> > > > > > > IMHO we should take the same approach with CPUs and start
> > > > > > > modelling the individual CPUs as something we can explicitly
> > > > > > > create with -object or -device. That way libvirt can assign
> > > > > > > names and does not have to care about CPU index values, and it
> > > > > > > all works just the same way as any other device / object we
> > > > > > > create.
> > > > > > > 
> > > > > > > i.e. instead of:
> > > > > > > 
> > > > > > >   -smp 8,sockets=4,cores=2,threads=1
> > > > > > >   -numa node,nodeid=0,cpus=0-3
> > > > > > >   -numa node,nodeid=1,cpus=4-7
> > > > > > > 
> > > > > > > we could do:
> > > > > > > 
> > > > > > >   -object numa-node,id=numa0
> > > > > > >   -object numa-node,id=numa1
> > > > > > >   -object cpu,id=cpu0,node=numa0,socket=0,core=0,thread=0
> > > > > > >   -object cpu,id=cpu1,node=numa0,socket=0,core=1,thread=0
> > > > > > >   -object cpu,id=cpu2,node=numa0,socket=1,core=0,thread=0
> > > > > > >   -object cpu,id=cpu3,node=numa0,socket=1,core=1,thread=0
> > > > > > >   -object cpu,id=cpu4,node=numa1,socket=2,core=0,thread=0
> > > > > > >   -object cpu,id=cpu5,node=numa1,socket=2,core=1,thread=0
> > > > > > >   -object cpu,id=cpu6,node=numa1,socket=3,core=0,thread=0
> > > > > > >   -object cpu,id=cpu7,node=numa1,socket=3,core=1,thread=0      
> > > > > > the follow-up question would be where "socket=3,core=1,thread=0"
> > > > > > comes from; currently these options are a function of
> > > > > > (-M foo -smp ...) and can be queried via query-hotpluggable-cpus
> > > > > > at runtime, after QEMU parses the -M and -smp options.
> > > > >     
> > > 
> > > Also, note that in the case of NUMA, having identifiers for CPU
> > > objects themselves won't be enough. NUMA settings need
> > > identifiers for CPU slots (even if they are still empty), and
> > > those slots are provided by the machine, not created by the user.
> > > 
> > >   
> > > > > The sockets/cores/threads topology of CPUs is something that
> > > > > comes from the libvirt guest XML config.
> > > > in this case the things libvirt would have to implement are knowing
> > > > the following details:
> > > >    1: which machine/machine version supports which set of attributes
> > > >    2: valid values for these properties depending on machine/machine
> > > >       version/cpu type
> > > 
> > > The big assumption in this series is that libvirt doesn't know in
> > > advance what the possible slots for CPUs will look like on each
> > > machine-type, and needs to query them using
> > > query-hotpluggable-cpus.  
> > yep, that's true, and it started with the introduction of 'device_add cpu',
> > where libvirt didn't know what to specify as options for the new cpu;
> > hence query-hotpluggable-cpus was added to provide that information.
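> > 
> > For reference, a query-hotpluggable-cpus reply looks roughly like this
> > (illustrative output for a pc machine; the exact set of "props" keys
> > depends on the machine type and QEMU version):
> > 
> >   -> { "execute": "query-hotpluggable-cpus" }
> >   <- { "return": [
> >          { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
> >            "props": { "socket-id": 1, "core-id": 0, "thread-id": 0 } },
> >          { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
> >            "props": { "socket-id": 0, "core-id": 0, "thread-id": 0 },
> >            "qom-path": "/machine/unattached/device[0]" } ] }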
> > 
> >   
> > > But if this assumption were really true, it would be impossible
> > > for the user to even decide what the NUMA topology will look like,
> > > wouldn't it?
> > > 
> > > Igor, are you able to give one example of how the user input
> > > (libvirt XML) for configuring NUMA CPU binding would look if
> > > the user didn't yet know what the available sockets/cores/threads
> > > are?  
> > not sure I parse the question, but looking at libvirt's domain docs
> > it mentions
> >   <numa>
> >     <cell id='0' cpus='0-3' memory='512000' unit='KiB'/>
> >     <cell id='1' cpus='4-7' memory='512000' unit='KiB' memAccess='shared'/>
> >   </numa>
> > 
> > here libvirt assumes that there are cpus with cpu-index in the range 0-7
> > (and probably duplicates the logic that calculates cpu-index).
> > If libvirt were to continue duplicating that logic, we could skip
> > implementing the early runtime QMP in QEMU and also drop support for
> > query-hotpluggable-cpus, as libvirt would be able to compute the
> > properties/values on its own.  
> 
> From the POV of the XML, these CPU numbers are *not* required to be
> the same as any QEMU CPU index. This is just saying that we've got
> a <vcpus>8</vcpus> element, and we want the first 4 CPUs in one node
> and the second 4 in the second node. 
> 
> If QEMU assigns CPU indexes 70-77 internally, that's not relevant to
> the XML POV, which uses 0-7 regardless. If there ever was such a
> disjoint representation of CPU indexes, libvirt would have to remap
> what's in the XML to match what's in QEMU.
that's what I'm saying: libvirt has to know which cpu-indexes are valid
to use so that it is able to build a CLI that works:
  "-numa node,nodeid=0,cpus=0-3 -numa node,nodeid=1,cpus=4-7"
and if the algorithm that assigns cpu-indexes were to change on the QEMU
side, it would break libvirt.

now to the newer interface
  "-numa cpu,node-id=0,socket-id=0 -numa cpu,node-id=1,socket-id=1"
libvirt would have to know that socket-id and the values 0-1 are valid.
now moving to spapr
  "-numa cpu,node-id=0,core-id=0 -numa cpu,node-id=1,core-id=8"
here the valid values are not so obvious: core-id values are a function
of "-smp".

this series was written so that mgmt won't have to duplicate the same
logic QEMU uses; libvirt didn't want to maintain such a duplicate, I'd
assume because it's fragile. If libvirt would rather make up valid
properties/values on its own, we can forget about this series.
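To illustrate the intended flow with this series (a rough sketch only;
the exact set-numa-node arguments and the command used to resume are
guesses here and may differ from what the patches implement):

  $ qemu-system-ppc64 -paused -smp 16,sockets=2,cores=1,threads=8 ...
  (qemu) info hotpluggable-cpus                # learn valid core-id values
  (qemu) set-numa-node node,nodeid=0           # argument syntax is a guess
  (qemu) set-numa-node cpu,node-id=0,core-id=0
  (qemu) set-numa-node node,nodeid=1
  (qemu) set-numa-node cpu,node-id=1,core-id=8
  (qemu) cont                                  # let machine_init() run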

> Regards,
> Daniel



