
From: David Gibson
Subject: Re: [Qemu-devel] [RFC PATCH 2/2] numa: Add node_id data in query-hotpluggable-cpus
Date: Tue, 12 Jul 2016 13:27:41 +1000

On Fri, 8 Jul 2016 09:46:00 +0200
Peter Krempa <address@hidden> wrote:

> On Fri, Jul 08, 2016 at 12:23:08 +1000, David Gibson wrote:
> > On Thu,  7 Jul 2016 17:17:14 +0200
> > Peter Krempa <address@hidden> wrote:
> >   
> > > Add a helper that looks up the NUMA node for a given CPU and use it to
> > > fill the node_id in the PPC and X86 impls of query-hotpluggable-cpus.  
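> > > Roughly, such a helper could look like the sketch below (assuming
> > > QEMU's nb_numa_nodes and the numa_info[].node_cpu bitmap from
> > > sysemu/numa.h; the exact name and signature in the patch may
> > > differ):
> > >
> > >     #include "qemu/bitops.h"    /* test_bit() */
> > >     #include "sysemu/numa.h"    /* nb_numa_nodes, numa_info[] */
> > >
> > >     /* Return the NUMA node a CPU index was assigned to via
> > >      * -numa node,cpus=..., or nb_numa_nodes if it is in none. */
> > >     static uint32_t numa_node_for_cpu(int cpu_index)
> > >     {
> > >         uint32_t i;
> > >
> > >         for (i = 0; i < nb_numa_nodes; i++) {
> > >             if (test_bit(cpu_index, numa_info[i].node_cpu)) {
> > >                 break;
> > >             }
> > >         }
> > >         return i;
> > >     }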
> > 
> > 
> > IIUC how the query works, this means that the node id issued by
> > query-hotpluggable-cpus will be echoed back to device_add by libvirt.  
> 
> It will be echoed back, but the problem is that it's configured
> separately ...
> 
> > I'm not sure we actually process that information in the core at
> > present, so I don't know that that's right.
> > 
> > We need to be clear on which direction information is flowing here.
> > Does query-hotpluggable-cpus *define* the NUMA node allocation which is
> > then passed to the core device which implements it.  Or is the NUMA
> > allocation defined elsewhere, and query-hotpluggable-cpus just reports
> > it.  
> 
> Currently (in the pre-hotplug era) the NUMA topology is defined by
> specifying CPU numbers (see previous patch) on the command line:
> 
> -numa node,nodeid=1,cpus=1-5,cpus=8,cpus=11...
> 
> This is then reported to the guest.
> 
> For a machine started with:
> 
> -smp 5,maxcpus=8,sockets=2,cores=2,threads=2
> -numa node,nodeid=0,cpus=0,cpus=2,cpus=4,cpus=6,mem=500
> -numa node,nodeid=1,cpus=1,cpus=3,cpus=5,cpus=7,mem=500
> 
> you get the following topology, which is not really possible on real
> hardware:
> 
> # lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                5
> On-line CPU(s) list:   0-4
> Thread(s) per core:    1
> Core(s) per socket:    2
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 6
> Model name:            QEMU Virtual CPU version 2.5+
> Stepping:              3
> CPU MHz:               3504.318
> BogoMIPS:              7008.63
> Hypervisor vendor:     KVM
> Virtualization type:   full
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              4096K
> NUMA node0 CPU(s):     0,2,4
> NUMA node1 CPU(s):     1,3
> 
> Note that the count of cpus per numa node does not need to be identical.
> 
> Given the above, 'query-hotpluggable-cpus' will need to report the data
> that was configured even if it doesn't make much sense in the real
> world.
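> 
> For illustration, with node_id filled in, the output would look roughly
> like this (a sketch; the exact prop names follow the QAPI schema and
> this patch, e.g. node-id vs node_id):
> 
>   -> { "execute": "query-hotpluggable-cpus" }
>   <- { "return": [
>          { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
>            "props": { "socket-id": 0, "core-id": 0, "thread-id": 0,
>                       "node-id": 0 } },
>          ...
>        ] }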
> 
> I did not try the above on a PPC host, and thus I'm not sure whether
> such a config is allowed there or not.

It's not - although I'm not sure that we actually have something
enforcing this.

However, single cores *must* be in the same NUMA node - there's no way
of reporting to the guest anything finer grained.

> While for hotplugged CPUs it would be possible to plug in the correct
> one using the queried data, with the current approach it's impossible
> to set up the initial vcpus differently.
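> 
> (For hotplug itself that would look roughly like the following - a
> sketch only, with prop names assumed from the query output above:
> 
>   (qemu) device_add qemu64-x86_64-cpu,id=cpu5,socket-id=1,core-id=0,thread-id=1,node-id=1
> 
> There is no equivalent knob for the boot-time vcpus beyond
> -numa cpus=....)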
> 
> Peter
> 
> Note: For libvirt it's a no-go to start a throwaway qemu process just
> to query the information, and thus it's desirable to have a way to
> configure all of this without the need to query with a specific machine
> type/topology.


-- 
David Gibson <address@hidden>
Senior Software Engineer, Virtualization, Red Hat
