From: Eduardo Habkost
Subject: Re: [Qemu-devel] [PATCH V4 00/10] Add support for binding guest numa nodes to host numa nodes
Date: Thu, 11 Jul 2013 10:10:00 -0300
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Jul 11, 2013 at 06:32:48PM +0800, Peter Huang(Peng) wrote:
> Hi, Wanlong
> 
> From the patch description below, it seems that QEMU NUMA only
> supports CPU/memory node binding.
> As we know, binding is not the common usage, since VM migration may
> happen or load balancing would be disabled.
> So, do we have any plan for generating the virtual NUMA topology
> automatically?
> 
> For example, if we create a 16-vCPU VM on a physical box with four
> 8-core nodes, we could automatically place it on two physical nodes,
> not by binding.

Do you mean automatically generating the NUMA topology configuration for
the VM, or automatically migrating between physical nodes?

The guest-visible NUMA topology is part of the VM configuration. If
automatically creating a config optimized for a specific host is
desired, that's a task for other tools (like libvirt, or tools built on
top of libvirt).
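
For reference, this is the kind of thing libvirt can already express;
a guest NUMA topology in its domain XML looks roughly like this (the
cell attributes here are from memory, so check the libvirt docs):

    <cpu>
      <numa>
        <cell cpus='0-7' memory='8388608'/>
        <cell cpus='8-15' memory='8388608'/>
      </numa>
    </cpu>

A management tool can inspect the host topology and generate such a
config (and the matching -numa options) automatically.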

If you are talking about automatic migration of guest RAM and VCPUs,
eventually the kernel may be able to do it efficiently, but we don't
even have performance numbers for manually-tuned static binding setups
to compare with (because static binding is not possible yet).


> 
> On 2013-07-04 17:53, Wanlong Gao wrote:
> > As you know, QEMU can't direct its memory allocation now; this may
> > cause cross-node access performance regressions in the guest.
> > And the worse thing is that if PCI passthrough is used, the
> > directly attached device uses DMA transfers between the device and
> > the QEMU process, so all of the guest's pages will be pinned by
> > get_user_pages().
> >
> > KVM_ASSIGN_PCI_DEVICE ioctl
> >   kvm_vm_ioctl_assign_device()
> >     =>kvm_assign_device()
> >       => kvm_iommu_map_memslots()
> >         => kvm_iommu_map_pages()
> >            => kvm_pin_pages()
> >
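
The pinning above boils down to taking an extra reference on every
guest page. A minimal sketch of the idea (simplified from memory, not
the exact kernel code):

    #include <linux/mm.h>   /* get_user_pages_fast(), struct page */

    /* Each guest page is looked up and its refcount raised; the
     * reference is held for as long as the device stays assigned,
     * so the page migration code refuses to move the page, and
     * AutoNUMA balancing is defeated as well. */
    static int pin_guest_pages(unsigned long hva, int npages,
                               struct page **pages)
    {
        return get_user_pages_fast(hva, npages, 1 /* write */, pages);
    }
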
> > So, with a directly attached device, every guest page's reference
> > count is incremented and page migration will not work. AutoNUMA
> > won't work either.
> >
> > So, we should set the guest nodes' memory allocation policy before
> > the pages are actually mapped.
> >
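
Concretely, "set the policy before the pages are mapped" means doing
something like the following libnuma-level call on each guest node's
RAM before it is first touched (a minimal sketch; the RAM pointer,
length, and node number are placeholders, not the patch's actual code):

    #include <stddef.h>
    #include <numaif.h>   /* mbind(2); link with -lnuma */

    static int bind_guest_node_memory(void *ram, size_t len, int host_node)
    {
        unsigned long nodemask = 1UL << host_node;

        /* MPOL_BIND takes effect when a page is first faulted in,
         * so issuing it before the guest runs (or before a pinning
         * ioctl touches the range) makes the pages come from the
         * chosen host node. */
        return mbind(ram, len, MPOL_BIND,
                     &nodemask, sizeof(nodemask) * 8, 0);
    }
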
> > With this patch set, we are able to set the guest nodes' memory
> > policy like the following:
> >
> >  -numa node,nodeid=0,mem=1024,cpus=0,mem-policy=membind,mem-hostnode=0-1
> >  -numa node,nodeid=1,mem=1024,cpus=1,mem-policy=interleave,mem-hostnode=1
> >
> > This supports a format like
> > "mem-policy={membind|interleave|preferred},mem-hostnode=[+|!]{all|N-N}".
> >
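
Presumably the [+|!] prefixes follow libnuma's nodestring convention,
where "!" inverts the node set and "+" makes it relative to the task's
allowed nodes; if so, hypothetical examples would be:

    -numa node,nodeid=0,mem=1024,cpus=0,mem-policy=membind,mem-hostnode=!1
    -numa node,nodeid=1,mem=1024,cpus=1,mem-policy=interleave,mem-hostnode=all
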
> > And patch 8/10 adds a QMP command "set-mpol" to set the memory
> > policy for every guest node:
> >     set-mpol nodeid=0 mem-policy=membind mem-hostnode=0-1
> >
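
On the wire, that presumably corresponds to QMP JSON along these lines
(the argument names are inferred from the description above, so treat
this as a sketch rather than the command's actual schema):

    { "execute": "set-mpol",
      "arguments": { "nodeid": 0,
                     "mem-policy": "membind",
                     "mem-hostnode": "0-1" } }
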
> > And patch 9/10 adds a monitor command "set-mpol" whose format looks like:
> >     set-mpol 0 mem-policy=membind,mem-hostnode=0-1
> >
> > And with patch 10/10, we can get the current memory policy of each
> > guest node using the monitor command "info numa", for example:
> >
> >     (qemu) info numa
> >     2 nodes
> >     node 0 cpus: 0
> >     node 0 size: 1024 MB
> >     node 0 mempolicy: membind=0,1
> >     node 1 cpus: 1
> >     node 1 size: 1024 MB
> >     node 1 mempolicy: interleave=1
> >
> >
> > V1->V2:
> >     change to use QemuOpts in numa options (Paolo)
> >     handle Error in mpol parser (Paolo)
> >     change qmp command format to mem-policy=membind,mem-hostnode=0-1 like (Paolo)
> > V2->V3:
> >     also handle Error in cpus parser (5/10)
> >     split out common parser from cpus and hostnode parser (Bandan 6/10)
> > V3->V4:
> >     rebase to request for comments
> >
> >
> > Bandan Das (1):
> >   NUMA: Support multiple CPU ranges on -numa option
> >
> > Wanlong Gao (9):
> >   NUMA: Add numa_info structure to contain numa nodes info
> >   NUMA: Add Linux libnuma detection
> >   NUMA: parse guest numa nodes memory policy
> >   NUMA: handle Error in cpus, mpol and hostnode parser
> >   NUMA: split out the common range parser
> >   NUMA: set guest numa nodes memory policy
> >   NUMA: add qmp command set-mpol to set memory policy for NUMA node
> >   NUMA: add hmp command set-mpol
> >   NUMA: show host memory policy info in info numa command
> >
> >  configure               |  32 ++++++
> >  cpus.c                  | 143 +++++++++++++++++++++++-
> >  hmp-commands.hx         |  16 +++
> >  hmp.c                   |  35 ++++++
> >  hmp.h                   |   1 +
> >  hw/i386/pc.c            |   4 +-
> >  hw/net/eepro100.c       |   1 -
> >  include/sysemu/sysemu.h |  20 +++-
> >  monitor.c               |  44 +++++++-
> >  qapi-schema.json        |  15 +++
> >  qemu-options.hx         |   3 +-
> >  qmp-commands.hx         |  35 ++++++
> >  vl.c                    | 285 +++++++++++++++++++++++++++++++++++-------------
> >  13 files changed, 553 insertions(+), 81 deletions(-)
> >

-- 
Eduardo


