
From: Peter Zijlstra
Subject: Re: [Qemu-devel] [RFC PATCH] Exporting Guest RAM information for NUMA binding
Date: Mon, 21 Nov 2011 16:25:26 +0100

On Mon, 2011-11-21 at 20:48 +0530, Bharata B Rao wrote:

> I looked at Peter's recent work in this area.
> (https://lkml.org/lkml/2011/11/17/204)
> 
> It introduces two interfaces:
> 
> 1. ms_tbind() to bind a thread to a memsched(*) group
> 2. ms_mbind() to bind a memory region to memsched group
> 
> I assume the 2nd interface could be used by QEMU to create
> memsched groups for each of guest NUMA node memory regions.

No, you would need both: you'll need to group vcpu threads _and_ some
vaddress space together.

I understand QEMU currently uses a single big anonymous mmap() to
allocate the guest memory. Using this, you could either use multiple
mmap()s or carve the big allocation up into virtual nodes by assigning
different parts to different ms groups.

Example: suppose you want to create a 2-node guest with 8 vcpus. Create
2 ms groups, each with 4 vcpu threads, and assign half of the total
guest mmap to each.

> In the past, Anthony has said that NUMA binding should be done from outside
> of QEMU (http://www.kerneltrap.org/mailarchive/linux-kvm/2010/8/31/6267041)

If you want to expose a sense of virtual NUMA to your guest you really
have no choice there. The only thing you can do externally is run whole
VMs inside one particular node.

> Though that was in a different context, maybe we should re-look at that
> and see if QEMU still sticks to it. I know it's a bit early, but if needed
> we should ask Peter to consider extending ms_mbind() to take a tid parameter
> too, instead of working on the current task by default.

Uh, what for? ms_mbind() works on the current process, not task.

> (*) memsched: An abstraction for representing coupling of threads with virtual
> address ranges. Threads and virtual address ranges of a memsched group are
> guaranteed (?) to be located on the same node.

Yeah, more or less so. We could relax that slightly to allow tasks to
run away from the node for very short periods of time, but basically
that's the provided guarantee.


