qemu-discuss

use of hugepages on NUMA system


From: Roman Mashak
Subject: use of hugepages on NUMA system
Date: Wed, 27 Jan 2021 11:02:42 -0500

I have a machine with 4 NUMA nodes that is booted with the kernel boot
parameter default_hugepagesz=1G. I'm launching QEMU through libvirt, and
I can see that QEMU starts with the following parameters:

-m 65536 ... -mem-prealloc -mem-path /mnt/hugepages/libvirt/qemu

i.e. it starts the virtual machine with 64 GB of memory and asks it to
allocate the guest memory from a temporarily created file under
/mnt/hugepages/libvirt/qemu.
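
For reference, the memory part of the domain XML that (as far as I
understand) produces these flags is roughly the following; the actual
mount path in -mem-path comes from the host's hugetlbfs mount as libvirt
sees it, so this is only a sketch:

    <memory unit='KiB'>67108864</memory>
    <memoryBacking>
      <hugepages/>   <!-- back guest RAM with the host's default hugepage size (1G here) -->
    </memoryBacking>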

Since my VM workload is pinned to a set of cores on a single NUMA node
(node 0) using the <vcpupin> element, I thought it would be a good idea
to force QEMU to allocate memory from the same NUMA node:

<numatune>
   <memory mode="strict" nodeset="0"/>
</numatune>
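
(The pinning itself is done with a <cputune> block along these lines;
the vCPU count and host CPU IDs below are only illustrative, the real
ones are the cores that belong to node 0 on this host:)

    <vcpu placement='static'>8</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      ...
    </cputune>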

However, this didn't work; QEMU reported an error in its log:

os_mem_prealloc insufficient free host memory pages available to
allocate guest ram

I'm not sure why this happened. The system memory info and NUMA stats
confirm that the host's 512 GB of memory is split equally across the
NUMA nodes, and hugepages are also distributed equally across the nodes:

% fgrep Huge /proc/meminfo
AnonHugePages:    270336 kB
ShmemHugePages:        0 kB
HugePages_Total:     113
HugePages_Free:       49
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        118489088 kB
%
% numastat -cm -p `pidof qemu-system-x86_64`
Per-node process memory usage (in MBs) for PID 3365 (qemu-system-x86)
         Node 0 Node 1 Node 2 Node 3 Total
         ------ ------ ------ ------ -----
Huge      29696   7168      0  28672 65536
Heap          0      0      0     31    31
Stack         0      0      0      0     0
Private       4      9      4    305   322
-------  ------ ------ ------ ------ -----
Total     29700   7177      4  29008 65889
...
                 Node 0 Node 1 Node 2 Node 3  Total
                 ------ ------ ------ ------ ------
MemTotal         128748 129017 129017 129004 515785
MemFree           98732  97339 100060  95848 391979
MemUsed           30016  31678  28957  33156 123807
...
AnonHugePages         0      4      0    260    264
HugePages_Total   29696  28672  28672  28672 115712
HugePages_Free        0  21504  28672      0  50176
HugePages_Surp        0      0      0      0      0
%
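
In case it's useful, I can also read the per-node 1G page counts
directly from sysfs, which is how I interpret the numbers above (the
path assumes the usual sysfs layout for 1G pages):

    % for n in 0 1 2 3; do
        d=/sys/devices/system/node/node$n/hugepages/hugepages-1048576kB
        echo "node$n: total=$(cat $d/nr_hugepages) free=$(cat $d/free_hugepages)"
      done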

Any hints on the reason for this behaviour?
Thanks!


