From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH V17 00/11] Add support for binding guest numa nodes to host numa nodes
Date: Mon, 09 Dec 2013 18:33:41 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130923 Thunderbird/17.0.9

On 06/12/2013 19:49, Marcelo Tosatti wrote:
>> > You'll have with your patches (without them it's worse of course):
>> > 
>> >    RAM offset    physical address   host node
>> >    0-3840M       0-3840M            host node 0
>> >    4096M-4352M   4096M-4352M        host node 0
>> >    4352M-8192M   4352M-8192M        host node 1
>> >    3840M-4096M   8192M-8448M        host node 1
>> > 
>> > So only 0-3G and 5-8G are aligned, 3G-5G and 8G-8.25G cannot use
>> > gigabyte pages because they are split across host nodes.
> AFAIK the TLB caches virt->phys translations; why are the specifics of 
> a given phys address a factor in TLB caching?

The problem is that "-numa mem" receives memory sizes, and these sizes
do not take the hole below 4G into account.
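
For concreteness, the interface takes nothing but per-node sizes; an
illustrative invocation (the sizes and CPU assignments here are made
up) would be:

    qemu-system-x86_64 -m 8192 -smp 4 \
        -numa node,nodeid=0,cpus=0-1,mem=4096 \
        -numa node,nodeid=1,cpus=2-3,mem=4096

Nothing in there says where each node's memory should land in
guest-physical space; QEMU computes that itself, and the hole shifts
everything above it.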

Thus, two adjacent offsets into the RAM block (two adjacent
ram_addr_t-s) can map to far-apart guest-physical addresses, be
assigned to different guest nodes, and from there to different host
nodes.  In the above example this happens for 3G-5G.
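
To make the arithmetic concrete, here is a small illustrative C
program. The 4G+4G node split and the 3840M hole base are assumptions,
and it models the plain layout (RAM below the hole identity-mapped,
the remainder remapped at 4G) rather than the patched layout shown in
the table above, but the alignment problem it exposes is the same:

    #include <stdio.h>

    #define MB(x) ((unsigned long long)(x) << 20)

    int main(void)
    {
        /* assumed "-numa mem" sizes: two 4G nodes */
        unsigned long long node_size[2] = { MB(4096), MB(4096) };
        unsigned long long hole_base = MB(3840);  /* assumed hole start */
        unsigned long long hole_size = MB(256);   /* hole ends at 4G */
        unsigned long long ram = 0;               /* ram_addr_t cursor */

        for (int n = 0; n < 2; n++) {
            unsigned long long left = node_size[n];
            while (left > 0) {
                unsigned long long gpa, len = left;
                if (ram < hole_base) {
                    gpa = ram;                    /* identity below hole */
                    if (hole_base - ram < len)
                        len = hole_base - ram;
                } else {
                    gpa = ram + hole_size;        /* shifted above 4G */
                }
                printf("node %d: ram %lluM-%lluM -> gpa %lluM-%lluM\n",
                       n, ram >> 20, (ram + len) >> 20,
                       gpa >> 20, (gpa + len) >> 20);
                ram += len;
                left -= len;
            }
        }
        return 0;
    }

Its output puts node 0's tail at guest-physical 4096M-4352M and node 1
right after it at 4352M, so the gigabyte page covering 4G-5G straddles
the two nodes.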

On second thought, this is not particularly important, or at least not
yet.  It's not really possible to control the NUMA policy for
hugetlbfs-allocated memory, right?
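
For reference, here is roughly what one would try: mbind(2) on the
huge-page mapping before the pages are faulted in. The node number and
region size below are made up, and whether the kernel actually honors
such a policy for hugetlbfs allocations is exactly the question:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <numaif.h>        /* mbind(); link with -lnuma */

    int main(void)
    {
        size_t len = 1UL << 30;              /* 1G region, made-up size */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");                  /* needs preallocated hugepages */
            return 1;
        }

        unsigned long nodemask = 1UL << 1;   /* bind to host node 1 */
        /* must happen before the first fault: the node is picked
         * when the page is actually allocated */
        if (mbind(p, len, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) < 0) {
            perror("mbind");
            return 1;
        }

        memset(p, 0, len);                   /* fault pages in under policy */
        return 0;
    }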

>> > So rather than your patches, it seems simpler to just widen the PCI hole
>> > to 1G for i440FX and 2G for q35.
>> > 
>> > What do you think?
> 
> Problem is it's a guest-visible change. To get 1GB TLB entries with
> "legacy guest-visible machine types" (which requires new machine types
> on the host side that are invisible to the guest), that won't work:
> Windows registration invalidation etc.

Yeah, that's a tradeoff to make.

Paolo

> But for q35, sure.