From: Thomas Huth
Subject: Re: [Qemu-devel] [RFC PATCH v2 2/2] spapr: Memory hot-unplug support
Date: Fri, 29 Apr 2016 13:01:29 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.0

On 29.04.2016 10:30, Igor Mammedov wrote:
> On Fri, 29 Apr 2016 10:22:03 +0200
> Thomas Huth <address@hidden> wrote:
> 
>> On 29.04.2016 08:59, Bharata B Rao wrote:
>>> On Fri, Apr 29, 2016 at 08:45:37AM +0200, Thomas Huth wrote:  
>>>> On 29.04.2016 05:24, David Gibson wrote:  
>>>>> On Tue, Apr 26, 2016 at 04:03:37PM -0500, Michael Roth wrote:  
>>>> ...  
>>>>>> In the case of pseries, the DIMM abstraction isn't really exposed to
>>>>>> the guest; rather, it's the memory blocks we use to make the backing
>>>>>> memdev memory available to the guest. During unplug, the guest
>>>>>> completely releases these blocks back to QEMU, and if it can only
>>>>>> release a subset of what's requested it does not attempt to recover.
>>>>>> We can potentially change that behavior on the guest side, since
>>>>>> partially-freed DIMMs aren't currently useful on the host-side...
>>>>>>
>>>>>> But, in the case of pseries, I wonder if it makes sense to maybe go
>>>>>> ahead and MADV_DONTNEED the ranges backing these released blocks so the
>>>>>> host can at least partially reclaim the memory from a partially
>>>>>> unplugged DIMM?  
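[For illustration, a minimal sketch of the reclaim idea above -- the
function and parameter names are hypothetical stand-ins, not taken from
the actual patch; hva/block_size would be the host virtual address and
length of a guest-released block:

    #include <sys/mman.h>

    /* Tell the host kernel it may reclaim the pages backing a block the
     * guest has released; the range reads back zero-filled if the guest
     * ever touches it again. */
    static int reclaim_released_block(void *hva, size_t block_size)
    {
        return madvise(hva, block_size, MADV_DONTNEED);
    }
]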
>>>>>
>>>>> Urgh.. I can see the benefit, but I'm a bit uneasy about making the
>>>>> DIMM semantics different in this way on Power.
>>>>>
>>>>> I'm wondering if shoehorning the PAPR DR memory mechanism into the
>>>>> qemu DIMM model was a good idea after all.
>>>>
>>>> Ignorant question (sorry, I really don't have much experience yet here):
>>>> Could we maybe align the size of the LMBs with the size of the DIMMs?
>>>> E.g. make the LMBs bigger or the DIMMs smaller, so that they match?  
>>>
>>> Should work, but the question is what the right size should be so that
>>> we get good hotplug granularity but also don't run out of mem slots,
>>> thereby limiting us on maxmem. I remember you changed the memslots
>>> to 512 in KVM, but we have yet to move up from 32 in QEMU for sPAPR.
>>
>> Half of the slots should be "reserved" for PCI and other stuff, so we
>> could use 256 for memory - that way we would also be on the same level as
>> x86, which also uses 256 memslots here, as far as I know.
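[For scale, assuming one memslot per DIMM-sized LMB: at the current
256 MB block size, 32 slots would cap hotpluggable memory at
32 x 256 MB = 8 GB, while 256 slots would allow 64 GB.]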
>>
>> Anyway, couldn't we simply calculate SPAPR_MEMORY_BLOCK_SIZE
>> dynamically, according to the maxmem and slot values that the user
>> specified, so that SPAPR_MEMORY_BLOCK_SIZE would simply match the DIMM
>> size? ... or is there some constraint that I've missed so that
>> SPAPR_MEMORY_BLOCK_SIZE has to be a compile-time #defined value?
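[A rough sketch of what such a dynamic calculation might look like --
purely illustrative; the function name and the power-of-two rounding
are assumptions, not from any patch:

    #include <stdint.h>
    #include "qemu/host-utils.h"    /* pow2ceil() */

    /* Split maxmem into at most 'slots' pluggable pieces and use that
     * as the LMB/block size, rounded up to a power of two so the usual
     * alignment rules stay simple. */
    static uint64_t spapr_dynamic_block_size(uint64_t maxmem_bytes,
                                             unsigned slots)
    {
        return pow2ceil(maxmem_bytes / slots);
    }
]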
> If you do that, then the possible DIMM size would have to be decided at
> startup and fixed. If a DIMM of the wrong size is plugged in, the machine
> should fail the hotplug request.
> The question is how mgmt will know the fixed DIMM size that sPAPR just
> calculated.

Ok, sorry, I somehow had the mistaken idea in mind that all DIMMs for
hot-plugging should have the same size. That's of course not the case if
we model something similar to DIMM plugging on real hardware. So please
never mind, it was just a wrong assumption on my side.

OTOH, maybe it also does not make sense to always keep the LMB size at
such a small, fixed value. Imagine the user specifies slots=32 and
maxmem=32G ... maybe we should then disallow plugging DIMMs that are
smaller than 1G, so that we could use an LMB size of 1G in this case?
(Plugging DIMMs of different sizes > 1G would then still be allowed, of
course.)
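[Just to make the proposed check concrete, with made-up names -- reject
a DIMM that is smaller than, or not a multiple of, the LMB size chosen
at startup:

    #include <stdbool.h>
    #include <stdint.h>

    /* With slots=32 and maxmem=32G the LMB size would come out as
     * 32G / 32 = 1G, so a 512M DIMM is rejected while 1G, 2G, 3G ...
     * DIMMs pass. */
    static bool spapr_dimm_size_valid(uint64_t dimm_size, uint64_t lmb_size)
    {
        return dimm_size >= lmb_size && (dimm_size % lmb_size) == 0;
    }
]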

 Thomas



