Re: [Qemu-devel] [RFC PATCH v2 2/2] spapr: Memory hot-unplug support


From: Nathan Fontenot
Subject: Re: [Qemu-devel] [RFC PATCH v2 2/2] spapr: Memory hot-unplug support
Date: Thu, 24 Mar 2016 09:15:58 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

On 03/22/2016 10:22 PM, David Gibson wrote:
> On Wed, Mar 16, 2016 at 10:11:54AM +0530, Bharata B Rao wrote:
>> On Wed, Mar 16, 2016 at 12:36:05PM +1100, David Gibson wrote:
>>> On Tue, Mar 15, 2016 at 10:08:56AM +0530, Bharata B Rao wrote:
>>>> Add support to hot remove pc-dimm memory devices.
>>>>
>>>> Signed-off-by: Bharata B Rao <address@hidden>
>>>
>>> Reviewed-by: David Gibson <address@hidden>
>>>
>>> Looks correct, but again, needs to wait on the PAPR change.
>>>
>>> Have you thought any further on the idea of sending an index message,
>>> then a count message as an interim approach to fixing this without
>>> requiring a PAPR change?
>>
>> Removal by index and removal by count are valid messages by themselves,
>> and drmgr would go ahead and start the removal in response to those
>> calls. IIUC, you are suggesting that we remove one LMB by index in
>> response to the 1st message and remove (count - 1) LMBs starting from
>> where the last removal was done by the previous message.
> 
> Yes, that's the idea.
> 
>> Since the same powerpc-utils code base works on PowerVM too, I am not
>> sure if such an approach would impact PowerVM in any undesirable manner.
>> Maybe Nathan can clarify?

The issue I see with this approach is that there is no way in the current
drmgr code to correlate the two memory remove requests. If I understand
correctly what you are asking to do, this would result in two separate
invocations of drmgr: the first would remove a specific LMB by index and
save that index somewhere, and a second invocation would then retrieve
the saved index and remove count - 1 LMBs.
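
To illustrate the correlation problem, something like the following would
be needed; this is purely a hypothetical sketch, not actual drmgr code, and
the state file path and helper names are made up for illustration:

/* Hypothetical sketch only, not actual drmgr code: one way the first
 * invocation could persist the DRC index of the LMB it removed so that
 * a later remove-by-count invocation can continue from there. */
#include <stdio.h>
#include <inttypes.h>

#define LAST_LMB_STATE_FILE "/var/run/drmgr_last_lmb"	/* assumed path */

/* First invocation: record the DRC index of the LMB just removed. */
static int save_last_removed_index(uint32_t drc_index)
{
	FILE *fp = fopen(LAST_LMB_STATE_FILE, "w");

	if (!fp)
		return -1;

	fprintf(fp, "%" PRIx32 "\n", drc_index);
	fclose(fp);
	return 0;
}

/* Second invocation: retrieve the saved index, or fail if none exists. */
static int load_last_removed_index(uint32_t *drc_index)
{
	FILE *fp = fopen(LAST_LMB_STATE_FILE, "r");
	int rc = -1;

	if (fp) {
		if (fscanf(fp, "%" SCNx32, drc_index) == 1)
			rc = 0;
		fclose(fp);
	}

	return rc;
}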

Would there be anything tying these two requests together? Or would we
assume that two requests received in this order are correlated?

What happens if another request comes in between these two requests?
I see this as a pretty rare possibility, but it is a possibility.

> 
> Heard anything from Nathan?  I don't really see how it would be bad
> under PowerVM.  Under PowerVM it generally doesn't matter which LMBs
> you remove, right?  So removing the ones immediately after the last
> one you removed should be as good a choice as any.

This shouldn't hurt anything for PowerVM systems. In general, the only
time a specific LMB is specified on PowerVM systems is for memory guard
operations.

> 
>> I see that this can be done, but the changes in the drmgr code, especially
>> the code related to LMB list handling/removal, can be non-trivial. So I am
>> not sure the temporary approach is worth it here, and hence I feel it is
>> better to wait and do it the count-indexed way.
> 
> Really?  drmgr is already scanning LMBs to find ones it can remove.
> Seeding that scan with the last removed LMB shouldn't be too hard.

This shouldn't be difficult to implement in the drmgr code. We already
search a list of LMBs to find ones to remove; updating that search to
return the LMB with the next sequential index should be straightforward.
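
Roughly along these lines; this is only a sketch, and the struct layout and
field names are assumptions, not the actual powerpc-utils data structures:

/* Rough sketch, not actual powerpc-utils code: walk the LMB list and
 * return the first removable LMB whose DRC index follows the one that
 * was removed last. */
#include <stdint.h>
#include <stddef.h>

struct lmb {
	uint32_t drc_index;	/* DRC index of this LMB */
	int removable;		/* non-zero if this LMB can be removed */
	struct lmb *next;
};

/* Return the next removable LMB after last_index, or NULL if none. */
static struct lmb *next_removable_lmb(struct lmb *lmbs, uint32_t last_index)
{
	struct lmb *lmb;

	for (lmb = lmbs; lmb; lmb = lmb->next) {
		if (lmb->removable && lmb->drc_index > last_index)
			return lmb;
	}

	return NULL;
}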

-Nathan

> 
>> While we are here, I would also like to get some opinions on the real
>> need for memory unplug. Is there anything that memory unplug gives us
>> which memory ballooning (shrinking memory via ballooning) can't give?
> 
> Hmm.. that's an interesting question.  
> 



