From: Laurent Vivier
Subject: Re: [Qemu-devel] [PATCH] spapr: manage hotplugged devices while the VM is not started
Date: Wed, 31 May 2017 09:12:02 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.1.0

On 31/05/2017 06:35, David Gibson wrote:
> On Tue, May 30, 2017 at 06:04:45PM +0200, Laurent Vivier wrote:
>> For QEMU, a hotplugged device is a device added using the HMP/QMP
>> interface.
>> For SPAPR, a hotplugged device is a device added while the
>> machine is running. In this case QEMU doesn't update its internal
>> state but relies on the OS for that part.
>>
>> In the case of migration, when we (libvirt) hotplug a device
>> on the source guest, we (libvirt) generally hotplug the same
>> device on the destination guest. But in this case, the machine
>> is stopped (RUN_STATE_INMIGRATE) and QEMU must not expect
>> the OS to manage it as a hotplugged device, as it will
>> be "imported" by the migration.
>>
>> This patch changes the meaning of "hotplugged" in spapr.c
>> to manage a QEMU hotplugged device like a "coldplugged" one
>> when the machine is awaiting an incoming migration.
>>
>> Signed-off-by: Laurent Vivier <address@hidden>
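
(For illustration only, not the actual patch: a minimal sketch of the idea
described above. runstate_check(), RUN_STATE_INMIGRATE and the
DeviceState "hotplugged" field are existing QEMU interfaces; the helper
name and its use are hypothetical.)

#include "sysemu/sysemu.h"   /* runstate_check(), RUN_STATE_INMIGRATE */
#include "hw/qdev-core.h"    /* DeviceState */

/* Treat an HMP/QMP hotplug as a coldplug while the machine is only
 * waiting for an incoming migration: the guest-visible state will be
 * imported by the migration stream, so the guest OS must not be asked
 * to handle the device as a hotplug. */
static bool spapr_drc_hotplugged(DeviceState *dev)
{
    return dev->hotplugged && !runstate_check(RUN_STATE_INMIGRATE);
}

The hotplug paths in spapr.c would then key off such a predicate instead
of dev->hotplugged alone.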
> 
> So, I think this is a reasonable concept, at least in terms of
> cleanliness and not doing unnecessary work.  However, if it's fixing
> bugs, I suspect that means we still have problems elsewhere.
> 
> Specifically, what is it we're doing before the incoming migration
> that's breaking things?  Even if it's unnecessary, anything done there
> should be overwritten by the incoming stream.  That should certainly
> be the case (now) for the DRC state variables.  Maybe not for the
> queued hotplug events - but that means we should update the queue
> migration to make sure we clear anything existing on the destination
> before adding migrated events.
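
(To make the queue-clearing suggestion concrete, a rough sketch only; the
type and field names below are assumptions, not the actual spapr code.
The hook would be wired into the event queue's VMStateDescription as
.pre_load.)

/* Sketch: drop anything already queued on the destination before the
 * migrated events are loaded, so the incoming stream replaces local
 * state instead of being appended to it. */
static int spapr_pending_events_pre_load(void *opaque)
{
    sPAPRMachineState *spapr = opaque;
    sPAPREventLogEntry *entry, *next_entry;

    QTAILQ_FOREACH_SAFE(entry, &spapr->pending_events, next, next_entry) {
        QTAILQ_REMOVE(&spapr->pending_events, entry, next);
        g_free(entry->data);
        g_free(entry);
    }
    return 0;
}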
> 
> I'm also concerned by the fact that this makes changes for memory and
> cpu hotplug, but not for PCI devices.  Why aren't they also affected
> by this problem?

There are some specific tests for PCI that change the behavior.

For instance, see hw/ppc/spapr_drc.c,

set_allocation_state():

151     if (drc->type != SPAPR_DR_CONNECTOR_TYPE_PCI) {
152         drc->allocation_state = state;
153         if (drc->awaiting_release &&
154             drc->allocation_state == SPAPR_DR_ALLOCATION_STATE_UNUSABLE) {
155             trace_spapr_drc_set_allocation_state_finalizing(get_index(drc));
156             drck->detach(drc, DEVICE(drc->dev), drc->detach_cb,
157                          drc->detach_cb_opaque, NULL);
158         } else if (drc->allocation_state == SPAPR_DR_ALLOCATION_STATE_USABLE) {
159             drc->awaiting_allocation = false;
160         }
161     }

attach():

394     drc->signalled = (drc->type != SPAPR_DR_CONNECTOR_TYPE_PCI)
395                      ? true : coldplug;
396 
397     if (drc->type != SPAPR_DR_CONNECTOR_TYPE_PCI) {
398         drc->awaiting_allocation = true;
399     }
400 

detach():

442     if (drc->type != SPAPR_DR_CONNECTOR_TYPE_PCI &&
443         drc->allocation_state != SPAPR_DR_ALLOCATION_STATE_UNUSABLE) {
444         trace_spapr_drc_awaiting_unusable(get_index(drc));
445         drc->awaiting_release = true;
446         return;
447     }

and more...

> 
> One nit in the implementation, see below:

I agree, will fix.

Thanks,
Laurent


