From: David Gibson
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 3/4] spapr: disable hotplugging without OS
Date: Wed, 31 May 2017 16:36:34 +1000
User-agent: Mutt/1.8.0 (2017-02-23)

On Tue, May 30, 2017 at 12:15:59PM -0500, Michael Roth wrote:
> Quoting David Gibson (2017-05-24 22:16:26)
> > On Wed, May 24, 2017 at 12:40:37PM -0500, Michael Roth wrote:
> > > Quoting Laurent Vivier (2017-05-24 11:02:30)
> > > > On 24/05/2017 17:54, Greg Kurz wrote:
> > > > > On Wed, 24 May 2017 12:14:02 +0200
> > > > > Igor Mammedov <address@hidden> wrote:
> > > > > 
> > > > >> On Wed, 24 May 2017 11:28:57 +0200
> > > > >> Greg Kurz <address@hidden> wrote:
> > > > >>
> > > > >>> On Wed, 24 May 2017 15:07:54 +1000
> > > > >>> David Gibson <address@hidden> wrote:
> > > > >>>   
> > > > >>>> On Tue, May 23, 2017 at 01:18:11PM +0200, Laurent Vivier wrote:    
> > > > >>>>> If the OS has not started, QEMU sends an event to the OS
> > > > >>>>> that is lost and cannot be recovered. An unplug is then not
> > > > >>>>> able to restore QEMU to a coherent state.
> > > > >>>>> So, while the OS has not started, disable CPU and memory hotplug.
> > > > >>>>> We use option vector 6 to know whether the OS has started.
> > > > >>>>>
> > > > >>>>> Signed-off-by: Laurent Vivier <address@hidden>      
> > > > >>>>
> > > > >>>> Urgh.. I'm not terribly confident that this is really correct.  As
> > > > >>>> discussed on the previous patch, you're essentially using OV6 as a
> > > > >>>> flag that CAS is complete.
> > > > >>>>
> > > > >>>> But while it undoubtedly makes the race window much smaller, I
> > > > >>>> don't see that there's any guarantee the guest OS will really be
> > > > >>>> able to handle hotplug events immediately after CAS.
> > > > >>>>
> > > > >>>> In particular, if the CAS process completes partially but then
> > > > >>>> needs to trigger a reboot, I think that would end up setting the
> > > > >>>> OV6 variable, but the OS would definitely not be in a state to
> > > > >>>> accept events.
> > > > >> wouldn't the guest on reboot pick up the updated FDT and online the
> > > > >> CPU that was hotplugged before the crash, along with the initial CPUs?
> > > > >>
> > > > > 
> > > > > Yes, and that's what actually happens with CPUs.
> > > > > 
> > > > > But catching up on the background for this series, I have the
> > > > > impression that the issue isn't that we lose an event if the OS
> > > > > isn't started (which is not true), but more that something goes
> > > > > wrong when hotplugging+unplugging memory, as described in this commit:
> > > > > 
> > > > > commit fe6824d12642b005c69123ecf8631f9b13553f8b
> > > > > Author: Laurent Vivier <address@hidden>
> > > > > Date:   Tue Mar 28 14:09:34 2017 +0200
> > > > > 
> > > > >     spapr: fix memory hot-unplugging
> > > > > 
> > > > 
> > > > Yes, this commit tries to fix that, but it's not possible. Some objects
> > > > remain in memory: you can see with "info cpus" or "info memory-devices"
> > > > that they are not really removed, and this prevents hotplugging them
> > > > again; moreover, in the case of memory hot-unplug, we can rerun
> > > > device_del and crash QEMU (as before the fix).
> > > > 
> > > > Moreover, the things normally cleared in detach() are not cleared, and
> > > > we can't do it later in set_allocation_state() because some of them are
> > > > still in use by the kernel, and that is the last call we get from the kernel.
> > > 
> > > Focusing on the hotplug/add case, it's a bit odd that the guest would be
> > > using the memory even though the hotplug event is clearly still sitting
> > > in the queue.
> > > 
> > > I think part of the issue is us not having a clear enough distinction in
> > > the code between what constitutes the need for "boot-time" handling vs.
> > > "hotplug" handling.
> > > 
> > > We have this hook in spapr_add_lmbs:
> > > 
> > >     if (!dev->hotplugged) {
> > >         /* guests expect coldplugged LMBs to be pre-allocated */
> > >         drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_USABLE);
> > >         drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_UNISOLATED);
> > >     }
> > > 
> > > Whereas the default allocation/isolation state for LMBs in spapr_drc.c is
> > > UNUSABLE/ISOLATED, which is what covers the dev->hotplugged == true case.
> > > 
> > > I need to spend some time testing to confirm, but trying to walk through
> > > the various scenarios looking at the code:
> > > 
> > > case 1)
> > > 
> > > If the hotplug occurs before reset (not sure how likely this is), the
> > > event will get dropped by the reset handler, and the DRC stuff will be
> > > left in UNUSABLE/ISOLATED. I think it's more appropriate to treat this as
> > > "boot-time" and set it to USABLE/UNISOLATED like the !dev->hotplugged
> > > case.
> > 
> > Right.  It looks like we might need to go through all DRCs and sanitize
> > their state at reset time.  Essentially whatever their state before
> > the reset, they should appear as cold-plugged after the reset, I
> > think.
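For concreteness, a minimal sketch of that reset-time sanitization, using
the class hooks and state constants already quoted in this thread (assumes
QEMU's hw/ppc/spapr_drc.h definitions; the per-DRC walk it would be called
from, e.g. a spapr_drc_foreach(), is hypothetical):

    /* Sketch: force every DRC into a state that looks cold-plugged
     * (device present) or pristine (no device), whatever state the
     * pre-reset hotplug machinery left it in. */
    static void spapr_drc_reset_one(sPAPRDRConnector *drc, void *opaque)
    {
        sPAPRDRConnectorClass *drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);

        if (drc->dev) {
            /* Device attached: the guest should see it as cold-plugged. */
            drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_USABLE);
            drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_UNISOLATED);
        } else {
            /* Empty connector: back to the hot-pluggable defaults. */
            drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_UNUSABLE);
            drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_ISOLATED);
        }
    }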
> > 
> > > case 2)
> > > 
> > > If the hotplug occurs after reset, but before CAS,
> > > spapr_populate_drconf_memory will be called to populate the DT with all
> > > active LMBs. AFAICT, for hotplugged LMBs it marks everything where
> > > memory_region_present(get_system_memory(), addr) == true as
> > > SPAPR_LMB_FLAGS_ASSIGNED. Since the region is mapped regardless of
> > > whether the guest has acknowledged the hotplug, I think this would end
> > > up presenting the LMB as having been present at boot time. However, they
> > > will still be in the UNUSABLE/ISOLATED state because dev->hotplugged ==
> > > true.
> > > 
> > > I would think that the delayed hotplug event would move them to the
> > > appropriate state later, allowing the unplug to succeed, but it's
> > > totally possible the guest code bails out during the hotplug path since
> > > it already has the LMB marked as being in use via the CAS-generated DT.
> > > 
> > > So it seems like we need to either:
> > > 
> > > a) not mark these LMBs as SPAPR_LMB_FLAGS_ASSIGNED in the DT and let
> > > them get picked up by the deferred hotplug event (which seems to also
> > > be in need of an extra IRQ pulse, given that it's not getting picked up
> > > until later), or
> > > 
> > > b) let them get picked up as boot-time LMBs and add a CAS hook to move
> > > the state to USABLE/UNISOLATED at that point. Optionally, we could also
> > > purge any pending hotplug events from the event queue, but that gets
> > > weird if we have subsequent unplug events and whatnot sitting there as
> > > well. Hopefully, letting the guest process the hotplug event later and
> > > possibly fail still leaves us in a recoverable state where we can
> > > complete the unplug after boot.
> > > 
> > > Does this seem like an accurate assessment of the issues you're seeing?
> > 
> > It seems plausible from my limited understanding of the situation.
> > The variety of possible state transitions in the PAPR hotplug model
> > hurts my brain.
> > 
> > I think plan (a) sounds simpler than plan (b).  Basically any hotplug
> > events that occur between reset and CAS we want to queue until CAS is
> > complete.  AIUI we're already effectively queuing the event that goes
> > to the guest, but we've already - incorrectly - made some QEMU-side
> > state changes that show up in the DT fragments handed out by CAS.
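To illustrate plan (a), the DT-building logic described above would
withhold the ASSIGNED flag for LMBs whose hotplug event is still queued.
This paraphrases the spapr_populate_drconf_memory behaviour as
characterized earlier in the thread; lmb_hotplug_event_pending() is a
hypothetical predicate that would consult the pending-event queue:

    /* Sketch: only advertise an LMB as assigned in the CAS-generated DT
     * if the guest has actually processed its hotplug; otherwise leave
     * it unassigned and let the deferred hotplug event pick it up. */
    if (memory_region_present(get_system_memory(), addr) &&
        !lmb_hotplug_event_pending(drc)) {
        flags |= SPAPR_LMB_FLAGS_ASSIGNED;
    }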
> 
> I agree. The one thing I'm a bit iffy on is why the guest is missing
> the interrupt (or at least the handling of it) for the initially-queued
> events; if we go this route, we need to make sure the guest acts on them
> as part of boot.
> 
> I assume pending interrupts get dropped by CAS because the guest doesn't
> initialize the hotplug interrupt handler until that point. If that's the
> case, a CAS hook to scan through the event queue to re-signal if needed
> would hopefully do the trick, but I'm still a bit uncertain about whether
> that's sufficient.
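A sketch of that hook, assuming pending events sit on the machine state's
queue as they do for the RTAS event log; spapr_hotplug_req_resignal() is a
hypothetical helper standing in for whatever the existing event code uses
to raise the hotplug event interrupt:

    /* Sketch: at CAS time, if hotplug events are still queued, pulse the
     * event interrupt again so a guest that has only just installed its
     * handler notices them. */
    static void spapr_cas_resignal_pending_events(sPAPRMachineState *spapr)
    {
        if (!QTAILQ_EMPTY(&spapr->pending_events)) {
            spapr_hotplug_req_resignal(spapr);  /* hypothetical */
        }
    }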

Have we confirmed that events are actually being dropped here, and that
it's not just that once the guest gets to them, other state is incorrect,
meaning they don't get processed as expected?

> If it's something we can't do deterministically, we might need to consider
> plan (b).
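For reference, plan (b)'s hook would be the CAS-time counterpart of the
reset sketch earlier: run from the CAS path (h_client_architecture_support),
walk the DRCs, and promote anything hotplugged before CAS to the
cold-plugged-equivalent state so the DRC state agrees with the
CAS-generated DT (same hypothetical per-DRC walk as before):

    /* Sketch: called once CAS completes.  A DRC with a device attached
     * but still in the hotplug-default UNUSABLE/ISOLATED state is
     * promoted to match its SPAPR_LMB_FLAGS_ASSIGNED entry in the DT. */
    static void spapr_drc_cas_fixup(sPAPRDRConnector *drc, void *opaque)
    {
        sPAPRDRConnectorClass *drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);

        if (drc->dev) {
            drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_USABLE);
            drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_UNISOLATED);
        }
    }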
> 
> > 
> > Can we just, in general, postpone the QEMU-side updates until the
> > hotplug event is presented to the guest, rather than when it's
> > submitted from the host?  Or will that raise a different bunch of
> > problems?
> 
> It seems like that might be problematic for migration.

Why?  We're now migrating the contents of the event queue...

> Not updating the device tree with LMBs still pending delivery of a
> hotplug event during CAS seems fairly easy. We generate a new DT fragment
> for the LMB at hotplug time anyway, so I think we can safely "throw away"
> the updates and not worry about tracking any additional intermediate
> state.
> 
> Going to the extent of delaying the call to pc_dimm_memory_plug would be
> problematic, though, I think. We would need a hook to make sure the call
> is made if CAS completes after migration, and for cases where we do
> migration by re-creating DIMMs via the command line, we'd need some way
> to synchronize state for these "pending" DIMMs, else that deferred call
> to pc_dimm_memory_plug will probably generate errors due to duplicate
> DIMM mappings (and if we relax/ignore those errors, we still have the
> original issue with CAS picking up the LMBs prematurely). This ends up
> seeming really similar to the stuff that necessitated DRC migration,
> which we probably want to avoid if possible.

Hm, ok.  My brain hurts.  Any thoughts on what the next logical step
should be?

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
