From: Michael Roth
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH v9 4/6] hw/ppc/spapr.c: migrate pending_dimm_unplugs of spapr state
Date: Mon, 15 May 2017 22:02:28 -0500
User-agent: alot/0.5.1

Quoting Daniel Henrique Barboza (2017-05-12 14:54:57)
> 
> 
> On 05/12/2017 03:12 AM, David Gibson wrote:
> > On Fri, May 05, 2017 at 05:47:44PM -0300, Daniel Henrique Barboza wrote:
> >> To allow a DIMM unplug event to resume its work if a migration
> >> occurs in the middle of it, this patch migrates the non-empty
> >> pending_dimm_unplugs QTAILQ that stores the DIMM information
> >> that the spapr_lmb_release() callback uses.
> >>
> >> An approach was considered where the DIMM states would be restored
> >> in post_load after a migration. The problem is that there is no way
> >> of knowing, from the sPAPRMachineState, whether a given DIMM is going
> >> through an unplug process, and the callback needs the updated DIMM
> >> state.
> >>
> >> We could migrate a flag indicating that there is an unplug event going
> >> on for a certain DIMM, fetching this information at the start of the
> >> spapr_del_lmbs call. But this would also require a scan in post_load to
> >> figure out how many nr_lmbs are left. At that point we can just migrate
> >> the nr_lmbs information as well, given that it is already being
> >> calculated in spapr_del_lmbs, and spare a scanning/discovery pass in
> >> post_load. Everything we need is inside the sPAPRDIMMState structure
> >> that is added to the pending_dimm_unplugs queue at the start of
> >> spapr_del_lmbs, so it's convenient to just migrate this queue if it's
> >> not empty.
> >>
> >> Signed-off-by: Daniel Henrique Barboza <address@hidden>
> > NACK.
> >
> > As I believe I suggested previously, you can reconstruct this state on
> > the receiving side by doing a full scan of the DIMM and LMB DRC states.
> 
> Just had an idea that I think is in line with what you're suggesting.
> Given that the information we need is only created in spapr_del_lmbs
> (as per patch 1), we can use the absence of this information in the
> release callback as a sort of flag: an indication that a migration got
> in the way and we need to reconstruct the nr_lmbs state again, using
> the same scanning function I used in v8.
> 
> The flow would be like this (considering the changes in the
> previous 3 patches so far):
> 
> ------------
> 
> /* Callback to be called during DRC release. */
> void spapr_lmb_release(DeviceState *dev)
> {
>       HotplugHandler *hotplug_ctrl;
> 
>       uint64_t addr = spapr_dimm_get_address(PC_DIMM(dev));
>       sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>       sPAPRDIMMState *ds = spapr_pending_dimm_unplugs_find(spapr, addr);
> 
>      /* No DIMM state found in spapr - re-create it to find out how many
>       * LMBs are left. */
>      if (ds == NULL) {
>          uint32_t nr_lmbs = ***call_scanning_LMB_DRCs_function(dev)***;
>          /* recreate the sPAPRDIMMState element and add it back to spapr */
>      }
> 
>      /* ( resume callback as usual ) */
> 
> -----------
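
Just to make the idea concrete, here is a rough sketch of what the
***call_scanning_LMB_DRCs_function*** placeholder might expand to: walk the
DIMM's address range one LMB at a time and count the DRCs that have not been
released yet. This is only a sketch, not the v8 code:
spapr_dimm_count_remaining_lmbs() and lmb_drc_is_still_attached() are made-up
names, spapr_dimm_get_address() is reused from the pseudocode above, and the
actual DRC-state check depends on what the DRC code exposes.

static uint32_t spapr_dimm_count_remaining_lmbs(DeviceState *dev)
{
    PCDIMMDevice *dimm = PC_DIMM(dev);
    uint64_t addr = spapr_dimm_get_address(dimm);
    uint64_t size = object_property_get_int(OBJECT(dimm), PC_DIMM_SIZE_PROP,
                                            &error_abort);
    uint32_t nr_lmbs = size / SPAPR_MEMORY_BLOCK_SIZE;
    uint32_t remaining = 0;
    uint32_t i;

    for (i = 0; i < nr_lmbs; i++) {
        sPAPRDRConnector *drc =
            spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_LMB,
                                     addr / SPAPR_MEMORY_BLOCK_SIZE);
        g_assert(drc);
        /* lmb_drc_is_still_attached() is a stand-in for whatever DRC-state
         * check tells us this LMB has not been released yet. */
        if (lmb_drc_is_still_attached(drc)) {
            remaining++;
        }
        addr += SPAPR_MEMORY_BLOCK_SIZE;
    }
    return remaining;
}

The ds == NULL branch in the callback above would then allocate a fresh
sPAPRDIMMState with this count and put it back on the pending_dimm_unplugs
queue before carrying on.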
> 
> Is this approach adequate? Another alternative would be to use another

Seems reasonable to me.

> way of detecting whether an LMB unplug is happening and, if so, do the
> same process in post_load(). In that case I'll need to take a look at the
> code and see how we can detect an ongoing unplug besides what I've said
> above.

You could scan the DRCs/LMBs for each DIMM and generate an sPAPRDIMMState for
any case where the DRCs/LMBs are not all present. But doing it the way you've
proposed above lets us re-generate them as needed and avoid the exhaustive
scan. I'd be sure to document it in the comments as a migration-specific hook,
though, since you lose that context by moving it out of post_load, and future
refactorings might forget why we have a hook that regenerates state that
spapr_del_lmbs always sets initially.
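
Something along these lines, purely as an illustration of the comment I have
in mind (spapr_recreate_dimm_state() is a stand-in name, not something from
the series):

    if (ds == NULL) {
        /*
         * spapr_del_lmbs() always queues an sPAPRDIMMState before triggering
         * the release, so a missing entry here means the unplug was started
         * before a migration and we are resuming it on the destination.
         * Regenerate the state from the LMB DRCs instead of relying on
         * anything having been migrated.
         */
        ds = spapr_recreate_dimm_state(spapr, dev);
    }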

> 
> 
> Thanks,
> 
> 
> Daniel
> 
> >
> >> ---
> >>   hw/ppc/spapr.c | 31 +++++++++++++++++++++++++++++++
> >>   1 file changed, 31 insertions(+)
> >>
> >> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> >> index e190eb9..30f0b7b 100644
> >> --- a/hw/ppc/spapr.c
> >> +++ b/hw/ppc/spapr.c
> >> @@ -1437,6 +1437,36 @@ static bool version_before_3(void *opaque, int version_id)
> >>       return version_id < 3;
> >>   }
> >>   
> >> +static bool spapr_pending_dimm_unplugs_needed(void *opaque)
> >> +{
> >> +    sPAPRMachineState *spapr = (sPAPRMachineState *)opaque;
> >> +    return !QTAILQ_EMPTY(&spapr->pending_dimm_unplugs);
> >> +}
> >> +
> >> +static const VMStateDescription vmstate_spapr_dimmstate = {
> >> +    .name = "spapr_dimm_state",
> >> +    .version_id = 1,
> >> +    .minimum_version_id = 1,
> >> +    .fields = (VMStateField[]) {
> >> +        VMSTATE_UINT64(addr, sPAPRDIMMState),
> >> +        VMSTATE_UINT32(nr_lmbs, sPAPRDIMMState),
> >> +        VMSTATE_END_OF_LIST()
> >> +    },
> >> +};
> >> +
> >> +static const VMStateDescription vmstate_spapr_pending_dimm_unplugs = {
> >> +    .name = "spapr_pending_dimm_unplugs",
> >> +    .version_id = 1,
> >> +    .minimum_version_id = 1,
> >> +    .needed = spapr_pending_dimm_unplugs_needed,
> >> +    .fields = (VMStateField[]) {
> >> +        VMSTATE_QTAILQ_V(pending_dimm_unplugs, sPAPRMachineState, 1,
> >> +                         vmstate_spapr_dimmstate, sPAPRDIMMState,
> >> +                         next),
> >> +        VMSTATE_END_OF_LIST()
> >> +    },
> >> +};
> >> +
> >>   static bool spapr_ov5_cas_needed(void *opaque)
> >>   {
> >>       sPAPRMachineState *spapr = opaque;
> >> @@ -1535,6 +1565,7 @@ static const VMStateDescription vmstate_spapr = {
> >>       .subsections = (const VMStateDescription*[]) {
> >>           &vmstate_spapr_ov5_cas,
> >>           &vmstate_spapr_patb_entry,
> >> +        &vmstate_spapr_pending_dimm_unplugs,
> >>           NULL
> >>       }
> >>   };
> 



