
Re: [Qemu-devel] [RFC PATCH v1 06/13] spapr: CPU hotplug support


From: Bharata B Rao
Subject: Re: [Qemu-devel] [RFC PATCH v1 06/13] spapr: CPU hotplug support
Date: Fri, 30 Jan 2015 12:29:03 +0530
User-agent: Mutt/1.5.23 (2014-03-12)

On Fri, Jan 23, 2015 at 01:41:38PM +0100, Igor Mammedov wrote:
> On Thu,  8 Jan 2015 11:40:13 +0530
> Bharata B Rao <address@hidden> wrote:
> 
> > Support CPU hotplug via the device-add command. Use the existing EPOW event
> > infrastructure to send CPU hotplug notifications to the guest.
> > 
> > Signed-off-by: Bharata B Rao <address@hidden>
> > ---
> >  hw/ppc/spapr.c              | 205 +++++++++++++++++++++++++++++++++++++++++++-
> >  hw/ppc/spapr_events.c       |   8 +-
> >  target-ppc/translate_init.c |   6 ++
> >  3 files changed, 215 insertions(+), 4 deletions(-)
> > 
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 515d770..a293a59 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -330,6 +330,8 @@ static void add_str(GString *s, const gchar *s1)
> >      g_string_append_len(s, s1, strlen(s1) + 1);
> >  }
> >  
> > +uint32_t cpus_per_socket;
> static ???

Sure.

> > +
> > +static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
> > +                            Error **errp)
> > +{
> > +    Error *local_err = NULL;
> > +    CPUState *cs = CPU(dev);
> > +    PowerPCCPU *cpu = POWERPC_CPU(cs);
> > +    sPAPRDRConnector *drc =
> > +        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, cpu->cpu_dt_id);
> just a rant: does this have any relation to hotplug_dev? The point here is to
> get these data from the hotplug_dev object (or some child of it) rather than
> via a direct ad hoc call.

I see how hotplug_dev is being used to pass on the plug request to ACPI, but
have to check how hotplug_dev can be used more meaningfully here.

> 
> > +
> > +    /* TODO: Check if DR is enabled ? */
> > +    g_assert(drc);
> > +
> > +    spapr_cpu_reset(POWERPC_CPU(CPU(dev)));
> reset probably should be done at realize time; see x86_cpu_realizefn() for
> an example.

Yes, can be done.

> 
> > +    spapr_cpu_hotplug_add(dev, cs);
> > +    spapr_hotplug_req_add_event(drc);
> > +    error_propagate(errp, local_err);
> > +    return;
> > +}
> > +
> > +static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
> > +                                      DeviceState *dev, Error **errp)
> > +{
> > +    if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> > +        if (dev->hotplugged) {
> > +            spapr_cpu_plug(hotplug_dev, dev, errp);
> Would be nicer if this could do cold-plugged CPUs wiring too.

Yes, will check and see how intrusive change that would be.

> > diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
> > index 9c642a5..cf9d8d3 100644
> > --- a/target-ppc/translate_init.c
> > +++ b/target-ppc/translate_init.c
> > @@ -32,6 +32,7 @@
> >  #include "hw/qdev-properties.h"
> >  #include "hw/ppc/spapr.h"
> >  #include "hw/ppc/ppc.h"
> > +#include "sysemu/sysemu.h"
> >  
> >  //#define PPC_DUMP_CPU
> >  //#define PPC_DEBUG_SPR
> > @@ -8909,6 +8910,11 @@ static void ppc_cpu_realizefn(DeviceState *dev, Error **errp)
> >          return;
> >      }
> >  
> > +    if (cs->cpu_index >= max_cpus) {
> pls note that cpu_index increases monotonically, so when you unplug and
> then plug again it will go above max_cpus. The same will happen if one
> device_add fails in the middle: the next CPU add will fail because
> cs->cpu_index goes overboard.
> 
> I'd suggest not relying on cpu_index for any purpose and using other means
> to identify where a cpu is plugged in. On x86 we are slowly getting rid of
> this dependency in favor of apic_id (topology information); eventually it
> could become:
>   -device cpu_foo,socket=X,core=Y[,thread=Z][,node=N]
> 
> you probably could do the same.
> It doesn't have to be in this series, just be aware of potential issues.

I see your point, and this needs to be fixed: I can see it causing problems
with CPU removal (from the middle) and subsequent addition (which makes
use of the "vcpu fd parking and reuse" mechanism).

Thanks for your review.

Regards,
Bharata.



