From: Cedric Le Goater
Subject: Re: [Qemu-devel] [PATCH v4 2/9] spapr: move the IRQ server number mapping under the machine
Date: Thu, 30 Mar 2017 17:04:38 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

On 03/30/2017 03:04 PM, Cédric Le Goater wrote:
> On 03/30/2017 08:46 AM, David Gibson wrote:
>> On Wed, Mar 29, 2017 at 03:53:24PM +0200, Cédric Le Goater wrote:
>>> This is the second step in abstracting the IRQ 'server' number of the
>>> XICS layer. Now that the prerequisite cleanups have been done in the
>>> previous patch, we can move the 'cpu_dt_id' to 'cpu_index' mapping
>>> down into the sPAPR machine handler.
>>>
>>> Signed-off-by: Cédric Le Goater <address@hidden>
>>> Reviewed-by: David Gibson <address@hidden>
>>> ---
>>>  hw/intc/xics_spapr.c    | 5 ++---
>>>  hw/ppc/spapr.c          | 3 ++-
>>>  hw/ppc/spapr_cpu_core.c | 2 +-
>>>  3 files changed, 5 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/hw/intc/xics_spapr.c b/hw/intc/xics_spapr.c
>>> index 58f100d379cb..f05308b897f2 100644
>>> --- a/hw/intc/xics_spapr.c
>>> +++ b/hw/intc/xics_spapr.c
>>> @@ -52,9 +52,8 @@ static target_ulong h_cppr(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>>>  static target_ulong h_ipi(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>>>                            target_ulong opcode, target_ulong *args)
>>>  {
>>> -    target_ulong server = xics_get_cpu_index_by_dt_id(args[0]);
>>>      target_ulong mfrr = args[1];
>>> -    ICPState *icp = xics_icp_get(XICS_FABRIC(spapr), server);
>>> +    ICPState *icp = xics_icp_get(XICS_FABRIC(spapr), args[0]);
>>>  
>>>      if (!icp) {
>>>          return H_PARAMETER;
>>> @@ -122,7 +121,7 @@ static void rtas_set_xive(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>>>      }
>>>  
>>>      nr = rtas_ld(args, 0);
>>> -    server = xics_get_cpu_index_by_dt_id(rtas_ld(args, 1));
>>> +    server = rtas_ld(args, 1);
>>>      priority = rtas_ld(args, 2);
>>>  
>>>      if (!ics_valid_irq(ics, nr) || !xics_icp_get(XICS_FABRIC(spapr), server)
>>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>>> index 8aecea3dd10c..b9f7f8607869 100644
>>> --- a/hw/ppc/spapr.c
>>> +++ b/hw/ppc/spapr.c
>>> @@ -3024,9 +3024,10 @@ static void spapr_ics_resend(XICSFabric *dev)
>>>      ics_resend(spapr->ics);
>>>  }
>>>  
>>> -static ICPState *spapr_icp_get(XICSFabric *xi, int server)
>>> +static ICPState *spapr_icp_get(XICSFabric *xi, int cpu_dt_id)
>>>  {
>>>      sPAPRMachineState *spapr = SPAPR_MACHINE(xi);
>>> +    int server = xics_get_cpu_index_by_dt_id(cpu_dt_id);
>>
>> The idea is good, but this is a bad name (as it was in the original
>> version, too).  The whole point here is that the XICS server number
>> (as it appears in the ICS registers) is the input to this function,
>> and we no longer assume that is equal to cpu_index.
>>
>> Seems we could just get the cpu object by dt_id here, then go to 
>> ICP(cpu->intc).
> 
> Yes, that would be nice, and this is exactly what pnv does now, but
> this is only possible because the PnvICPState objects are allocated
> from under PnvCore. This is not the case for sPAPR yet.
> 
> Currently, when the sPAPR cores are realized, we call spapr_cpu_init(),
> which first gets its ICP with:
> 
>       xics_icp_get(xi, cpu->cpu_dt_id);
> 
> so we cannot use ICP(cpu->intc) in this XICSFabric handler; it is not
> assigned yet. It only gets assigned at the end of spapr_cpu_init(), when
> 
>       xics_cpu_setup(xi, cpu, icp);
> 
> is called.
> 
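To spell out the ordering problem, spapr_cpu_init() with this patch applied
looks roughly like this (a condensed sketch only, not the exact code):

      static void spapr_cpu_init(sPAPRMachineState *spapr, PowerPCCPU *cpu, ...)
      {
          XICSFabric *xi = XICS_FABRIC(spapr);

          /* the XICSFabric icp_get handler runs here; cpu->intc is
           * still NULL at this point, so the handler cannot be
           * implemented with ICP(cpu->intc) */
          ICPState *icp = xics_icp_get(xi, cpu->cpu_dt_id);

          /* ... rest of the CPU init ... */

          /* cpu->intc is only assigned here, at the very end */
          xics_cpu_setup(xi, cpu, icp);
      }
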
> 
> As suggested in an email this morning, I think we could allocate 
> the ICP from under the sPAPR core if we knew which ICP type to use 
> (KVM or not).  
> 
> For that, we could use a new XICSFabric handler:
> 
>       const char *icp_type(XICSFabric *xi)
> 
> or a machine attribute to get the type. The ICP type would be chosen
> in xics_system_init(), a bit like it is done today, but we would not
> create the ICP object itself there.
> 
> Or we could use a machine helper (probably better):
> 
>       ICPState *spapr_icp_create(MachineState *machine);
> 
> which would only do the ICP part of xics_system_init() (a rough sketch
> follows below). The ICS object can be created later on; that is not a
> problem.
> 
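To make that second option a bit more concrete, here is a rough sketch of
what such a helper could look like. The names are only an illustration: I am
assuming a new spapr->icp_type field caching the KVM vs emulated choice that
xics_system_init() makes today, and that the ICP object gets wired to its
XICSFabric through a "xics" link (if not, the fabric pointer would have to
be set explicitly instead). Error handling is reduced to &error_fatal for
brevity:

      ICPState *spapr_icp_create(MachineState *machine)
      {
          sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
          Object *obj;

          /* spapr->icp_type (assumed) would be set once, where
           * xics_system_init() currently decides between the KVM
           * and the emulated ICP */
          obj = object_new(spapr->icp_type);
          object_property_add_child(OBJECT(spapr), "icp[*]", obj,
                                    &error_abort);
          object_property_add_const_link(obj, "xics", OBJECT(spapr),
                                         &error_abort);
          object_property_set_bool(obj, true, "realized", &error_fatal);

          return ICP(obj);
      }

The ICS side of xics_system_init() would be left untouched.
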
> We have a kind of chicken-and-egg problem between the Core and the
> ICP objects today that it would be nice to untangle.
> 
> Suggestions ?

I have cooked a patch for this idea. It applies cleanly to the
v4 patchset, between 'PATCH v4 2/9' and 'PATCH v4 3/9'. I will
send it as a follow-up to 'PATCH v4 2/9'.

Thanks,

C.  


> 
> C. 
> 
> 
>>
>>>      return (server < spapr->nr_servers) ? &spapr->icps[server] : NULL;
>>>  }
>>> diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
>>> index 7db61bd72476..4e1a99591d19 100644
>>> --- a/hw/ppc/spapr_cpu_core.c
>>> +++ b/hw/ppc/spapr_cpu_core.c
>>> @@ -64,7 +64,7 @@ static void spapr_cpu_init(sPAPRMachineState *spapr, PowerPCCPU *cpu,
>>>  {
>>>      CPUPPCState *env = &cpu->env;
>>>      XICSFabric *xi = XICS_FABRIC(spapr);
>>> -    ICPState *icp = xics_icp_get(xi, CPU(cpu)->cpu_index);
>>> +    ICPState *icp = xics_icp_get(xi, cpu->cpu_dt_id);
>>>  
>>>      /* Set time-base frequency to 512 MHz */
>>>      cpu_ppc_tb_init(env, SPAPR_TIMEBASE_FREQ);
>>
> 



