From: Cédric Le Goater
Subject: Re: [Qemu-ppc] [RFC PATCH v2 01/21] ppc/xive: introduce a skeleton for the sPAPR XIVE interrupt controller
Date: Thu, 16 Nov 2017 16:58:53 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0

On 09/26/2017 05:54 AM, David Gibson wrote:
> On Fri, Sep 22, 2017 at 02:42:07PM +0200, Cédric Le Goater wrote:
>> On 09/22/2017 01:00 PM, David Gibson wrote:
>>> On Tue, Sep 19, 2017 at 03:15:44PM +0200, Cédric Le Goater wrote:
>>>> On 09/19/2017 04:27 AM, David Gibson wrote:
>>>>> On Mon, Sep 11, 2017 at 07:12:15PM +0200, Cédric Le Goater wrote:
>>>>>> Start with a couple of attributes for the XIVE sPAPR controller
>>>>>> model. The number of provisioned IRQs is necessary to size the
>>>>>> different internal XIVE tables, as is the number of CPUs.
>>>>>>
>>>>>> Signed-off-by: Cédric Le Goater <address@hidden>
>>>>>
>>>>> [snip]
>>>>>
>>>>>> +static void spapr_xive_realize(DeviceState *dev, Error **errp)
>>>>>> +{
>>>>>> +    sPAPRXive *xive = SPAPR_XIVE(dev);
>>>>>> +
>>>>>> +    if (!xive->nr_targets) {
>>>>>> +        error_setg(errp, "Number of interrupt targets needs to be 
>>>>>> greater 0");
>>>>>> +        return;
>>>>>> +    }
>>>>>> +    /* We need to be able to allocate at least the IPIs */
>>>>>> +    if (!xive->nr_irqs || xive->nr_irqs < xive->nr_targets) {
>>>>>> +        error_setg(errp, "Number of interrupts too small");
>>>>>> +        return;
>>>>>> +    }
>>>>>> +}
>>>>>> +
>>>>>> +static Property spapr_xive_properties[] = {
>>>>>> +    DEFINE_PROP_UINT32("nr-irqs", sPAPRXive, nr_irqs, 0),
>>>>>> +    DEFINE_PROP_UINT32("nr-targets", sPAPRXive, nr_targets, 0),
>>>>>
>>>>> I'm a bit uneasy about the number of targets having to be set in
>>>>> advance: this can make life awkward when CPUs are hotplugged.  I know
>>>>> there's something similar in xics, but it has caused some hassles, and
>>>>> we're starting to move away from it.
>>>>>
>>>>> Do you really need this?
>>>>>
>>>>
>>>> Some of the internal table sizes depend on the number of CPUs 
>>>> defined for the machine.
>>>
>>> Which ones?  My impression was that there needed to be at least #cpus
>>> * #priority-levels EQs, but there could be more than that, 
>>
>> Er, no, not in sPAPR mode at least. There are 8 queues per CPU.
> 
> Ok.
> 
>>> so it was no longer as tightly bound to the number of "interrupt
>>> servers" as xics.
>>
>> Ah, I think I see what you mean: we could allocate them on the 
>> fly when needed by some hcalls?
> 
> Not at hcall time, no, but at cpu hot(un)plug time I was wondering if we
> could (de)allocate them then.

Yes. I am currently reshuffling the ppc/xive patchset on top of the 
sPAPR allocator, also trying to take into account your comments and 
Ben's. And I think we can do that. 

I have introduced a specific XiveICPState (stored under the ->intc
pointer of the CPUState), but I think we can go a little further and
also store the XiveEQ array there, so that all the state related to 
a CPU would be allocated at init time or at hot(un)plug time.

We could call the resulting compound object (ICP + EQs) a sPAPRXiveCore 
or something like that. Then we would not need to keep 'nr_targets' in 
the sPAPRXive object anymore *if* we do one more thing. See below. 

>> The other place where I use nr_targets is to provision the 
>> IRQ numbers for the IPIs, but that could probably be done in some 
>> other way, especially if there is an IRQ allocator at the machine
>> level.
> 
> Hm, ok.

The sPAPRXiveCore object above could also allocate the IPI for XIVE 
when it is created. But the resulting IRQ number should fall in a range 
that does not overlap with the other devices' IRQs. 

That means that we should probably introduce the BUID concept in the 
IRQ allocator. This concept was left out of the original design of 
XICS, as it was using a single ICS, but it would now be useful for 
defining ranges for devices, for PHBs for instance.

We could start with 2K or 4K ranges. The first one would be for IPIs.

Dave, could you please take a quick look at the allocator patchset 
when you have some time? I don't think there is much to do in the API 
to introduce ranges. It is just a question of defining a 'start' value 
when looking for an empty slot.
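
Just to sketch what I mean (the names and sizes below are invented for
the example, not the actual allocator API):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical layout: 4K IRQs per range, range 0 reserved for IPIs */
    #define SPAPR_IRQ_RANGE_SIZE 0x1000
    #define SPAPR_IRQ_IPI_RANGE  0
    #define SPAPR_IRQ_NR_RANGES  16

    static bool irq_claimed[SPAPR_IRQ_NR_RANGES * SPAPR_IRQ_RANGE_SIZE];

    /*
     * Same linear scan as a flat allocator; the only addition is the
     * per-range 'start' value, which gives each user (IPIs, PHBs, ...)
     * a BUID-like block of IRQ numbers.
     */
    static int32_t spapr_irq_alloc_in_range(uint32_t range)
    {
        uint32_t start = range * SPAPR_IRQ_RANGE_SIZE;
        uint32_t end = start + SPAPR_IRQ_RANGE_SIZE;

        for (uint32_t irq = start; irq < end; irq++) {
            if (!irq_claimed[irq]) {
                irq_claimed[irq] = true;
                return irq;
            }
        }
        return -1;   /* range exhausted */
    }

An IPI for a new CPU would then come from
spapr_irq_alloc_in_range(SPAPR_IRQ_IPI_RANGE) and could never collide
with the device ranges.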

Thanks,

C. 

>>
>> C.  
>>>> When the sPAPRXive object is instantiated, 
>>>> we use xics_max_server_number() to get the max number of cpus
>>>> provisioned.
>>>>
>>>> C.
>>>>
>>>
>>
> 



