Re: [Qemu-devel] [PATCH 06/13] pci: Add INTx routing notifier


From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH 06/13] pci: Add INTx routing notifier
Date: Sun, 10 Jun 2012 14:47:40 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-06-10 14:42, Michael S. Tsirkin wrote:
> On Sun, Jun 10, 2012 at 02:33:03PM +0200, Jan Kiszka wrote:
>> On 2012-06-10 14:16, Michael S. Tsirkin wrote:
>>> On Sun, Jun 10, 2012 at 02:09:20PM +0200, Jan Kiszka wrote:
>>>> On 2012-06-10 13:39, Michael S. Tsirkin wrote:
>>>>> It's OK to use recursion but when done through a callback
>>>>> like this it's unreadable.
>>>>
>>>> Isn't the alternative poking into foreign bridge device states for their
>>>> secondary buses?
>>>
>>> pci_set_bus_intx_routing does this already.
>>
>> True. OK, I can do the recursion in pci_bus_fire_intx_routing_notifier
>> directly instead of pushing this into the bridge.
>>
>>>
>>>>> Also, you need to setup you cache after intx cache has been
>>>>> initialized, and you provide no clean way to do that.
>>>>
>>>> Once a PCI device is registered, the INTx route can be queried. So the
>>>> device user will call pci_device_route_intx_to_irq once (e.g. in the
>>>> device init function which is invoked afterward) to fill its cache and
>>>> receive a notification if an update is needed. I do not see why, and
>>>> specifically how you could query the route earlier or register a callback.
>>>
>>> Before pci_bus_irqs is called.
>>> Why is another question.
>>>
>>>>>
>>>>> One way to fix all this is call the notifier for devices, if set, from
>>>>> pci_set_bus_intx_routing.
>>>>> Then assume that intx to irq translations can be cached
>>>>> even though they aren't now. So you will need to invoke
>>>>> pci_set_bus_intx_routing on intx to irq mapping changes,
>>>>> and that fires the notifier for free.
>>>>
>>>> pci_set_bus_intx_routing is really only for the initial setup of the
>>>> static INTx pin routes. And this happens on
>>>> pci_bus_irqs/pci_register_bus, i.e. triggered by the host bridge. By that
>>>> time, there can't be any notifier listeners - as there are no devices yet.
>>>>
>>>> Jan
>>>>
>>>
>>> What I am saying is we'll cache the final IRQ at some point.
>>> Pretend it's already that way so callers are ready for this.
>>
>> This wouldn't change the picture very much: Before the host bridge is
>> fully initialized, there is no valid route available. But before that,
>> there is also no device attached to it. So the invocation pattern
>> wouldn't change.
>>
>> What would change is the semantics of the interface to the host bridge.
>> So what about this: provide a public pci_root_bus_intx_routing_updated
>> which so far just calls the internal-use-only
>> pci_bus_fire_intx_routing_notifier?
>>
>> Jan
>>
> 
> I think a better name is pci_bus_update_intx_irq_cache
> or something like that.

At least "update cache" is not a better description as it implies in the
function name the required steps of the implementation. In fact, this
function /may/ update a cache and will fire notifiers. But that's
nothing the care should worry about.
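
To make this concrete, roughly what I have in mind is the untested
sketch below. I'm assuming the names from this series here (a
per-device intx_routing_notifier callback) plus the bus' child/sibling
lists for the recursion; details may of course end up differently:

static void pci_bus_fire_intx_routing_notifier(PCIBus *bus)
{
    PCIBus *child;
    int i;

    /* Notify every device on this bus that registered interest. */
    for (i = 0; i < ARRAY_SIZE(bus->devices); i++) {
        PCIDevice *dev = bus->devices[i];

        if (dev && dev->intx_routing_notifier) {
            dev->intx_routing_notifier(dev);
        }
    }

    /* Recurse into secondary buses directly here instead of pushing
     * this into the bridge devices. */
    QLIST_FOREACH(child, &bus->child, sibling) {
        pci_bus_fire_intx_routing_notifier(child);
    }
}

void pci_root_bus_intx_routing_updated(PCIBus *bus)
{
    /* Public entry point for the host bridge. Whether a cache is also
     * refreshed here is an implementation detail the caller should not
     * depend on. */
    pci_bus_fire_intx_routing_notifier(bus);
}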

> 
> And I really think it's better to recalculate the
> intx routing there as well, so that if some bus
> implements a dynamic route_intx it just needs to
> call this after update.

I thought dynamic routing was disallowed by the spec? If not, I agree.
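
For reference, the device-side usage I described earlier would then
look roughly like this. The device struct and callback names are made
up for illustration, and whether the notifier is assigned directly or
via a small setter is still an open detail:

typedef struct MyPCIDevice {
    PCIDevice dev;
    PCIINTxRoute intx_route;   /* cached host-side route for pin A */
} MyPCIDevice;

static void my_pci_device_intx_routing_updated(PCIDevice *pci_dev)
{
    MyPCIDevice *s = DO_UPCAST(MyPCIDevice, dev, pci_dev);

    /* The routing above us changed - refresh the cached route. */
    s->intx_route = pci_device_route_intx_to_irq(pci_dev, 0);
}

static int my_pci_device_initfn(PCIDevice *pci_dev)
{
    MyPCIDevice *s = DO_UPCAST(MyPCIDevice, dev, pci_dev);

    /* The device is registered by now, so the route can be queried
     * once to seed the cache; the notifier keeps it up to date. */
    s->intx_route = pci_device_route_intx_to_irq(pci_dev, 0);
    pci_dev->intx_routing_notifier = my_pci_device_intx_routing_updated;

    return 0;
}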

Jan
