Re: [Qemu-devel] [PATCH 06/13] pci: Add INTx routing notifier


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 06/13] pci: Add INTx routing notifier
Date: Sun, 10 Jun 2012 16:19:37 +0300

On Sun, Jun 10, 2012 at 02:47:40PM +0200, Jan Kiszka wrote:
> On 2012-06-10 14:42, Michael S. Tsirkin wrote:
> > On Sun, Jun 10, 2012 at 02:33:03PM +0200, Jan Kiszka wrote:
> >> On 2012-06-10 14:16, Michael S. Tsirkin wrote:
> >>> On Sun, Jun 10, 2012 at 02:09:20PM +0200, Jan Kiszka wrote:
> >>>> On 2012-06-10 13:39, Michael S. Tsirkin wrote:
> >>>>> It's OK to use recursion but when done through a callback
> >>>>> like this it's unreadable.
> >>>>
> >>>> Isn't the alternative poking into foreign bridge device states for their
> >>>> secondary buses?
> >>>
> >>> pci_set_bus_intx_routing does this already.
> >>
> >> True. OK, I can do the recursion in pci_bus_fire_intx_routing_notifier
> >> directly instead of pushing this into the bridge.
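> >>
> >> Roughly like this (toy types only, just to show the shape of the
> >> recursion; the real PCIBus/PCIDevice layouts obviously differ):
> >>
> >> typedef struct Dev Dev;
> >> typedef struct Bus Bus;
> >>
> >> struct Dev {
> >>     /* toy stand-ins for the real structures */
> >>     void (*intx_routing_notifier)(Dev *dev); /* NULL if none registered */
> >>     Bus *secondary_bus;                      /* non-NULL only for bridges */
> >> };
> >>
> >> struct Bus {
> >>     Dev *devices[256];
> >> };
> >>
> >> static void bus_fire_intx_routing_notifier(Bus *bus)
> >> {
> >>     int i;
> >>
> >>     for (i = 0; i < 256; i++) {
> >>         Dev *dev = bus->devices[i];
> >>
> >>         if (!dev) {
> >>             continue;
> >>         }
> >>         if (dev->intx_routing_notifier) {
> >>             dev->intx_routing_notifier(dev);
> >>         }
> >>         /* recurse into bridges' secondary buses right here instead of
> >>          * bouncing through a per-bridge callback */
> >>         if (dev->secondary_bus) {
> >>             bus_fire_intx_routing_notifier(dev->secondary_bus);
> >>         }
> >>     }
> >> }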
> >>
> >>>
> >>>>> Also, you need to set up your cache after the INTx cache has been
> >>>>> initialized, and you provide no clean way to do that.
> >>>>
> >>>> Once a PCI device is registered, the INTx route can be queried. So the
> >>>> device user will call pci_device_route_intx_to_irq once (e.g. in the
> >>>> device init function, which is invoked afterwards) to fill its cache and
> >>>> receive a notification if an update is needed. I do not see why, and
> >>>> specifically how, you could query the route earlier or register a
> >>>> callback.
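> >>>>
> >>>> I.e., on the device side, something along these lines (the device name
> >>>> and fields are made up, and I am assuming the helpers keep the
> >>>> signatures they have in this series):
> >>>>
> >>>> /* made-up example device; only the pci_* helpers are from the series */
> >>>> typedef struct MyDevState {
> >>>>     PCIDevice pci_dev;
> >>>>     PCIINTxRoute intx_route;   /* cached INTA -> IRQ route */
> >>>> } MyDevState;
> >>>>
> >>>> static void my_dev_intx_routing_notifier(PCIDevice *pci_dev)
> >>>> {
> >>>>     MyDevState *s = DO_UPCAST(MyDevState, pci_dev, pci_dev);
> >>>>
> >>>>     /* re-query and refresh the cache when the core signals a change */
> >>>>     s->intx_route = pci_device_route_intx_to_irq(pci_dev, 0 /* INTA */);
> >>>> }
> >>>>
> >>>> static int my_dev_initfn(PCIDevice *pci_dev)
> >>>> {
> >>>>     /* the device is already attached to a fully set up bus at this
> >>>>      * point, so the route can be resolved once and cached */
> >>>>     my_dev_intx_routing_notifier(pci_dev);
> >>>>     pci_device_set_intx_routing_notifier(pci_dev,
> >>>>                                          my_dev_intx_routing_notifier);
> >>>>     return 0;
> >>>> }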
> >>>
> >>> Before pci_bus_irqs is called.
> >>> Why is another question.
> >>>
> >>>>>
> >>>>> One way to fix all this is to call the notifier for devices, if set,
> >>>>> from pci_set_bus_intx_routing.
> >>>>> Then assume that INTx-to-IRQ translations can be cached
> >>>>> even though they aren't now. So you will need to invoke
> >>>>> pci_set_bus_intx_routing on INTx-to-IRQ mapping changes,
> >>>>> and that fires the notifier for free.
> >>>>
> >>>> pci_set_bus_intx_routing is really only for the initial setup of the
> >>>> static INTx pin routes. And this happens on
> >>>> pci_bus_irqs/pci_register_bus, i.e. triggered by the host bridge. By
> >>>> that time, there can't be any notifier listeners, as there are no
> >>>> devices yet.
> >>>>
> >>>> Jan
> >>>>
> >>>
> >>> What I am saying is we'll cache the final IRQ at some point.
> >>> Pretend it's already that way so callers are ready for this.
> >>
> >> This wouldn't change the picture very much: Before the host bridge is
> >> fully initialized, there is no valid route available. But before that,
> >> there is also no device attached to it. So the invocation pattern
> >> wouldn't change.
> >>
> >> What would change is the semantics of the interface to the host bridge.
> >> So what about this: provide a public pci_root_bus_intx_routing_updated
> >> which so far just calls the internal-use-only
> >> pci_bus_fire_intx_routing_notifier?
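> >>
> >> I.e., for now, just a thin public forwarder, roughly:
> >>
> >> void pci_root_bus_intx_routing_updated(PCIBus *bus)
> >> {
> >>     pci_bus_fire_intx_routing_notifier(bus);
> >> }
> >>
> >> with room to also refresh a route cache in there later on.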
> >>
> >> Jan
> >>
> > 
> > I think a better name is pci_bus_update_intx_irq_cache
> > or something like that.
> 
> At least "update cache" is not a better description, as it encodes the
> required implementation steps in the function name.

That's only reasonable: saying what a function does
limits the implementation at some level.
If you don't, you end up with no way to understand it
or check it for correctness unless you find and read all
the callers as well.

> In fact, this
> function /may/ update a cache and will fire notifiers.

notifiers *if any*. And the only reason to have a notifier
is if you cache the routing.

> But that's nothing the caller should worry about.

IMO the function name should say what it does, not
how it does it (fire notifiers) or when
it should be called (routing updated).

> > 
> > And I really think it's better to recalculate the
> > INTx routing there as well, so that if some bus
> > implements a dynamic route_intx it just needs to
> > call this after the update.
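> >
> > E.g. (hypothetical bridge with a placeholder routing register,
> > using the function name proposed above):
> >
> > #define MY_PIRQ_ROUTE_REG 0x60 /* placeholder config offset */
> >
> > static void my_bridge_write_config(PCIDevice *dev, uint32_t addr,
> >                                    uint32_t val, int len)
> > {
> >     pci_default_write_config(dev, addr, val, len);
> >
> >     if (addr < MY_PIRQ_ROUTE_REG + 4 && addr + len > MY_PIRQ_ROUTE_REG) {
> >         /* the pin -> IRQ mapping may just have changed */
> >         pci_bus_update_intx_irq_cache(dev->bus);
> >     }
> > }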
> 
> I thought dynamic routing is disallowed according to the spec? If not, I
> agree.
> 
> Jan
> 

I think it is for PCI-to-PCI bridges, but it might not be for PCI host
bridges. It's not a hard requirement though, just nice to have.

-- 
MST


