Re: [Qemu-devel] [PATCH 05/13] pci: Add pci_device_route_intx_to_irq


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 05/13] pci: Add pci_device_route_intx_to_irq
Date: Sun, 10 Jun 2012 12:55:08 +0300

On Thu, Jun 07, 2012 at 06:46:38PM +0200, Jan Kiszka wrote:
> On 2012-06-07 18:28, Michael S. Tsirkin wrote:
> > On Thu, Jun 07, 2012 at 05:10:17PM +0200, Jan Kiszka wrote:
> >> On 2012-06-07 16:32, Michael S. Tsirkin wrote:
> >>> On Mon, Jun 04, 2012 at 10:52:13AM +0200, Jan Kiszka wrote:
> >>>> @@ -1089,6 +1093,14 @@ static void pci_set_irq(void *opaque, int irq_num, int level)
> >>>>      pci_change_irq_level(pci_dev, irq_num, change);
> >>>>  }
> >>>>  
> >>>> +PCIINTxRoute pci_device_route_intx_to_irq(PCIDevice *dev, int pin)
> >>>> +{
> >>>> +    PCIBus *bus = dev->host_bus;
> >>>> +
> >>>> +    assert(bus->route_intx_to_irq);
> >>>> +    return bus->route_intx_to_irq(bus->irq_opaque, dev->host_intx_pin[pin]);
> >>>> +}
> >>>> +
> >>>>  /***********************************************************/
> >>>>  /* monitor info on PCI */
> >>>>  
> >>>
> >>> Just an idea: can devices cache this result, bypassing the
> >>> INTx-to-IRQ lookup on the data path?
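For illustration, here is a minimal sketch of how a caller could use the
new API to resolve a device's INTx pin once, outside the hot path. The
AssignedDevice type and its cached_route field are hypothetical, not
something this series adds:

    #include "hw/pci.h"   /* PCIDevice, PCIINTxRoute as added by this patch */

    /* Hypothetical per-device state; cached_route is illustrative only. */
    typedef struct AssignedDevice {
        PCIDevice dev;
        PCIINTxRoute cached_route;
    } AssignedDevice;

    /* Resolve the interrupt controller input behind INTx pin "pin" once,
     * so the hot path can inject cached_route.irq directly instead of
     * re-running the routing lookup on every level change. */
    static void assigned_dev_resolve_intx(AssignedDevice *adev, int pin)
    {
        PCIINTxRoute route = pci_device_route_intx_to_irq(&adev->dev, pin);

        if (route.mode == PCI_INTX_DISABLED) {
            route.irq = -1;   /* host bridge masks this pin */
        }
        adev->cached_route = route;
    }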
> >>
> >> That lookup is part of set_irq, which we don't bypass so far and where
> >> it is generally trivial. If we want to cache the effects of set_irq as
> >> well, I guess things would become pretty complex (e.g. due to vmstate
> >> compatibility), and I'm unsure whether it would buy us much.
> > 
> > This is less about performance and more about making
> > everyone use the same infrastructure, rather than leaving
> > assigned devices as the weird case.
> 
> Device assignment is weird. It bypasses all state updates, as it does not
> have to worry about migratability.
> 
> Well, of course we could cache the host bridge routing result as well,
> for every device. It would have to be in addition to host_intx_pin. But
> the result would look pretty strange to me.
> 
> In any case, I would prefer to do this, if at all, on top of this
> series, specifically as it would require touching all host bridges.
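For concreteness, the per-device caching described above might look
roughly like this; the intx_routing_changed hook is hypothetical and is
exactly the piece that would require touching every host bridge:

    /* Hypothetical: each host bridge would call this after one of its
     * routing registers (e.g. the PIIX3 PIRQx config bytes) is
     * rewritten, so stale cached routes get refreshed. Nothing in this
     * series provides such a notification. */
    static PCIINTxRoute cached_route[PCI_NUM_PINS];

    static void intx_routing_changed(PCIDevice *dev)
    {
        int pin;

        for (pin = 0; pin < PCI_NUM_PINS; pin++) {
            cached_route[pin] = pci_device_route_intx_to_irq(dev, pin);
        }
    }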

I'd like to ponder this a bit more, then.

If the claim is that device assignment is only needed for
PIIX anyway, then why not make it depend on PIIX *explicitly*?
Yes, it's ugly, but it would make it very easy to find and
address any missing pieces.

As it is, you are adding APIs that in theory address
non-PIIX issues but in practice are not implemented for
non-PIIX host bridges. So we never really know.

> > 
> >>>
> >>>> diff --git a/hw/pci.h b/hw/pci.h
> >>>> index 5b54e2d..bbba01e 100644
> >>>> --- a/hw/pci.h
> >>>> +++ b/hw/pci.h
> >>>> @@ -141,6 +141,15 @@ enum {
> >>>>  #define PCI_DEVICE_GET_CLASS(obj) \
> >>>>       OBJECT_GET_CLASS(PCIDeviceClass, (obj), TYPE_PCI_DEVICE)
> >>>>  
> >>>> +typedef struct PCIINTxRoute {
> >>>> +    enum {
> >>>> +        PCI_INTX_ENABLED,
> >>>> +        PCI_INTX_INVERTED,
> >>>> +        PCI_INTX_DISABLED,
> >>>> +    } mode;
> >>>> +    int irq;
> >>>> +} PCIINTxRoute;
> >>>
> >>> Is this an INTx route or an IRQ route?
> >>> Is the INTx enabled/disabled/inverted, or the IRQ?
> >>>
> >>> I have the impression it's the IRQ, in the APIC.
> >>> PCI INTx lines are never inverted; they are always active low.
> >>
> >> This should be read as "the route *of* an INTx", not "the route to
> >> some IRQ". I could call it PCIINTxToIRQRoute if you prefer, but it's
> >> a bit lengthy.
> >>
> >> Jan
> > 
> > Yes, but is the polarity in the APIC? Or in the host bridge?
> > 
> 
> Nope (otherwise we would not have to bother). At least one host bridge
> (bonito) is apparently able to invert the polarity.
> 
> Jan
> 
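For illustration, here is the callback shape a polarity-inverting bridge
such as bonito could implement; the ExampleHostState layout and register
names below are made up, not bonito's actual registers:

    #include "hw/pci.h"   /* PCIINTxRoute and the PCI_INTX_* modes */

    /* Hypothetical host bridge state; the fields are illustrative. */
    typedef struct ExampleHostState {
        uint32_t inten;      /* per-pin enable bits (made up) */
        uint32_t invert;     /* per-pin polarity-invert bits (made up) */
        int intx_irq_base;   /* first IRQ input the INTx pins map to */
    } ExampleHostState;

    /* Sketch of a route_intx_to_irq callback that can report an inverted
     * line, which is why PCIINTxRoute needs a PCI_INTX_INVERTED mode. */
    static PCIINTxRoute example_route_intx_to_irq(void *opaque, int pin)
    {
        ExampleHostState *s = opaque;
        PCIINTxRoute route;

        route.irq = s->intx_irq_base + pin;
        if (!(s->inten & (1u << pin))) {
            route.mode = PCI_INTX_DISABLED;
        } else if (s->invert & (1u << pin)) {
            route.mode = PCI_INTX_INVERTED;
        } else {
            route.mode = PCI_INTX_ENABLED;
        }
        return route;
    }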