qemu-devel

From: David Kiarie
Subject: Re: [Qemu-devel] [v4 1/6] hw/msi: Allow platform devices to use explicit SID
Date: Mon, 12 Sep 2016 14:26:14 +0300

On Mon, Sep 12, 2016 at 2:02 PM, Peter Xu <address@hidden> wrote:

> On Mon, Sep 12, 2016 at 01:08:04PM +0300, David Kiarie wrote:
> > When using an IOMMU, platform devices like the IOAPIC are required to
> > make interrupt remapping requests using an explicit SID. We affiliate
> > an MSI route with a requester ID and a PCI device if present, which
> > ensures that platform devices can call IOMMU interrupt remapping code
> > with an explicit SID while maintaining compatibility with the original
> > code, which mainly dealt with PCI devices.
>
> Nit: IMHO it'll be cooler and easier to understand if we can avoid
> using very long sentences like above...
>
> [...]
>
> > diff --git a/hw/intc/ioapic.c b/hw/intc/ioapic.c
> > index 31791b0..f2d4c15 100644
> > --- a/hw/intc/ioapic.c
> > +++ b/hw/intc/ioapic.c
> > @@ -95,9 +95,17 @@ static void ioapic_entry_parse(uint64_t entry, struct ioapic_entry_info *info)
> >          (info->delivery_mode << MSI_DATA_DELIVERY_MODE_SHIFT);
> >  }
> >
> > -static void ioapic_service(IOAPICCommonState *s)
> > +static void ioapic_as_write(IOAPICCommonState *s, uint32_t data, uint64_t addr)
> >  {
> >      AddressSpace *ioapic_as = PC_MACHINE(qdev_get_machine())->ioapic_as;
> > +    MemTxAttrs attrs;
>
> Need to zero it first?
>
> [...]
>
> > @@ -383,14 +391,28 @@ static void ioapic_machine_done_notify(Notifier *notifier, void *data)
> >      IOAPICCommonState *s = container_of(notifier, IOAPICCommonState,
> >                                          machine_done);
> >
> > +    X86IOMMUState *iommu = x86_iommu_get_default();
> > +    /* kernel_irqchip=off */
> > +    if (iommu) {
> > +        s->devid = iommu->ioapic_bdf;
> > +    }
>
> Move out of "#ifdef CONFIG_KVM"?
>

Right, will move this out.


>
> [...]
>
> > @@ -407,7 +429,6 @@ static void ioapic_realize(DeviceState *dev, Error **errp)
> >
> >      memory_region_init_io(&s->io_memory, OBJECT(s), &ioapic_io_ops, s,
> >                            "ioapic", 0x1000);
> > -
>
> Useless change.
>
> [...]
>
> > -int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> > +int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev, uint16_t requester_id)
> >  {
> >      struct kvm_irq_routing_entry kroute = {};
> >      int virq;
> > @@ -1275,7 +1275,8 @@ int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> >      kroute.u.msi.address_lo = (uint32_t)msg.address;
> >      kroute.u.msi.address_hi = msg.address >> 32;
> >      kroute.u.msi.data = le32_to_cpu(msg.data);
> > -    if (kvm_arch_fixup_msi_route(&kroute, msg.address, msg.data, dev)) {
> > +    if (kvm_arch_fixup_msi_route(&kroute, msg.address, msg.data,
> > +                                 requester_id)) {
>
> Can we just remove the PCIDevice parameter directly? I didn't go
> deeper, but it seems to be explicitly introduced here:
>
>     commit dc9f06ca81e6e16d062ec382701142a3a2ab3f7d
>     Author: Pavel Fedin <address@hidden>
>     Date:   Thu Oct 15 16:44:52 2015 +0300
>
>         kvm: Pass PCI device pointer to MSI routing functions
>
> And we'd better make sure we really wanted to remove it.
>
> Also, I think we need to modify all target-*/kvm.c for this interface
> change?
>

The current code keeps track of PCI devices so that it can easily
reconstruct MSI messages, e.g. in kvm_update_msi_routes_all. Getting rid
of the PCI device pointer would therefore require another way of keeping
track of PCI devices.


>
> [...]
>
> Btw, could you explain a bit about what kind of tests have you carried
> out for this series (maybe also the AMD IOMMU general one)?
>

The tests done are pretty basic and manual. For DMA, the relevant device
addresses are fairly static, which means I can spot incorrect translations
in the logs. IR has likewise mainly been tested by watching the logs.


> Thanks,
>
> -- peterx
>

