Re: [Qemu-devel] [PATCH 0/2] pci-assign: MSI affinity support


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 0/2] pci-assign: MSI affinity support
Date: Sun, 12 May 2013 14:23:36 +0300

On Fri, May 10, 2013 at 02:40:04PM +0200, Jan Kiszka wrote:
> On 2013-05-09 18:35, Alex Williamson wrote:
> > I posted these about 6 months ago and Jan felt we should implement
> > MSI notifiers like we have for MSI-X.  That still hasn't happened.
> 
> Device assignments are the only currently known users - and you provide
> this feature, so...
> 
> POWER does this nice configuration of MSI messages via a side channel.
> Not that it already fires the MSI-X notifiers properly, but a generic
> notifier-based approach is the right way to abstract away the different
> modification channels (instead of encoding them on the consumer side,
> as in your patches).
> 
> Moreover, having different designs for MSI and MSI-X is just ugly.
> 
> Jan

I agree, but it's not immediately obvious what a good API would look
like, and I think this is an important bug to fix: we are sending
interrupts to the wrong CPU, in clear violation of the spec.
If we drop the per-device tracking of the message, we end up with a
very small patch - something like the below (untested, just to give
you the idea).
This hardly looks like a change we need to delay until we get proper
infrastructure in place, right?  It will be just as easy to replace.
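
For context: msi_get_message() just reassembles the message the guest
programmed into the MSI capability.  Roughly the following - this is a
paraphrase for illustration, not the exact hw/pci/msi.c code, and the
name carries a _sketch suffix to make that clear:

/* Rough paraphrase of msi_get_message(), for illustration only:
 * rebuild the MSIMessage from the address/data the guest wrote into
 * the MSI capability. */
static MSIMessage msi_get_message_sketch(PCIDevice *dev, unsigned int vector)
{
    uint8_t cap = dev->msi_cap;
    uint16_t flags = pci_get_word(dev->config + cap + PCI_MSI_FLAGS);
    bool msi64bit = flags & PCI_MSI_FLAGS_64BIT;
    unsigned int nr_vectors = 1 << ((flags & PCI_MSI_FLAGS_QSIZE) >> 4);
    MSIMessage msg;

    if (msi64bit) {
        msg.address = pci_get_quad(dev->config + cap + PCI_MSI_ADDRESS_LO);
        msg.data = pci_get_word(dev->config + cap + PCI_MSI_DATA_64);
    } else {
        msg.address = pci_get_long(dev->config + cap + PCI_MSI_ADDRESS_LO);
        msg.data = pci_get_word(dev->config + cap + PCI_MSI_DATA_32);
    }

    /* With multi-vector MSI the low bits of the data select the vector;
     * the patch below only ever deals with vector 0. */
    if (nr_vectors > 1) {
        msg.data = (msg.data & ~(nr_vectors - 1)) | vector;
    }

    return msg;
}

So all the patch does on a message rewrite is feed the new address/data
into the existing KVM route for the single MSI vector.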

pci-assign.c |   21 +++++++++++++++++++++

---

diff --git a/hw/i386/kvm/pci-assign.c b/hw/i386/kvm/pci-assign.c
index c1e08ec..e0061a0 100644
--- a/hw/i386/kvm/pci-assign.c
+++ b/hw/i386/kvm/pci-assign.c
@@ -1026,6 +1026,23 @@ static void assigned_dev_update_msi(PCIDevice *pci_dev)
     }
 }
 
+/* Update MSI message without touching enable/disable bits. */
+static void assigned_dev_update_msi_msg(PCIDevice *pci_dev)
+{
+    AssignedDevice *assigned_dev = DO_UPCAST(AssignedDevice, dev, pci_dev);
+    uint8_t ctrl_byte = pci_get_byte(pci_dev->config + pci_dev->msi_cap +
+                                     PCI_MSI_FLAGS);
+
+    if (assigned_dev->assigned_irq_type != ASSIGNED_IRQ_MSI ||
+        !(ctrl_byte & PCI_MSI_FLAGS_ENABLE)) {
+        return;
+    }
+
+    assert(assigned_dev->msi_virq_nr == 1);
+    kvm_irqchip_update_msi_route(kvm_state, assigned_dev->msi_virq[0],
+                                 msi_get_message(pci_dev, 0));
+}
+
 static bool assigned_dev_msix_masked(MSIXTableEntry *entry)
 {
     return (entry->ctrl & cpu_to_le32(0x1)) != 0;
@@ -1201,6 +1218,10 @@ static void assigned_dev_pci_write_config(PCIDevice *pci_dev, uint32_t address,
         if (range_covers_byte(address, len,
                               pci_dev->msi_cap + PCI_MSI_FLAGS)) {
             assigned_dev_update_msi(pci_dev);
+        } else if (ranges_overlap(address, len,
+                                  pci_dev->msi_cap + PCI_MSI_ADDRESS_LO,
+                                  PCI_MSI_DATA_32 + 2 - PCI_MSI_ADDRESS_LO)) {
+            assigned_dev_update_msi_msg(pci_dev);
         }
     }
     if (assigned_dev->cap.available & ASSIGNED_DEVICE_CAP_MSIX) {
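
Just to illustrate Jan's point, the eventual notifier-based replacement
could mirror the MSI-X vector notifiers.  Something along these lines -
the names (MSIMessageChangedNotifier, msi_set_message_notifier) are made
up here, nothing like this exists yet:

/* Hypothetical sketch, not existing API: the PCI core (or a side
 * channel such as the POWER firmware interface) would fire the
 * notifier whenever the MSI address/data of an enabled vector
 * changes. */
typedef void (*MSIMessageChangedNotifier)(PCIDevice *dev,
                                          unsigned int vector,
                                          MSIMessage msg);

int msi_set_message_notifier(PCIDevice *dev,
                             MSIMessageChangedNotifier notifier);

/* The consumer side in pci-assign would then shrink to roughly: */
static void assigned_dev_msi_message_changed(PCIDevice *pci_dev,
                                             unsigned int vector,
                                             MSIMessage msg)
{
    AssignedDevice *adev = DO_UPCAST(AssignedDevice, dev, pci_dev);

    if (adev->assigned_irq_type != ASSIGNED_IRQ_MSI) {
        return;
    }
    kvm_irqchip_update_msi_route(kvm_state, adev->msi_virq[vector], msg);
}

Config-space writes and side-channel updates would then end up in the
same callback, and the range check added to
assigned_dev_pci_write_config() above could go away again.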


