From: David Woodhouse
Subject: [RFC PATCH v4 41/47] hw/xen: Support HVM_PARAM_CALLBACK_TYPE_PCI_INTX callback
Date: Wed, 21 Dec 2022 01:06:17 +0000
From: David Woodhouse <dwmw@amazon.co.uk>
The guest is permitted to specify an arbitrary domain/bus/device/function
and INTX pin from which the callback IRQ shall appear to have come.
In QEMU we can only easily do this for devices that actually exist, and
even that requires us to "know" that it's a PCMachine in order to find
the PCI root bus, although that's OK really because it's always true.
We also don't get notified of INTX routing changes, because we
can't do that as a passive observer; if we try to register a notifier
it will overwrite any existing notifier callback on the device.
But in practice, guests using PCI_INTX will only ever use pin A on the
Xen platform device, and won't swizzle the INTX routing after they set
it up. So this is just fine.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
hw/i386/kvm/xen_evtchn.c | 69 ++++++++++++++++++++++++++++++++++------
1 file changed, 60 insertions(+), 9 deletions(-)
diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index 255795b6e2..6f9fa78c69 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -25,6 +25,8 @@
#include "hw/sysbus.h"
#include "hw/xen/xen.h"
#include "hw/i386/x86.h"
+#include "hw/i386/pc.h"
+#include "hw/pci/pci.h"
#include "hw/irq.h"
#include "xen_evtchn.h"
@@ -100,6 +102,7 @@ struct XenEvtchnState {
uint64_t callback_param;
bool evtchn_in_kernel;
+ uint32_t callback_gsi;
QemuMutex port_lock;
uint32_t nr_ports;
@@ -205,16 +208,42 @@ static void xen_evtchn_register_types(void)
type_init(xen_evtchn_register_types)
-static void xen_evtchn_set_callback_level(XenEvtchnState *s, int level)
+static int set_callback_pci_intx(XenEvtchnState *s, uint64_t param)
{
- uint32_t param = (uint32_t)s->callback_param;
+ PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
+ uint8_t pin = param & 3;
+ uint8_t devfn = (param >> 8) & 0xff;
+ uint16_t bus = (param >> 16) & 0xffff;
+ uint16_t domain = (param >> 32) & 0xffff;
+ PCIDevice *pdev;
+ PCIINTxRoute r;
+
+ if (domain || !pcms) {
+ return 0;
+ }
- switch (s->callback_param >> CALLBACK_VIA_TYPE_SHIFT) {
- case HVM_PARAM_CALLBACK_TYPE_GSI:
- if (param < GSI_NUM_PINS) {
- qemu_set_irq(s->gsis[param], level);
- }
- break;
+ pdev = pci_find_device(pcms->bus, bus, devfn);
+ if (!pdev) {
+ return 0;
+ }
+
+ r = pci_device_route_intx_to_irq(pdev, pin);
+ if (r.mode != PCI_INTX_ENABLED) {
+ return 0;
+ }
+
+ /*
+ * Hm, can we be notified of INTX routing changes? Not without
+ * *owning* the device and being allowed to overwrite its own
+ * ->intx_routing_notifier, AFAICT. So let's not.
+ */
+ return r.irq;
+}
+
+static void xen_evtchn_set_callback_level(XenEvtchnState *s, int level)
+{
+ if (s->callback_gsi && s->callback_gsi < GSI_NUM_PINS) {
+ qemu_set_irq(s->gsis[s->callback_gsi], level);
}
}
@@ -231,6 +260,8 @@ int xen_evtchn_set_callback_param(uint64_t param)
{
XenEvtchnState *s = xen_evtchn_singleton;
bool in_kernel = false;
+ uint32_t gsi = 0;
+ int type = param >> CALLBACK_VIA_TYPE_SHIFT;
int ret;
if (!s) {
@@ -239,7 +270,7 @@ int xen_evtchn_set_callback_param(uint64_t param)
qemu_mutex_lock(&s->port_lock);
- switch (param >> CALLBACK_VIA_TYPE_SHIFT) {
+ switch (type) {
case HVM_PARAM_CALLBACK_TYPE_VECTOR: {
struct kvm_xen_hvm_attr xa = {
.type = KVM_XEN_ATTR_TYPE_UPCALL_VECTOR,
@@ -250,10 +281,17 @@ int xen_evtchn_set_callback_param(uint64_t param)
if (!ret && kvm_xen_has_cap(EVTCHN_SEND)) {
in_kernel = true;
}
+ gsi = 0;
break;
}
+ case HVM_PARAM_CALLBACK_TYPE_PCI_INTX:
+ gsi = set_callback_pci_intx(s, param);
+ ret = gsi ? 0 : -EINVAL;
+ break;
+
case HVM_PARAM_CALLBACK_TYPE_GSI:
+ gsi = (uint32_t)param;
ret = 0;
break;
@@ -265,6 +303,19 @@ int xen_evtchn_set_callback_param(uint64_t param)
if (!ret) {
s->callback_param = param;
s->evtchn_in_kernel = in_kernel;
+
+ if (gsi != s->callback_gsi) {
+ struct vcpu_info *vi = kvm_xen_get_vcpu_info_hva(0);
+
+ xen_evtchn_set_callback_level(s, 0);
+ s->callback_gsi = gsi;
+
+ if (gsi && vi && vi->evtchn_upcall_pending) {
+ /* The KVM code needs to know to check and deassert */
+ kvm_xen_inject_vcpu_callback_vector(0, type);
+ xen_evtchn_set_callback_level(s, 1);
+ }
+ }
}
qemu_mutex_unlock(&s->port_lock);
--
2.35.3