From: Sandesh Patel
Subject: Re: More than 255 vcpus Windows VM setup without viommu ?
Date: Wed, 3 Jul 2024 16:01:47 +0000
Thanks, David, for the response.
The KVM_SET_GSI_ROUTING ioctl ends up in kvm_set_routing_entry() in KVM:

    int kvm_set_routing_entry(struct kvm *kvm,
                              struct kvm_kernel_irq_routing_entry *e,
                              const struct kvm_irq_routing_entry *ue)
    {
        switch (ue->type) {
        case KVM_IRQ_ROUTING_MSI:
            e->set = kvm_set_msi;
            e->msi.address_lo = ue->u.msi.address_lo;
            e->msi.address_hi = ue->u.msi.address_hi;
            e->msi.data = ue->u.msi.data;

            if (kvm_msi_route_invalid(kvm, e))
                return -EINVAL;
            break;
        ...
        }
        ...
    }

    static inline bool kvm_msi_route_invalid(struct kvm *kvm,
                    struct kvm_kernel_irq_routing_entry *e)
    {
        return kvm->arch.x2apic_format && (e->msi.address_hi & 0xff);
    }

That means that when x2apic_format is enabled, the low byte of msi.address_hi must be zero. On the QEMU side, kvm_arch_fixup_msi_route() is responsible for fixing up the msi.address_hi value in the MSI routing entry that is passed to KVM.
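For context, here is a minimal sketch (my own illustration, not QEMU or kernel code) of how a 32-bit x2APIC destination ID is split across an MSI address under KVM's extended format (KVM_X2APIC_API_USE_32BIT_IDS): bits 7:0 of the ID go into address_lo bits 19:12 as in the classic xAPIC layout, while bits 31:8 go into address_hi bits 31:8, which is exactly why the low byte of address_hi has to stay zero:

    #include <stdint.h>

    struct msi_addr {
        uint32_t address_lo;
        uint32_t address_hi;
    };

    /* Encode a 32-bit x2APIC destination ID into an MSI address,
     * assuming KVM's extended format (KVM_X2APIC_API_USE_32BIT_IDS). */
    static struct msi_addr encode_x2apic_dest(uint32_t dest_id)
    {
        struct msi_addr a;

        /* Classic xAPIC layout: dest_id[7:0] in address_lo bits 19:12. */
        a.address_lo = 0xfee00000u | ((dest_id & 0xff) << 12);

        /* Extended part: dest_id[31:8] in address_hi bits 31:8. The low
         * byte of address_hi stays zero, which is exactly what
         * kvm_msi_route_invalid() enforces. */
        a.address_hi = dest_id & 0xffffff00u;

        return a;
    }

For APIC ID 0x100 (the 257th vCPU) this yields address_hi = 0x100; an entry with address_hi = 0x1 would instead carry destination bits in the low byte and be rejected.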
When the vIOMMU was enabled, this function received msi.address_hi = 0x0 on input; without the vIOMMU, it received msi.address_hi = 0x1 for one of the entries. In both cases the same value was returned unchanged and saved as the routing entry.
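Since 0x1 has a nonzero low byte, that entry trips the kvm_msi_route_invalid() check quoted above. A standalone sketch (my re-implementation for illustration, not kernel code) makes the failure mode concrete:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the kvm_msi_route_invalid() logic quoted above. */
    static bool msi_route_invalid(bool x2apic_format, uint32_t address_hi)
    {
        return x2apic_format && (address_hi & 0xff);
    }

    int main(void)
    {
        /* address_hi = 0x0 (vIOMMU enabled): passes the check. */
        printf("0x0 -> %s\n", msi_route_invalid(true, 0x0) ? "-EINVAL" : "ok");

        /* address_hi = 0x1 (no vIOMMU): nonzero low byte, so
         * KVM_SET_GSI_ROUTING would fail with -EINVAL. */
        printf("0x1 -> %s\n", msi_route_invalid(true, 0x1) ? "-EINVAL" : "ok");
        return 0;
    }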
I think not. It looks like there is a difference between how Hyper-V limits IRQ delivery and how QEMU/KVM do it.
Thanks for the suggestion. It avoids DMA translations, so there is no major performance loss.