From: Bandan Das
Subject: Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)
Date: Wed, 18 Mar 2015 11:24:14 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.4 (gnu/linux)

[Cc'ing netdev and Stefan]
Bandan Das <address@hidden> writes:

> jacob jacob <address@hidden> writes:
>
>> On Mon, Mar 16, 2015 at 2:12 PM, Bandan Das <address@hidden> wrote:
>>> jacob jacob <address@hidden> writes:
>>>
>>>> I also see the following in dmesg in the VM.
>>>>
>>>> [    0.095758] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
>>>> [    0.096006] acpi PNP0A03:00: ACPI _OSC support notification failed,
>>>> disabling PCIe ASPM
>>>> [    0.096915] acpi PNP0A03:00: Unable to request _OSC control (_OSC
>>>> support mask: 0x08)
>>> IIRC, for _OSC control, after the BIOS is done with whatever initialization
>>> it needs to do, it clears a bit so that the OS can take over. This message
>>> you are getting is usually a sign of a bug in the BIOS. But I don't
>>> know if this is related to your problem. Does "dmesg | grep -e DMAR -e IOMMU"
>>> give anything useful ?
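
(For reference, the check suggested above, plus the usual follow-ups for confirming that VT-d/the IOMMU is actually active on the host, look roughly like this; the intel_iommu=on parameter and the iommu_groups listing are standard practice rather than something taken from this thread:)

    # Boot messages from the DMAR/IOMMU subsystem
    dmesg | grep -e DMAR -e IOMMU

    # The host kernel normally needs intel_iommu=on on its command line for device assignment
    cat /proc/cmdline

    # With the IOMMU active, assignable devices show up in IOMMU groups
    ls /sys/kernel/iommu_groups/
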
>>
>> I do not see anything useful in the output.
>
> Ok, thanks. Can you please post the output as well?
>
>>>> [    0.097072] acpi PNP0A03:00: fail to add MMCONFIG information,
>>>> can't access extended PCI configuration space under this bridge.
>>>>
>>>> Does this indicate any issue related to PCI passthrough?
>>>>
>>>> Would really appreciate any input on how to debug this further.
>>>
>>> Did you get a chance to try a newer kernel ?
>> I am currently using 3.18.7-200.fc21.x86_64, which is pretty recent.
>> Are you suggesting trying the newer kernel just on the host? (or VM too?)
> Both, preferably to 3.19. But it's just a wild guess. I saw i40e-related fixes,
> particularly "i40e: fix un-necessary Tx hangs" in 3.19-rc5. This is not exactly
> what you are seeing, but I was still wondering if it could help.

Actually, Stefan suggests that support for this card is still sketchy
and your best bet is to try out net-next
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git
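
(Building and booting a net-next kernel on the host would go roughly as follows; the clone URL is the conventional git URL corresponding to the cgit page above, and the config steps are the usual ones rather than anything specific to this thread:)

    git clone https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
    cd net-next
    cp /boot/config-$(uname -r) .config    # start from the running kernel's config
    make olddefconfig                      # accept defaults for any new options
    make -j$(nproc)
    sudo make modules_install install      # then reboot into the new kernel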

Also, could you please post more information about your hardware setup
(chipset, processor, firmware version on the card, etc.)?
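
(Something along these lines collects most of that information; the interface name is a placeholder, and the PCI address is the one from the lspci output quoted further down:)

    lscpu                                  # processor model and flags
    sudo dmidecode -t baseboard -t bios    # board and BIOS details
    lspci -nn -s 0a:00.1                   # the XL710 function, with vendor/device IDs
    ethtool -i <interface>                 # i40e driver version and card firmware/NVM version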

Thanks,
Bandan

> Meanwhile, I am trying to get hold of a card myself to try and reproduce
> it at my end.
>
> Thanks,
> Bandan
>
>>>> On Fri, Mar 13, 2015 at 10:08 AM, jacob jacob <address@hidden> wrote:
>>>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>>>> driver. Just to rule out the possibility that there might be some driver
>>>>>> fixes that could help with this, it might be a good idea to try a 3.19
>>>>>> or later upstream kernel.
>>>>>>
>>>>>
>>>>> I tried with the latest DPDK release too (dpdk-1.8.0) and see the same issue.
>>>>> As mentioned earlier, I do not see any issues at all when running
>>>>> tests using either i40e or DPDK on the host itself.
>>>>> This is why I suspect it has something to do with KVM/libvirt.
>>>>> With both regular PCI passthrough and VF passthrough I see issues. It
>>>>> always points to some issue with packet transmission. Receive
>>>>> seems to work ok.
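
(For context, handing the whole 0a:00.1 PF to a guest through libvirt/VFIO is typically set up along these lines; the guest name and XML file name are placeholders, and the steps are the standard ones rather than the exact configuration used here:)

    # Detach the PF from the host i40e driver (libvirt does this automatically
    # when the <hostdev> entry is marked managed='yes')
    virsh nodedev-detach pci_0000_0a_00_1

    # hostdev.xml would contain a <hostdev type='pci'> element pointing at
    # domain 0x0000, bus 0x0a, slot 0x00, function 0x1
    virsh attach-device <guest-name> hostdev.xml --config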
>>>>>
>>>>>
>>>>> On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das <address@hidden> wrote:
>>>>>> jacob jacob <address@hidden> writes:
>>>>>>
>>>>>>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das <address@hidden> wrote:
>>>>>>>> jacob jacob <address@hidden> writes:
>>>>>>>>
>>>>>>>>>  Hi,
>>>>>>>>>
>>>>>>>>>  I am seeing failures when trying to do PCI passthrough of an Intel
>>>>>>>>> XL710 40G interface to a KVM VM.
>>>>>>>>>      0a:00.1 Ethernet controller: Intel Corporation Ethernet
>>>>>>>>> Controller XL710 for 40GbE QSFP+ (rev 01)
>>>>>>>>
>>>>>>>> You are assigning the PF, right ? Does assigning VFs work, or is it
>>>>>>>> the same behavior ?
>>>>>>>
>>>>>>> Yes. Assigning VFs worked ok, but that had other issues while bringing
>>>>>>> down VMs.
>>>>>>> I am interested in finding out whether PCI passthrough of the 40G Intel
>>>>>>> XL710 interface is qualified in some specific kernel/KVM release.
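
(For comparison, the VF path mentioned here is normally prepared on the host like this before the VFs are assigned to a guest; the VF count and PF interface name are just examples:)

    # Create two VFs on the XL710 PF; they show up as additional PCI functions
    echo 2 > /sys/class/net/<pf-interface>/device/sriov_numvfs
    lspci -nn | grep -i "Virtual Function"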
>>>>>>
>>>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>>>> driver. Just to rule out the possibility that there might be some driver
>>>>>> fixes that could help with this, it might be a good idea to try a 3.19
>>>>>> or later upstream kernel.
>>>>>>
>>>>>>>>> From dmesg on host:
>>>>>>>>>
>>>>>>>>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>>>>>>>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>>>>>>>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>>>>>>>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>>>>>>>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>>>>>>>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>>>>>>>>
>>>>>>>> These are harmless and are related to unimplemented PMU MSRs,
>>>>>>>> not VFIO.
>>>>>>>>
>>>>>>>> Bandan


