Hi Tomasz,
On 01/08/2017 13:01, Tomasz Nowicki wrote:
Hi Eric,
Just letting you know that I am facing another issue with the following
setup:
1. host (4.12 kernel & 64K page) and VM (4.12 kernel & 64K page)
2. QEMU + -netdev type=tap,ifname=tap,id=net0 -device
virtio-net-pci,netdev=net0,iommu_platform,disable-modern=off,disable-legacy=on
3. On the VM, I allocate some huge pages and run the DPDK testpmd app:
# echo 4 > /sys/kernel/mm/hugepages/hugepages-524288kB/nr_hugepages
# ./dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:00:02.0
# ./dpdk/build/app/testpmd -l 0-13 -n 4 -w 0000:00:02.0 --
--disable-hw-vlan-filter --disable-rss -i
EAL: Detected 14 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:02.0 on NUMA socket -1
EAL: probe driver: 1af4:1041 net_virtio
EAL: using IOMMU type 1 (Type 1)
EAL: iommu_map_dma vaddr ffff20000000 size 80000000 iova 120000000
EAL: Can't write to PCI bar (0) : offset (12)
EAL: Can't read from PCI bar (0) : offset (12)
EAL: Can't read from PCI bar (0) : offset (12)
EAL: Can't write to PCI bar (0) : offset (12)
EAL: Can't read from PCI bar (0) : offset (12)
EAL: Can't write to PCI bar (0) : offset (12)
EAL: Can't read from PCI bar (0) : offset (0)
EAL: Can't write to PCI bar (0) : offset (4)
EAL: Can't write to PCI bar (0) : offset (14)
EAL: Can't write to PCI bar (0) : offset (e)
EAL: Can't read from PCI bar (0) : offset (c)
EAL: Requested device 0000:00:02.0 cannot be used
EAL: No probed ethernet devices
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=251456, size=2176,
socket=0
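
As an aside, before launching testpmd it can be worth confirming that the kernel actually honored the nr_hugepages request. The snippet below is only an illustrative sketch; the hugepage size directory (hugepages-524288kB here) is an assumption tied to the 64K-page arm64 kernel's 512M default hugepage size, and differs on a 4K-page kernel:

```shell
#!/bin/sh
# Illustrative sanity check: list the global hugepage counters the
# kernel exposes. HugePages_Total should reflect the value written to
# nr_hugepages (4 in the setup above) and HugePages_Free should be
# nonzero before testpmd has reserved them.
grep -i '^HugePages_' /proc/meminfo
```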
When the VM uses *4K pages*, the same setup works fine. I will work on
this, but please let me know in case you already know what is going on.
No, I did not face that one. I was able to launch testpmd without such
an early message. However, I assigned an igbvf device to the guest and
then to DPDK; I've never tested your config.
However, as stated in my cover letter, at the moment DPDK is not working
for me because of storms of tlbi-on-maps. I intend to work on this as
soon as I get some bandwidth, sorry.