qemu-devel

Re: [Qemu-devel] >256 Virtio-net-pci hotplug Devices


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] >256 Virtio-net-pci hotplug Devices
Date: Sun, 23 Jul 2017 19:28:01 +0300
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Thunderbird/52.1.1

On 22/07/2017 2:57, Kinsella, Ray wrote:

Hi Marcel



Hi Ray,

On 21/07/2017 01:33, Marcel Apfelbaum wrote:
On 20/07/2017 3:44, Kinsella, Ray wrote:
That's strange. Please ensure the virtio devices are working in
virtio 1.0 mode (disable-modern=0,disable-legacy=1).
Let us know any problems you see.
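As a concrete sketch of the advice above (the machine type, IDs, and bus layout here are illustrative assumptions, not Ray's actual setup), a modern-only virtio-net device behind a PCIe root port might be started like this:

```shell
# Hypothetical invocation: one virtio-net device forced into virtio 1.0
# ("modern") mode with disable-modern=0,disable-legacy=1, as suggested;
# without legacy mode the device needs no IO-port BAR
qemu-system-x86_64 -machine q35 -m 2G \
  -netdev user,id=hostnet0 \
  -device pcie-root-port,id=rp0,chassis=1 \
  -device virtio-net-pci,bus=rp0,netdev=hostnet0,disable-modern=0,disable-legacy=1
```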

Not sure yet; I will try scaling it with hotplugging tomorrow.


Updates?

I have managed to scale it to 128 devices.
The kernel does complain about IO address space exhaustion.

[   83.697956] pci 0000:80:00.0: BAR 13: no space for [io  size 0x1000]
[   83.700958] pci 0000:80:00.0: BAR 13: failed to assign [io  size 0x1000]
[   83.701689] pci 0000:80:00.1: BAR 13: no space for [io  size 0x1000]
[   83.702378] pci 0000:80:00.1: BAR 13: failed to assign [io  size 0x1000]
[   83.703093] pci 0000:80:00.2: BAR 13: no space for [io  size 0x1000]

I was surprised to be running out of IO address space, since I am disabling legacy virtio. I assumed that would remove the need for SeaBIOS to allocate IO address space for the PCI Express Root Ports.

Indeed, SeaBIOS does not reserve IO ports in this case, but the Linux kernel
still decides "it knows better" and tries to allocate IO resources
anyway. This does not affect the "modern" virtio-net devices, because
they don't need IO ports in the first place.
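One way to confirm this from inside the guest (a hedged sketch; the exact output depends on your device layout) is to check that the virtio-net functions expose only memory BARs:

```shell
# List the BARs of all Red Hat/virtio functions (vendor ID 1af4); in
# virtio 1.0 mode you should see only "Memory at ..." regions and no
# "I/O ports at ..." lines
lspci -v -d 1af4: | grep -E 'Virtio|Memory at|I/O ports'
```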

One way to work around the error messages is to make the corresponding IO
header registers in the PCIe Root Port read-only, since IO support is
optional for PCI Express. I tried this some time ago; I'll get back to it.

In any case, it doesn't stop the virtio-net devices from coming up and working as expected.


Right.

[  668.692081] virtio_net virtio103 enp141s0f4: renamed from eth101
[  668.707114] virtio_net virtio130 enp144s0f7: renamed from eth128
[  668.719795] virtio_net virtio129 enp144s0f6: renamed from eth127

I encountered some issues in vhost due to open-file exhaustion, but resolved these with 'ulimit' in the usual way - burned a lot of time on that today.
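For reference, the fd-limit workaround is just the standard one: each vhost-net device keeps several file descriptors open (tap fd, vhost fd, eventfds), so hundreds of devices blow past the typical default of 1024. A minimal sketch, assuming qemu is launched from the same shell:

```shell
# Show the current soft limit on open files for this shell
ulimit -n
# Raise it for this shell and its children; going above the hard limit
# requires root or an entry in /etc/security/limits.conf
ulimit -n 65536
```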

When scaling up to 512 virtio-net devices, SeaBIOS appears to slow down considerably while configuring PCI config space - I haven't managed to get this to work yet.


Adding the SeaBIOS mailing list and maintainers; maybe there is a known
issue with configuring 500+ PCI devices.

Not really. All you have to do is add a property to the pxb-pcie/pxb
devices, pci_domain=x, then update the ACPI tables to include the pxb
domain. You also have to tweak the pxb-pcie/pxb devices a little
so they do not share bus numbers when pci_domain > 0.
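To make the proposal concrete, a multi-domain layout might one day look like the sketch below. Note that pci_domain is exactly the property being proposed here; it does not exist in QEMU, and the syntax is an assumption:

```shell
# Hypothetical future syntax: each pxb-pcie expander gets its own PCI
# domain, so the 0-255 bus number range can be reused per domain
# (bus_nr=1 appears twice on purpose; pci_domain is NOT a real option)
qemu-system-x86_64 -machine q35 \
  -device pxb-pcie,id=pxb1,bus_nr=1,pci_domain=1,bus=pcie.0 \
  -device pxb-pcie,id=pxb2,bus_nr=1,pci_domain=2,bus=pcie.0
```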

Thanks for the information, I will add it to the list.


It is also on my todo list :)

Thanks,
Marcel

Ray K



