Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines


From: Laine Stump
Subject: Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines
Date: Tue, 4 Oct 2016 14:08:45 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.3.0

On 10/04/2016 12:43 PM, Laszlo Ersek wrote:
On 10/04/16 18:10, Laine Stump wrote:
On 10/04/2016 11:40 AM, Laszlo Ersek wrote:
On 10/04/16 16:59, Daniel P. Berrange wrote:
On Mon, Sep 05, 2016 at 06:24:48PM +0200, Laszlo Ersek wrote:
All valid *high-level* topology goals should be permitted / covered one
way or another by this document, but in as few ways as possible --
hopefully only one way. For example, if you read the rest of the thread,
flat hierarchies are preferred to deeply nested hierarchies, because
flat ones save on bus numbers

Do they?

Yes. Nesting implies bridges, and bridges take up bus numbers. For
example, in a PCI Express switch, the upstream port of the switch
consumes a bus number, with no practical usefulness.
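For concreteness, here is a minimal sketch of one such switch on the QEMU command line, using the ioh3420 root port and the x3130-upstream / xio3130-downstream switch ports; the IDs, chassis and slot numbers below are invented for illustration:

   # Assumes a q35 machine, so the pcie.0 root bus exists.  Every
   # controller here is a bridge, so each one claims a bus number;
   # the upstream port's bus exists only to host the downstream
   # ports, which is the wasted number mentioned above.
   qemu-system-x86_64 -M q35 \
     -device ioh3420,id=rp1,bus=pcie.0,chassis=1 \
     -device x3130-upstream,id=up1,bus=rp1 \
     -device xio3130-downstream,id=dn1,bus=up1,chassis=2,slot=0 \
     -device xio3130-downstream,id=dn2,bus=up1,chassis=3,slot=1
   # Result: four bus numbers consumed for two hot-pluggable slots;
   # the rp1 and up1 secondary buses host only other bridges.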

It's all just idle number games, but what I was thinking of was the difference between plugging a bunch of root-port + upstream + downstream x N combos directly into pcie-root (flat), vs. plugging the first into pcie-root and then each subsequent one into, e.g., the last downstream port of the previous set. Take the simplest case of needing 63 hotpluggable slots. In the "flat" case, you have:

   2 x pcie-root-port
   2 x pcie-switch-upstream-port
   63 x pcie-switch-downstream-port

In the "nested" or "chained" case you have:

   1 x pcie-root-port
   1 x pcie-switch-upstream-port
   32 x pcie-switch-downstream-port
   1 x pcie-switch-upstream-port
   32 x pcie-switch-downstream-port

so you use the same number of PCI controllers (67) either way.

Of course if you're talking about the difference between using upstream+downstream vs. just having a bunch of pcie-root-ports directly on pcie-root, then you're correct, but only marginally - for 63 hotpluggable ports you would need just 63 x pcie-root-port, a savings of 4 controllers over the 67 above, or about 6%. (Of course this is all moot, since you run out of ioport space after, what, 7 controllers that need it anyway? :-P)
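A rough sketch of the chained variant, written as -device arguments appended to a q35 QEMU invocation; the IDs, chassis and slot numbers are invented, and 30 of each switch's 32 downstream ports are omitted (each would be one more xio3130-downstream line):

   # The second upstream port plugs into the last downstream port of
   # the first switch, so a single root port suffices.  Controller
   # count matches the flat layout: 1 + 1 + 32 + 1 + 32 = 67, with
   # 31 + 32 = 63 downstream ports left free for hotplug.
   -device ioh3420,id=rp1,bus=pcie.0,chassis=1 \
   -device x3130-upstream,id=up1,bus=rp1 \
   -device xio3130-downstream,id=up1dn0,bus=up1,chassis=2,slot=0 \
   -device xio3130-downstream,id=up1dn31,bus=up1,chassis=33,slot=31 \
   -device x3130-upstream,id=up2,bus=up1dn31 \
   -device xio3130-downstream,id=up2dn0,bus=up2,chassis=34,slot=0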


IIRC we collectively devised a flat pattern elsewhere in the thread
where you could exhaust the 0..255 bus number space such that almost
every bridge (= taking up a bus number) would also be capable of
accepting a hot-plugged or cold-plugged PCI Express device. That is,
practically no wasted bus numbers.

Hm.... search this message for "population algorithm":

https://www.mail-archive.com/qemu-devel@nongnu.org/msg394730.html

and then Gerd's big improvement / simplification on it, with multifunction:

https://www.mail-archive.com/qemu-devel@nongnu.org/msg395437.html

In Gerd's scheme, you'd need only one or two (I'm lazy to count
exactly :)) PCI Express switches to exhaust all bus numbers. Minimal
waste due to upstream ports.

Yep. And in response to his message, that's what I'm implementing as the default strategy in libvirt :-)
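For reference, a sketch of what that multifunction scheme looks like: eight ioh3420 root ports share one slot of pcie.0 as functions 0-7. The addresses and chassis numbers here are invented, and functions 0x3 through 0x7 follow the same pattern:

   # Only function 0 of the slot needs multifunction=on.  Each root
   # port still consumes one bus number, but none are wasted on
   # upstream ports, and roughly 30 free pcie.0 slots x 8 functions
   # comes close to exhausting the 0..255 bus number range by itself.
   -device ioh3420,id=rp0,bus=pcie.0,addr=0x2.0x0,multifunction=on,chassis=1 \
   -device ioh3420,id=rp1,bus=pcie.0,addr=0x2.0x1,chassis=2 \
   -device ioh3420,id=rp2,bus=pcie.0,addr=0x2.0x2,chassis=3

Hotplug then happens on each root port's secondary bus, exactly as with a single-function root port; the function placement of the port itself doesn't matter.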




