qemu-devel

Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines
Date: Mon, 10 Oct 2016 17:36:17 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1

On 10/10/2016 03:02 PM, Andrea Bolognani wrote:
On Tue, 2016-10-04 at 12:52 -0600, Alex Williamson wrote:
It's all just idle number games, but what I was thinking of was the
difference between plugging a bunch of root-port+upstream+downstream x N
combos directly into pcie-root (flat), vs. plugging the first into
pcie-root, and then subsequent ones into e.g. the last downstream port
of the previous set. Take the simplest case of needing 63 hotpluggable
slots. In the "flat" case, you have:

     2 x pcie-root-port
     2 x pcie-switch-upstream-port
     63 x pcie-switch-downstream-port

In the "nested" or "chained" case you have:

     1 x pcie-root-port
     1 x pcie-switch-upstream-port
     32 x pcie-switch-downstream-port
     1 x pcie-switch-upstream-port
     32 x pcie-switch-downstream-port
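For what it's worth, the two layouts can be tallied with a quick back-of-the-envelope script (an illustrative sketch, not QEMU code; the helper name and the one-bus-number-per-controller accounting are assumptions):

```python
def tally(root_ports, upstream, downstream, chained_upstreams):
    """Return (controllers, bus_numbers_used, hotpluggable_slots).

    Assumes each root port, switch upstream port, and switch downstream
    port consumes one PCI bus number for its secondary bus, plus pcie.0.
    """
    controllers = root_ports + upstream + downstream
    buses = controllers + 1                  # + pcie.0 itself
    slots = downstream - chained_upstreams   # ports lost to chained switches
    return controllers, buses, slots

# Flat: two root ports, one switch behind each (32 + 31 downstream ports).
print("flat:  ", tally(root_ports=2, upstream=2, downstream=63, chained_upstreams=0))
# Nested: the second switch hangs off the last downstream port of the first,
# so one of the 64 downstream ports is consumed by the second upstream port.
print("nested:", tally(root_ports=1, upstream=2, downstream=64, chained_upstreams=1))
```

Under this simplified accounting, both layouts cost the same: 67 controllers, 68 bus numbers, 63 hotpluggable slots.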

You're not thinking in enough dimensions.  A single root port can host
multiple sub-hierarchies on its own.  We can have a multi-function
upstream switch, so you can have 8 upstream ports (00.{0-7}).  If we
implemented ARI on the upstream ports, we could have 256 upstream ports
attached to a single root port, but of course then we've run out of
bus numbers before we've even gotten to actual device buses.

Another option: look at the downstream ports. Why do they each need to
be in separate slots?  We have the address space of an entire bus to
work with, so we can also create multi-function downstream ports, which
gives us 256 downstream ports per upstream port.  Oops, we just ran out
of bus numbers again, but at least actual devices can be attached.
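That exhaustion is easy to check with the same kind of arithmetic (a sketch only; it assumes one bus number per controller secondary bus and the usual 256 bus numbers per PCI domain):

```python
BUS_NUMBERS = 256        # one PCI domain: buses 0x00..0xff

# One root port hosting one switch whose downstream ports are packed
# as multi-function (ARI) devices: up to 256 downstream ports share
# the switch's internal bus, but each still needs its own secondary bus.
used = 1                 # pcie.0
used += 1                # root port secondary bus
used += 1                # switch internal bus (upstream port secondary)
downstream_ports = 256   # ARI-density downstream ports on the internal bus
used += downstream_ports # one secondary bus per downstream port

print("buses needed:", used, "available:", BUS_NUMBERS)
print("over budget by:", used - BUS_NUMBERS)
```

So the full 256 downstream ports never fit; a few must be dropped before endpoints can even be enumerated.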

What's the advantage in using ARI to stuff more than eight of
anything other than Endpoint Devices into a single slot?

I mean, if we just fill up all 32 slots in a PCIe Root Bus
with 8 PCIe Root Ports each we already end up having 256
hotpluggable slots[1]. Why would it be preferable to use
ARI, or even PCIe Switches, instead?
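Andrea's count, and the bus-number caveat in footnote [1], work out as follows (a back-of-the-envelope sketch; the constants are the usual PCI limits):

```python
SLOTS = 32           # slots on a PCIe Root Bus
FUNCS_PER_SLOT = 8   # functions 0..7 per slot, without ARI
BUS_NUMBERS = 256    # buses 0x00..0xff in one PCI domain

# Fill every slot with 8 root ports: 256 hotpluggable slots, but each
# root port needs a secondary bus, plus pcie.0 itself.
root_ports = SLOTS * FUNCS_PER_SLOT
print(root_ports, "root ports need", 1 + root_ports, "bus numbers")

# The footnote's fix: limit the last slot to 7 root ports.
root_ports = (SLOTS - 1) * FUNCS_PER_SLOT + 7
print(root_ports, "root ports need", 1 + root_ports, "bus numbers")
```

256 root ports would need 257 bus numbers; capping the last slot at 7 gives 255 root ports, which exactly fits the 256 available buses.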


What if you need more devices (functions, actually)?

If some of the pcie.0 slots are occupied by other integrated devices
and you need more than 256 functions, you can:
(1) Add a PCIe Switch if you need hot-plug support; you are pretty
    limited by the bus numbers, but it will give you a few more slots.
(2) Use multi-function devices per root port if you are not interested
    in hotplug. In this case ARI will give you up to 256 functions per
    Root Port.

Now the question is: why ARI? Better utilization of "problematic"
resources like bus numbers and I/O space. All of that only matters if
you need an insane number of devices, but we don't judge :).
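For background, the better utilization comes from how ARI reinterprets the PCIe Routing ID fields below a port; a quick sketch of the arithmetic (variable names are illustrative):

```python
# Conventional PCIe Routing IDs split 8 bits into a 5-bit device number
# and a 3-bit function number.  Below a root or downstream port only
# device 0 exists, so a single slot tops out at 8 functions:
per_port_without_ari = 2 ** 3

# ARI (Alternative Routing-ID Interpretation) merges the two fields into
# a single 8-bit function number, so the device below the port can
# expose up to 256 functions without consuming extra bus numbers:
per_port_with_ari = 2 ** 8

print(per_port_without_ari, "functions ->", per_port_with_ari, "functions")
```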

Thanks,
Marcel


[1] The last slot will have to be limited to 7 PCIe Root
    Ports if we don't want to run out of bus numbers

I don't follow how this will 'save' us. If all the root ports
are in use and you leave space for one more, what can you do with it?

--
Andrea Bolognani / Red Hat / Virtualization




