
Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines
Date: Wed, 5 Oct 2016 13:03:38 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1

On 10/04/2016 07:25 PM, Laine Stump wrote:
On 10/04/2016 11:45 AM, Alex Williamson wrote:
On Tue, 4 Oct 2016 15:59:11 +0100
"Daniel P. Berrange" <address@hidden> wrote:

On Mon, Sep 05, 2016 at 06:24:48PM +0200, Laszlo Ersek wrote:
On 09/01/16 15:22, Marcel Apfelbaum wrote:
+2.3 PCI only hierarchy
+======================
+Legacy PCI devices can be plugged into pcie.0 as Integrated Devices or
+into a DMI-PCI bridge. PCI-PCI bridges can be plugged into DMI-PCI bridges
+and can be nested to a depth of 6-7. DMI-PCI bridges should be plugged
+only into the pcie.0 bus.
+
+   pcie.0 bus
+   ----------------------------------------------
+        |                            |
+   -----------               ------------------
+   | PCI Dev |               | DMI-PCI BRIDGE |
+   ----------                ------------------
+                               |            |
+                        -----------    ------------------
+                        | PCI Dev |    | PCI-PCI Bridge |
+                        -----------    ------------------
+                                         |           |
+                                  -----------     -----------
+                                  | PCI Dev |     | PCI Dev |
+                                  -----------     -----------
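
(For illustration only, not part of the quoted patch: a minimal QEMU
command-line sketch of the topology in the diagram above. The device IDs and
the e1000 NICs, used here as stand-in legacy PCI devices, are assumptions.)

    # Assumed sketch: i82801b11-bridge is the DMI-PCI bridge, pci-bridge is
    # the PCI-PCI bridge; e1000 stands in for any legacy PCI device.
    qemu-system-x86_64 -M q35 \
      -device e1000,id=nic0,bus=pcie.0 \
      -device i82801b11-bridge,id=dmi_pci_bridge,bus=pcie.0 \
      -device e1000,id=nic1,bus=dmi_pci_bridge,addr=0x1 \
      -device pci-bridge,id=pci_bridge,bus=dmi_pci_bridge,chassis_nr=1,addr=0x2 \
      -device e1000,id=nic2,bus=pci_bridge,addr=0x1 \
      -device e1000,id=nic3,bus=pci_bridge,addr=0x2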

Works for me, but I would again elaborate a little bit on keeping the
hierarchy flat.

First, in order to preserve compatibility with libvirt's current
behavior, let's not plug a PCI device directly into the DMI-PCI bridge,
even if that's possible otherwise. Let's just say

- there should be at most one DMI-PCI bridge (if a legacy PCI hierarchy
is required),

Why do you suggest this? If the guest has multiple NUMA nodes
and you're creating a PXB for each NUMA node, then it looks valid
to want to have a DMI-PCI bridge attached to each PXB, so you can
have legacy PCI devices on each NUMA node, instead of putting them
all on the PCI bridge without NUMA affinity.
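
(Illustration, not from the original mail: a sketch of that per-NUMA-node
layout on the QEMU command line. The pxb-pcie bus numbers and device IDs are
assumptions, and whether a DMI-PCI bridge may be plugged directly into a pxb
root may depend on the QEMU version.)

    # Assumed sketch: one pxb-pcie per guest NUMA node, each carrying a
    # DMI-PCI bridge so legacy PCI devices keep NUMA affinity.
    qemu-system-x86_64 -M q35 -smp 2 -m 2G \
      -numa node,nodeid=0 -numa node,nodeid=1 \
      -device pxb-pcie,id=pxb0,bus=pcie.0,bus_nr=0x10,numa_node=0 \
      -device pxb-pcie,id=pxb1,bus=pcie.0,bus_nr=0x20,numa_node=1 \
      -device i82801b11-bridge,id=dmi0,bus=pxb0 \
      -device i82801b11-bridge,id=dmi1,bus=pxb1 \
      -device pci-bridge,id=br0,bus=dmi0,chassis_nr=1 \
      -device pci-bridge,id=br1,bus=dmi1,chassis_nr=2 \
      -device e1000,bus=br0 -device e1000,bus=br1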

Seems like this is one of those "generic" vs "specific" device issues.
We use the DMI-to-PCI bridge as if it were a PCIe-to-PCI bridge, but
DMI is actually an Intel proprietary interface, the bridge just has the
same software interface as a PCI bridge.  So while you can use it as a
generic PCIe-to-PCI bridge, it's at least going to make me cringe every
time.


If using it this way makes kittens cry or something, then we'd be happy to use 
a generic pcie-to-pci bridge if somebody created one :-)



- only PCI-PCI bridges should be plugged into the DMI-PCI bridge,

What's the rationale for that, as opposed to plugging devices directly
into the DMI-PCI bridge, which seems to work?


Hi,

IIRC, something about hotplug, but from a PCI perspective it doesn't
make any sense to me either.


Indeed, the reason to plug the PCI bridge into the DMI-TO-PCI bridge
would be hot-plug support.
PCI bridges can support hotplug on Q35.
There is even an RFC on the list doing that:
    https://lists.gnu.org/archive/html/qemu-devel/2016-05/msg05681.html

The DMI-PCI bridge is another story. From what I understand, the actual
device (i82801b11) does not support hotplug, and the chances of making it
work are minimal.



At one point Marcel and Michael were discussing the possibility of making
hotplug work on a dmi-to-pci-bridge. Currently it doesn't even work for
pci-bridge, so (as I think I said in another message just now) it is kind of
pointless, although when I asked about eliminating use of pci-bridge in favor
of just using dmi-to-pci-bridge directly, I got lots of "no" votes.


Since we have an RFC showing it is possible to have hotplug for PCI devices
plugged into PCI bridges, it is better to continue using the PCI bridge until
one of the below happens:
 1 - pci-bridge ACPI hotplug becomes possible
 2 - i82801b11 ACPI hotplug becomes possible
 3 - a new pcie-pci bridge is coded


 Same with the restriction against using slot 0 on PCI bridges; there's no
basis for that except on the root bus.

I tried allowing devices to be plugged into slot 0 of a pci-bridge in libvirt -
qemu barfed, so I moved the "minSlot" for pci-bridge back up to 1. Slot 0 is
completely usable on a dmi-to-pci-bridge though (and libvirt allows it). At
this point, even if qemu enabled using slot 0 of a pci-bridge, libvirt wouldn't
be able to expose that to users (unless the min/max slot of each PCI controller
were made visible somewhere via QMP).


The reason for not being able to plug a device into slot 0 of a PCI bridge is
the SHPC (Standard Hot-Plug Controller) device embedded in the PCI bridge by
default; the SHPC spec requires this.
If one disables it with shpc=false, slot 0 becomes usable.
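
(Illustration, not from the original mail: a sketch of the shpc=false case.
The IDs and the e1000 device are assumptions.)

    # Assumed sketch: with SHPC disabled the bridge no longer reserves slot 0,
    # so a device can in principle be placed at addr=0x0.
    qemu-system-x86_64 -M q35 \
      -device i82801b11-bridge,id=dmi,bus=pcie.0 \
      -device pci-bridge,id=br0,bus=dmi,chassis_nr=1,shpc=off \
      -device e1000,bus=br0,addr=0x0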

Funny thing, the SHPC is not actually used by either i440fx or Q35 machines:
for i440fx we use ACPI-based PCI hotplug, and for Q35 we use PCIe native hotplug.

Should we default shpc to off?

Thanks,
Marcel





