Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines
Date: Tue, 4 Oct 2016 09:45:51 -0600

On Tue, 4 Oct 2016 15:59:11 +0100
"Daniel P. Berrange" <address@hidden> wrote:

> On Mon, Sep 05, 2016 at 06:24:48PM +0200, Laszlo Ersek wrote:
> > On 09/01/16 15:22, Marcel Apfelbaum wrote:  
> > > +2.3 PCI only hierarchy
> > > +======================
> > > +Legacy PCI devices can be plugged into pcie.0 as Integrated Devices or
> > > +into a DMI-PCI bridge. PCI-PCI bridges can be plugged into DMI-PCI
> > > +bridges and can be nested to a depth of 6-7. DMI-PCI bridges should be
> > > +plugged only into the pcie.0 bus.
> > > +
> > > +   pcie.0 bus
> > > +   ----------------------------------------------
> > > +        |                            |
> > > +   -----------               ------------------
> > > +   | PCI Dev |               | DMI-PCI BRIDGE |
> > > +   -----------               ------------------
> > > +                               |            |
> > > +                        -----------    ------------------
> > > +                        | PCI Dev |    | PCI-PCI Bridge |
> > > +                        -----------    ------------------
> > > +                                         |           |
> > > +                                  -----------     -----------
> > > +                                  | PCI Dev |     | PCI Dev |
> > > +                                  -----------     -----------  
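For concreteness, the topology in the diagram above can be built with a
QEMU command line along these lines. This is only an illustrative sketch:
the IDs and slot numbers are arbitrary, and e1000 stands in for any legacy
PCI device; i82801b11-bridge is QEMU's DMI-PCI bridge model and pci-bridge
its PCI-PCI bridge:

   qemu-system-x86_64 -M q35 \
       -device e1000,bus=pcie.0,addr=0x5 \
       -device i82801b11-bridge,id=dmi_pci_bridge,bus=pcie.0 \
       -device e1000,bus=dmi_pci_bridge,addr=0x1 \
       -device pci-bridge,id=pci_bridge1,bus=dmi_pci_bridge,chassis_nr=1,addr=0x2 \
       -device e1000,bus=pci_bridge1,addr=0x1 \
       -device e1000,bus=pci_bridge1,addr=0x2

Note that, like the diagram, this plugs one device directly into the
DMI-PCI bridge; whether that placement should be recommended is debated
below.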
> > 
> > Works for me, but I would again elaborate a little bit on keeping the
> > hierarchy flat.
> > 
> > First, in order to preserve compatibility with libvirt's current
> > behavior, let's not plug a PCI device directly into the DMI-PCI bridge,
> > even if that's possible otherwise. Let's just say
> > 
> > - there should be at most one DMI-PCI bridge (if a legacy PCI hierarchy
> > is required),  
> 
> Why do you suggest this? If the guest has multiple NUMA nodes
> and you're creating a PXB for each NUMA node, then it seems valid
> to want a DMI-PCI bridge attached to each PXB, so you can have
> legacy PCI devices on each NUMA node, instead of putting them all
> on a single PCI bridge with no NUMA affinity.

Seems like this is one of those "generic" vs "specific" device issues.
We use the DMI-to-PCI bridge as if it were a PCIe-to-PCI bridge, but
DMI is actually a proprietary Intel interface; the bridge just has the
same software interface as a PCI bridge.  So while you can use it as a
generic PCIe-to-PCI bridge, it's at least going to make me cringe every
time.
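
As a rough illustration of the per-node layout Daniel describes above
(hypothetical IDs and bus numbers; pxb-pcie is the PCIe expander bridge
used on Q35 machines, whose numa_node property ties the extra root bus to
a guest NUMA node):

   qemu-system-x86_64 -M q35 -m 2G -smp 2,sockets=2 \
       -object memory-backend-ram,id=m0,size=1G \
       -object memory-backend-ram,id=m1,size=1G \
       -numa node,nodeid=0,cpus=0,memdev=m0 \
       -numa node,nodeid=1,cpus=1,memdev=m1 \
       -device pxb-pcie,id=pxb0,bus=pcie.0,bus_nr=64,numa_node=0 \
       -device pxb-pcie,id=pxb1,bus=pcie.0,bus_nr=128,numa_node=1 \
       -device i82801b11-bridge,id=dmi0,bus=pxb0 \
       -device i82801b11-bridge,id=dmi1,bus=pxb1 \
       -device pci-bridge,id=pci0,bus=dmi0,chassis_nr=1 \
       -device pci-bridge,id=pci1,bus=dmi1,chassis_nr=2 \
       -device e1000,bus=pci0,addr=0x1 \
       -device e1000,bus=pci1,addr=0x1

Each legacy device then sits under the expander bus of its own node.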
 
> > - only PCI-PCI bridges should be plugged into the DMI-PCI bridge,  
> 
> What's the rational for that, as opposed to plugging devices directly
> into the DMI-PCI bridge which seems to work ?

IIRC, something about hotplug, but from a PCI perspective it doesn't
make any sense to me either.  Same with the restriction against using
slot 0 on PCI bridges; there's no basis for that except on the root bus.
Thanks,

Alex
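
(For reference on the slot 0 point: QEMU itself should accept a device at
slot 0 of a pci-bridge, since only the root bus reserves slot 0 for the
host bridge. A minimal sketch, with illustrative names:

   qemu-system-x86_64 -M q35 \
       -device i82801b11-bridge,id=dmi0,bus=pcie.0 \
       -device pci-bridge,id=br0,bus=dmi0,chassis_nr=1 \
       -device e1000,bus=br0,addr=0x0

The restriction Alex mentions appears to be a management-layer convention
rather than a QEMU one.)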


