
Re: [Qemu-devel] About virtio device hotplug in Q35! [External email: handle with caution]


From: Bob Chen
Subject: Re: [Qemu-devel] About virtio device hotplug in Q35! [External email: handle with caution]
Date: Tue, 1 Aug 2017 13:04:46 +0800

Hi,

This is a sketch of my hardware topology.

          CPU0         <- QPI ->        CPU1
           |                             |
    Root Port(at PCIe.0)        Root Port(at PCIe.1)
       /        \                   /       \
    Switch    Switch             Switch    Switch
     /   \      /  \              /   \    /    \
   GPU   GPU  GPU  GPU          GPU   GPU GPU   GPU


Below are the peer-to-peer (p2p) bandwidth test results, in GB/s.

Host:
   D\D     0      1      2      3      4      5      6      7
     0 426.91  25.32  19.72  19.72  19.69  19.68  19.75  19.66
     1  25.31 427.61  19.74  19.72  19.66  19.68  19.74  19.73
     2  19.73  19.73 429.49  25.33  19.66  19.74  19.73  19.74
     3  19.72  19.71  25.36 426.68  19.70  19.71  19.77  19.74
     4  19.72  19.72  19.73  19.75 425.75  25.33  19.72  19.71
     5  19.71  19.75  19.76  19.75  25.35 428.11  19.69  19.70
     6  19.76  19.72  19.79  19.78  19.73  19.74 425.75  25.35
     7  19.69  19.75  19.79  19.75  19.72  19.72  25.39 427.15

VM:
   D\D     0      1      2      3      4      5      6      7
     0 427.38  10.52  18.99  19.11  19.75  19.62  19.75  19.71
     1  10.53 426.68  19.28  19.19  19.73  19.71  19.72  19.73
     2  18.88  19.30 426.92  10.48  19.66  19.71  19.67  19.68
     3  18.93  19.18  10.45 426.94  19.69  19.72  19.67  19.72
     4  19.60  19.66  19.69  19.70 428.13  10.49  19.40  19.57
     5  19.52  19.74  19.72  19.69  10.44 426.45  19.68  19.61
     6  19.63  19.50  19.72  19.64  19.59  19.66 426.91  10.47
     7  19.69  19.75  19.70  19.69  19.66  19.74  10.45 426.23


In the VM, the bandwidth between two GPUs under the same physical switch is
noticeably lower, for the reasons you explained in the earlier thread.

What confuses me most is that GPUs under different switches reach the same
speed in the VM as on the host. Does that mean that, after IOMMU address
translation, the traffic crosses the QPI bus by default, even though the two
devices are not on the same PCIe bus?

In short, I'm trying to build a large deep-learning/HPC infrastructure for a
cloud environment. NVIDIA has already released a Docker-based solution, and I
believe QEMU/VMs could do the same. Hopefully I can get some help from the
community.

The emulated switch you suggested looks like a good option to me; I will
give it a try.
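
The kind of topology I plan to try is roughly the following (a sketch only:
Q35 machine, one emulated root port, an emulated X3130/XIO3130 switch with
two downstream ports, and two GPUs assigned via vfio-pci; the IDs,
chassis/slot numbers and host BDFs are placeholders):

qemu-system-x86_64 -M q35 ... \
    -device pcie-root-port,id=rp0,bus=pcie.0,chassis=1,slot=1 \
    -device x3130-upstream,id=up0,bus=rp0 \
    -device xio3130-downstream,id=dn0,bus=up0,chassis=2,slot=0 \
    -device xio3130-downstream,id=dn1,bus=up0,chassis=2,slot=1 \
    -device vfio-pci,host=0000:04:00.0,bus=dn0 \
    -device vfio-pci,host=0000:05:00.0,bus=dn1

That way the guest at least sees a switch above each GPU pair, even though,
as was explained earlier in the thread, the actual peer-to-peer data path
still goes up to the IOMMU at the root complex.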


Thanks,
Bob


2017-07-27 1:32 GMT+08:00 Alex Williamson <address@hidden>:

> On Wed, 26 Jul 2017 19:06:58 +0300
> "Michael S. Tsirkin" <address@hidden> wrote:
>
> > On Wed, Jul 26, 2017 at 09:29:31AM -0600, Alex Williamson wrote:
> > > On Wed, 26 Jul 2017 09:21:38 +0300
> > > Marcel Apfelbaum <address@hidden> wrote:
> > >
> > > > On 25/07/2017 11:53, 陈博 wrote:
> > > > > To accelerate data transfers between devices under the same PCIe Root
> > > > > Port or Switch.
> > > > >
> > > > > See https://lists.nongnu.org/archive/html/qemu-devel/2017-07/msg07209.html
> > > > >
> > > >
> > > > Hi,
> > > >
> > > > It may be possible, but maybe PCIe Switch assignment is not
> > > > the only way to go.
> > > >
> > > > Adding Alex and Michael for their input on this matter.
> > > > More info at:
> > > > https://lists.nongnu.org/archive/html/qemu-devel/2017-07/msg07209.html
> > >
> > > I think you need to look at where the IOMMU is in the topology and what
> > > address space the devices are working in when assigned to a VM to
> > > realize that it doesn't make any sense to assign switch ports to a VM.
> > > GPUs cannot do switch level peer to peer when assigned because they are
> > > operating in an I/O virtual address space.  This is why we configure
> > > ACS on downstream ports to prevent peer to peer.  Peer to peer
> > > transactions must be forwarded upstream by the switch ports in order to
> > > reach the IOMMU for translation.  Note however that we do populate peer
> > > to peer mappings within the IOMMU, so if the hardware supports it, the
> > > IOMMU can reflect the transaction back out to the I/O bus to reach the
> > > other device without CPU involvement.
> > >
> > > Therefore I think the better solution, if it encourages the NVIDIA
> > > driver to do the right thing, is to use emulated switches.  Assigning
> > > the physical switch would really do nothing more than make the PCIe link
> > > information more correct in the VM, everything else about the switch
> > > would be emulated.  Even still, unless you have an I/O topology which
> > > integrates the IOMMU into the switch itself, the data flow still needs
> > > to go all the way to the root complex to hit the IOMMU before being
> > > reflected to the other device.  Direct peer to peer between downstream
> > > switch ports operates in the wrong address space.  Thanks,
> > >
> > > Alex
> >
> > That's true of course. What would make sense would be for
> > hardware vendors to add ATS support to their cards.
> >
> > Then peer to peer should be allowed by the hypervisor for translated
> > transactions.
> >
> > Gives you the performance benefit without the security issues.
> >
> > Does anyone know whether any hardware implements this?
>
> GPUs often do implement ATS and the ACS DT (Direct Translated P2P)
> capability should handle routing requests with the Address Type field
> indicating a translated address directly between downstream ports.  DT
> is however not part of the standard set of ACS bits that we enable.  It
> seems like it might be fairly easy to poke the DT enable bit with
> setpci from userspace to test whether this "just works", providing of
> course you can get the driver to attempt to do peer to peer and ATS is
> already functioning on the GPU.  If so, then we should look at where
> in the code to do that enabling automatically.  Thanks,
>
> Alex
>
>
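
As for the Direct Translated P2P enable bit mentioned above: a rough C
sketch of what that setpci poke amounts to is below. It walks the PCIe
extended capability list of a downstream port through sysfs, finds the ACS
capability (ID 0x000D), and sets bit 6 (Direct Translated P2P Enable) of the
ACS Control register at offset +6. The sysfs path is a placeholder, it must
run as root, and it assumes a little-endian host.

/* acs_dt_enable.c - enable ACS Direct Translated P2P on one port. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define PCI_EXT_CAP_START  0x100
#define PCI_EXT_CAP_ID_ACS 0x000d
#define PCI_ACS_CTRL       0x06    /* ACS Control register offset  */
#define PCI_ACS_DT         0x0040  /* Direct Translated P2P Enable */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1]
        : "/sys/bus/pci/devices/0000:03:08.0/config";  /* placeholder BDF */
    int fd = open(path, O_RDWR);
    if (fd < 0) { perror(path); return 1; }

    /* Walk the extended capability list starting at offset 0x100. */
    int off = PCI_EXT_CAP_START;
    while (off) {
        uint32_t hdr = 0;
        if (pread(fd, &hdr, 4, off) != 4 || hdr == 0 || hdr == 0xffffffff)
            break;                               /* end of / no ext caps */
        if ((hdr & 0xffff) == PCI_EXT_CAP_ID_ACS) {
            uint16_t ctrl = 0;
            pread(fd, &ctrl, 2, off + PCI_ACS_CTRL);
            printf("ACS cap at 0x%03x, control = 0x%04x\n", off, ctrl);
            ctrl |= PCI_ACS_DT;
            if (pwrite(fd, &ctrl, 2, off + PCI_ACS_CTRL) != 2) {
                perror("pwrite");
                close(fd);
                return 1;
            }
            printf("Direct Translated P2P enabled, control = 0x%04x\n", ctrl);
            close(fd);
            return 0;
        }
        off = (hdr >> 20) & 0xffc;               /* next capability offset */
    }
    fprintf(stderr, "no ACS capability found in %s\n", path);
    close(fd);
    return 1;
}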

