
From: Knut Omang
Subject: Re: [Qemu-devel] [Question] Why doesn't PCIe hotplug work for Q35 machine?
Date: Wed, 20 Aug 2014 06:39:24 +0200

On Wed, 2014-08-20 at 02:16 +0000, Gonglei (Arei) wrote:
> > -----Original Message-----
> > From: Michael S. Tsirkin [mailto:address@hidden
> > Sent: Wednesday, August 20, 2014 5:19 AM
> > To: Gonglei (Arei)
> > Cc: Paolo Bonzini; Marcel Apfelbaum; address@hidden;
> > address@hidden; address@hidden; Huangweidong (C)
> > Subject: Re: [Question] Why doesn't PCIe hotplug work for Q35 machine?
> > 
> > On Tue, Aug 19, 2014 at 06:25:56AM +0000, Gonglei (Arei) wrote:
> > > > >> Subject: Re: [Question] Why doesn't PCIe hotplug work for Q35 
> > > > >> machine?
> > > > >>
> > > > >> On Sun, 2014-08-17 at 13:00 +0200, Michael S. Tsirkin wrote:
> > > > >>> On Fri, Aug 15, 2014 at 07:33:29AM +0000, Gonglei (Arei) wrote:
> > > > >>>> Hi,
> > > > >>>>
> > > > >>>> I noticed that the qemu-2.1 release change log says
> > > > >>>> " PCIe: Basic hot-plug/hot-unplug support for Q35 machine."
> > > > >>>> Then I tested the hotplug function of Q35, but it failed,
> > > > >>>> and I got the following dmesg log in the guest OS:
> > > > >>>>
> > > > >>>> [ 159.035250] pciehp 0000:05:00.0:pcie24: Button pressed on Slot(0-4)
> > > > >>>> [ 159.035274] pciehp 0000:05:00.0:pcie24: Card present on Slot(0-4)
> > > > >>>> [ 159.036517] pciehp 0000:05:00.0:pcie24: PCI slot #0-4 - powering on due to button press.
> > > > >>>> [ 159.188049] pciehp 0000:05:00.0:pcie24: Failed to check link status
> > > > >>>> [ 159.201968] pciehp 0000:05:00.0:pcie24: Card not present on Slot(0-4)
> > > > >>>> [ 159.202529] pciehp 0000:05:00.0:pcie24: Already disabled on Slot(0-4)
> > > > >>>>
> > > > >>>> Steps of testing:
> > > > >>>>
> > > > >>>> #1. QEMU version:
> > > > >>>>
> > > > >>>>    The latest master tree source.
> > > > >>>>
> > > > >>>> #2. Command line:
> > > > >>>>
> > > > >>>> ./qemu-system-x86_64 -enable-kvm -m 2048 -machine q35 \
> > > > >>>>   -device ide-drive,bus=ide.2,drive=MacHDD \
> > > > >>>>   -drive id=MacHDD,if=none,file=/mnt/sdb/gonglei/image/redhat_q35.img \
> > > > >>>>   -monitor stdio -vnc :10 -readconfig ../docs/q35-chipset.cfg
> > > > >>>> QEMU 2.0.93 monitor - type 'help' for more information
> > > > >>>> (qemu) device_add virtio-net-pci,id=nic2,bus=pcie-switch-downstream-port-1-1,addr=1.0
> > > > >>>
> > > > >>> I don't think you can use any slot except slot 0 for pci express.
> > > > >
> > > > > OK. Does the PCIe specification say that?
> > > > > I would appreciate it if you could explain more.
> > > >
> > > > The closest I could find is in "7.3. Configuration Transaction
> > > > Rules"/"7.3.1. Device Number":
> > > >
> > > > With non-ARI Devices, PCI Express components are restricted to
> > > > implementing a single Device Number on their primary interface (Upstream
> > > > Port) [...] Downstream Ports that do not have ARI Forwarding enabled
> > > > must associate only Device 0 with the device attached to the Logical Bus
> > > > representing the Link from the Port. Configuration Requests
> > > > targeting the Bus Number associated with a Link specifying Device Number
> > > > 0 are delivered to the device attached to the Link; Configuration
> > > > Requests specifying all other Device Numbers (1-31)
> > > > must be terminated by the Switch Downstream Port or the Root Port with
> > > > an Unsupported Request Completion Status (equivalent to Master Abort in
> > > > PCI).
> > > >
> > > Thanks a lot, Paolo.
> > > And I found another issue: when cold-plugging a device that doesn't use
> > > slot 0, the PCIe device also can't be found in the guest OS.
> > >
> > > So, I have some questions and ideas:
> > >
> > > 1. Does QEMU support ARI Forwarding for PCIe at present? If yes, how to
> > > enable it?
> > 
> > What do you mean by forwarding?
> 
> Just ARI support, as described in the PCIe spec.
> 
> > What would you like to do?
> 
> I just want to add some checks, because this confused me,
> and probably other people too, IMHO.
> 
> > We do have code to emulate ARI, I don't think many people
> > tested it so it's likely incomplete.
> > 
> Yes, as you have said (pcie_ari_init).

I have actually been playing around with this a bit in the context of
SR/IOV, and have some patches waiting to be posted. I haven't had any
success with the downstream switch - as you indicate, pieces may be
missing - but in my case, using the ioh3420 root port worked fine after
setting the ARI forwarding capability and a few other minor fixes that
may even be considered trivial patches. I will get my act together,
rebase these, and post them in a separate mail.

Basically I am able to run and hotplug an ARI-capable device on any root
port. So instead of using the downstream switch, you could just add
another ioh3420 for your next device, if that suits your needs.

Note that there are two places to add "ARI support", one is the
forwarding capability of bridges/switches, the other is the device's
capability to be an ARI device, represented by the ARI PCIe capability.

> > > 2. If not, we should add some checks for PCIe root ports and downstream
> > > ports, and meanwhile add explanatory documentation.
> > 
> > You want an attempt to add a device at slot != 0 to report an error?
> > We can do this if the device at slot 0 does not have ARI support.
> > Seems like a low-priority issue, I think.

If I understand the spec right, it is devices in slots > 0 that are
limited: each link may only expose a single device (device 0), even if
that device has more than 8 functions and supports ARI. Configuration
requests to any other device number should produce a master abort. But
the better solution in my view would be just to implement ARI forwarding.

Knut

> Hmm. I have posted a patch, as you have seen. :)
> 
> > > 3. Those checks should be added at the common code level, covering both
> > > hotplug and coldplug.
> > 
> > Generally it's not clear how we want to support hotplug for
> > multifunction devices. One way is to add functions != 0 first, then
> > add function 0 and notify the guest.
> > If so, then of course we can't check such things ...
> > If so, then of course we can't check things ...
> > 
> Actually, I just check whether the device number is 0 or not; functions
> are not included.
> 
> BTW, I have tested a scenario: 
> 
> Command line:
> 
> ./qemu-system-x86_64 -enable-kvm -m 2048 -machine q35 \
>   -device ide-drive,bus=ide.2,drive=MacHDD \
>   -drive id=MacHDD,if=none,file=/mnt/sdb/gonglei/image/redhat_q35.img \
>   -monitor stdio -vnc :10 -readconfig ../docs/q35-chipset.cfg \
>   -device virtio-net-pci,id=nic1,bus=pcie-switch-downstream-port-1-1,addr=0.1    # coldplug slot 0 function 1
> QEMU 2.1.50 monitor - type 'help' for more information
> (qemu) device_add virtio-net-pci,id=nic2,bus=pcie-switch-downstream-port-1-1,addr=0.0    # hotplug slot 0 function 0
> (qemu) info network
> hub 0
>  \ user.0: index=0,type=user,net=10.0.2.0,restrict=off
>  \ e1000.0: index=0,type=nic,model=e1000,macaddr=52:54:00:12:34:56
> nic2: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:58
> (qemu)
> 
> And in the end, the guest OS only recognized nic2.
> 
> Best regards,
> -Gonglei
> 




