

From: Frank Blaschka
Subject: Re: [Qemu-devel] [PATCH 2/2 RFC] s390x/pci: rework pci infrastructure modeling
Date: Tue, 17 Mar 2015 13:15:33 +0100
User-agent: Mutt/1.5.17 (2007-11-01)

On Tue, Mar 17, 2015 at 08:11:14AM +0100, Alexander Graf wrote:
> 
> 
> On 12.03.15 16:22, Michael S. Tsirkin wrote:
> > On Thu, Mar 12, 2015 at 09:59:59AM -0500, Alexander Graf wrote:
> >>
> >>
> >> On 12.03.15 08:16, Frank Blaschka wrote:
> >>> On Thu, Mar 12, 2015 at 11:50:02AM +0100, Frank Blaschka wrote:
> >>>> On Thu, Mar 12, 2015 at 11:03:50AM +0100, Michael S. Tsirkin wrote:
> >>>>> On Thu, Mar 12, 2015 at 10:54:24AM +0100, Frank Blaschka wrote:
> >>>>>> On Wed, Mar 11, 2015 at 06:42:34PM +0100, Michael S. Tsirkin wrote:
> >>>>>>> On Wed, Mar 11, 2015 at 03:38:44PM +0100, Frank Blaschka wrote:
> >>>>>>>> On Tue, Mar 10, 2015 at 03:26:23PM +0100, Michael S. Tsirkin wrote:
> >>>>>>>>> On Tue, Mar 10, 2015 at 02:03:34PM +0100, Frank Blaschka wrote:
> >>>>>>>>>> This patch changes the modeling of the s390 qemu pci
> >>>>>>>>>> infrastructure to better match the actual pci architecture
> >>>>>>>>>> defined by the real hardware.
> >>>>>>>>>>
> >>>>>>>>>> A pci host bridge like device (s390-pcihost) models the
> >>>>>>>>>> abstract view of the bare pci function. It provides s390
> >>>>>>>>>> specific configuration attributes (fid and uid) for the
> >>>>>>>>>> attached pci device. The host bridge restricts the pci bus to
> >>>>>>>>>> holding just one single pci device. Also we have to make the
> >>>>>>>>>> s390 pci host bridge hot pluggable.
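Roughly, the modeling described above would allow a configuration along the
following lines; the property and bus names here are only illustrative
guesses based on the description, not taken from the patch itself:

  # one s390-pcihost per pci function, carrying the s390-specific
  # fid/uid attributes; its bus holds exactly one device
  qemu-system-s390x ... \
      -device s390-pcihost,uid=1,fid=0x100,id=phb1 \
      -device virtio-net-pci,bus=phb1.0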
> >>>>>>>>>
> >>>>>>>>> This requirement is really because of the 1 device per bus
> >>>>>>>>> limitation, isn't it?
> >>>>>>>>> If you supported many devices per bus, you could use
> >>>>>>>>> hotplug there and there would be no need to support hotplug
> >>>>>>>>> of the host bridge.
> >>>>>>>>>
> >>>>>>>> Absolutely yes. Have you seen my first proposal?
> >>>>>>>> It basically exploits the normal pci bridge/bus/slot mechanism but
> >>>>>>>> needs a place to store s390 specific configuration attributes.
> >>>>>>>>
> >>>>>>>> The idea of a host bridge having these attributes and limiting the
> >>>>>>>> bus to one slot was an alternate design approach suggested by Alex.
> >>>>>>>>
> >>>>>>>> I like Alex's idea because:
> >>>>>>>> 1) It reflects pretty well the actual nature of the pci system in
> >>>>>>>>    real s390 hw
> >>>>>>>> 2) It does not create a somewhat "artificial" pci topology
> >>>>>>>>
> >>>>>>>
> >>>>>>> I'll have to re-read but here's a thought: use your patch but
> >>>>>>> remove host bridge hotplug support code.
> >>>>>>> Stick a standard bridge with shpc support in the single slot
> >>>>>>> behind your host bridge (existing pci-bridge-dev should do the trick,
> >>>>>>> though not many people use it, so you might
> >>>>>>> run into bugs, but fixing them is a good idea anyway).
> >>>>>>> You can instantiate it automatically like Marcel's patches do
> >>>>>>> for PXB.
> >>>>>> Still don't understand, so let me try to summarize in my own words;
> >>>>>> please correct me if I got something wrong:
> >>>>>>
> >>>>>> - create a standard host bridge
> >>>>>> - change the s390-pcihost to be a pci 2 pci bridge
> >>>>>
> >>>>> Actually I suggested simply adding a pci 2 pci bridge behind
> >>>>> s390-pcihost.
> >>>>>
> >>>>>> - now we can hotplug the s390-pcihost + hotplug a pci device to this
> >>>>>>   s390-pcihost using the standard pci hotplug mechanism
> >>>>>
> >>>>> My idea was to just hotplug a pci device behind the standard pci 2 pci
> >>>>> bridge. Don't support hotplugging the bridge itself or the
> >>>>> s390-pcihost itself.
> >>>>>
> >>>>>> - we keep the 1 slot limit on the s390-pcihost. We need a place to
> >>>>>>   store fid and uid information (see the mail thread on my first proposal)
> >>>>>
> >>>>> Yes.
> >>>>>
> >>>>>> - If we need more than 32 pci functions we have to extend the
> >>>>>>   primary pci bus via standard pci 2 pci bridges or add another
> >>>>>>   standard host bridge
> >>>>>>
> >>>>>> Is this your suggestion?
> >>>>>
> >>>>> Almost, clarifications above.
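In other words, the suggested topology keeps the s390-pcihost and its single
slot, puts a standard bridge into that slot, and does all hotplug behind the
standard bridge. A rough sketch (device ids, bus names and the addr value are
made up for illustration; in practice the bridge would be instantiated
automatically, as in Marcel's PXB patches):

  # cold-plugged: standard pci bridge in the host bridge's single slot
  -device s390-pcihost,id=phb0 \
  -device pci-bridge,chassis_nr=1,id=br0,bus=phb0.0

  # hotplug then uses the normal pci/shpc path behind the bridge
  (qemu) device_add virtio-net-pci,id=net1,bus=br0,addr=0x3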
> >>>>>
> >>>> OK, got your idea. Have to think about it and may do some prototyping. 
> >>>> THX!
> >>>>
> >>>
> >>> Hm, after thinking more about this I realized this is not going to work
> >>> for us. Remember we need a place to store the fid and uid attributes.
> >>> This place must be:
> >>> 1) uid/fid per pci device
> >>> 2) uid/fid in a hotpluggable device
> >>>
> >>> I have the feeling we are back at the beginning again. Although I liked
> >>> Alex's idea (host bridge containing uid/fid and having only 1 slot on
> >>> the bus), it looks like we end up at my first proposal, which does not
> >>> require any modification to the base pci/bus code.
> >>>
> >>> Thx to all of you for the discussion and suggestions.
> >>
> >> I disagree with the assessment. The reason mst was opposed to the
> >> one-phb-per-device implementation (which is the closest we can get to
> >> modeling things like real hardware FWIW) was that hotplug would work on
> >> the s390 level rather than the pci level. I don't see how your first
> >> proposal fixes that.
> >>
> >> Also Michael, PCI on s390 is very very special.
> > 
> > Yes, I'm trying to wrap my head around it all.
> > And is there hotplug support there on real hardware?
> 
> I quite frankly don't know. Frank?
>
Yes, there is, but it might be different from what you expect from
traditional PCI :-)
 
> >> You can't plug in
> >> anything that does not come from IBM. There are no PCIe connectors -
> >> instead you have IBM proprietary slots that only work with IBM approved
> >> devices. So things like "we can plug in a PCI bridge" simply don't work
> >> as well in that world.
> >>
> >>
> >> Alex
> > 
> > But interestingly, the usage example that Frank gave actually shows
> > e1000 and other non-IBM cards apparently working?
> > This kind of confuses me.
Sorry, I just used e1000 to give an example everybody can follow. Here is
the actual story:

1) In reality the only pci card I could get my hands on was an mlx4
2) The Linux kernel on s390 does not support fancy io stuff like VGA, USB,
   sound, ... in general
3) On s390, PCI has some restrictions like: no traditional IRQ (MSI/-X only),
   no IO BAR, no port IO, ...

Because of this you can attach an emulated card (like e1000) to qemu and the
guest will list it via lspci, but the card will not actually work. In the
end I had 2 working setups:

1) virtio-net-pci
2) vfio (pass-through with mlx4 backend on the host)
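For completeness, those two setups correspond to qemu options roughly like
the following (the vfio host address is just a placeholder for wherever the
mlx4 function sits on the host):

  # 1) paravirtual networking
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0

  # 2) pass-through of the mlx4 function, bound to vfio-pci on the host
  -device vfio-pci,host=0001:00:00.0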

> 
> This is probably just by coincidence / luck. IIRC there are only 1 or 2
> different cards you can buy from IBM that were ever tested on real
> hardware. But then again there is no BOCHS VGA adapter in real hardware
> either, yet it seems to work everywhere (except s390) ;). So the fact
> that e1000 works is not incredibly surprising, yet I wouldn't rely on it.
> 
> 
> Alex
>
Since I'm leaving the company in a couple of days I will not be able to bring
this to an end. That's why I want to hand over the discussion to my
colleague Hong Bo Li now. He has worked on s390 kvm/qemu pci development for
more than a year and should be able to take over seamlessly.

Thx Hong Bo!

Please give him the same kind support and help you provided to me. Also I
want to say thank you and goodbye. It was a pleasure to work with all of
you ...

Hope to see you again one day,

Frank 



