
Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
Date: Thu, 11 Oct 2012 16:11:07 +0200

On Mon, Oct 08, 2012 at 05:54:58PM -0600, Alex Williamson wrote:
> On Tue, 2012-10-09 at 00:44 +0200, Michael S. Tsirkin wrote:
> > On Mon, Oct 08, 2012 at 04:09:48PM -0600, Alex Williamson wrote:
> > > On Mon, 2012-10-08 at 23:50 +0200, Michael S. Tsirkin wrote:
> > > > On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> > > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > > > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > > > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > > > > > Michael, Jan,
> > > > > > > > > 
> > > > > > > > > Any comments on these?  I'd like to make the PCI changes
> > > > > > > > > before I update vfio-pci to make use of the new resampling
> > > > > > > > > irqfd in kvm.  We don't have anyone officially listed as
> > > > > > > > > maintainer of pci-assign since it's been moved to qemu.  I
> > > > > > > > > could include the pci-assign patches in my tree if you
> > > > > > > > > prefer.  Thanks,
> > > > > > > > > 
> > > > > > > > > Alex
> > > > > > > > 
> > > > > > > > Patches themselves look fine, but I'd like to
> > > > > > > > better understand why we want the INTx fallback.
> > > > > > > > Isn't it easier to add intx routing support?
> > > > > > > 
> > > > > > > vfio-pci can work with or without intx routing support.  Its
> > > > > > > presence is just one requirement to enable kvm-accelerated intx
> > > > > > > support.  Regardless of whether it's easy or hard to implement
> > > > > > > intx routing in a given chipset, I currently can't probe for it
> > > > > > > and make useful decisions about whether or not to enable kvm
> > > > > > > support without potentially hitting an assert.  It's arguable
> > > > > > > how important intx acceleration is for specific applications,
> > > > > > > so while I'd like all chipsets to implement it, I don't know
> > > > > > > that it should be a gating factor to chipset integration.
> > > > > > > Thanks,
> > > > > > > 
> > > > > > > Alex
> > > > > > 
> > > > > > Yes, but there's nothing kvm-specific in the routing API,
> > > > > > and IIRC it actually works fine without kvm.
> > > > > 
> > > > > Correct, but intx routing isn't very useful without kvm.
> > > > > 
> > > > > > As I see it, if some chipset does not expose it, that's a bug, and
> > > > > > the reason for the lack of support is that no one cares about
> > > > > > supporting device assignment there.
> > > > > 
> > > > > Should we not have a more robust response to bugs than to kill the VM?
> > > > > Especially when it's as trivial as using the non-accelerated intx mode
> > > > > (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> > > > > lazy.
> > > > 
> > > > This API has nothing to do with acceleration that I can see.
> > > 
> > > Ok, let's build on that...
> > > 
> > > > > > So this API is not something devices should probe for.
> > > > > > How about just assuming it works?
> > > > > 
> > > > > If that's the case, why test for any capabilities?  Let's just assume
> > > > > everything is there and litter the code with asserts rather than
> > > > > robustly deal with errors. </sarcasm>
> > > > > 
> > > > > > Otherwise, you are adding code and API that will become dead code
> > > > > > when everyone supports the required API.
> > > > > 
> > > > > So q35 is going to be the last ever chipset?  It may be the next
> > > > > one to implement intx routing, but hopefully not the last.
> > > > 
> > > > Everyone should just implement the same APIs.  This was exactly why
> > > > I asked Jan to change routing so everyone does the same thing,
> > > > instead of this bolted-on hack just for assignment.
> > > 
> > > Do you have any idea how to implement intx routing on a g3beige
> > > platform?  Me neither, but vfio-pci works just fine on it if I don't
> > > call pci_device_route_intx_to_irq on that platform.  I could test kvm
> > > support before calling it, but we both agree that intx routing isn't
> > > tied to kvm support, same for kvm irq chip support.  Now I'm in a
> > > pickle over what I can test for to see if intx routing is in place
> > > that won't break some day if we run g3beige on a ppc host w/ kvm
> > > acceleration, and heaven forbid that it should ever implement
> > > something we call an irqchip.
> > > 
> > > > I merged this anyway so we can stop maintaining two trees, which is
> > > > even more painful.
> > > > 
> > > > > We're talking about this:
> > > > > 
> > > > >  hw/pci.c |    8 ++++++--
> > > > >  hw/pci.h |    1 +
> > > > >  2 files changed, 7 insertions(+), 2 deletions(-)
> > > > > 
> > > > > Is this somehow tipping us over the edge and creating an
> > > > > unmaintainable API?
> > > > 
> > > > 
> > > > One has to draw a line in the sand somewhere, and I think it's a good
> > > > idea to stop now before this metastasizes all over the code.
> > > 
> > > I think the API is broken w/o this, so...
> > > 
> > > > > Does it somehow prevent a driver from being just as lazy and doing
> > > > > assert(intx.mode != PCI_INTX_NOROUTE)?
> > > > 
> > > > What does this return code mean? How can there be no route?
> > > 
> > > The function returns PCIINTxRoute, so I can't very well return -1 or
> > > -ENODEV if we don't know how to generate the route.  Obviously the route
> > > exists somewhere in the depths of qemu_irq, but we don't know how to get
> > > it.  Therefore, we return No Route.
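A minimal sketch of the probe-and-fall-back usage being argued for here,
assuming the PCI_INTX_NOROUTE mode this series proposes; the function and
variable names are illustrative, not the actual vfio-pci code:

    /* Sketch: ask the chipset for the INTx pin's route and fall back
     * gracefully when routing is unimplemented, instead of asserting.
     * PCI_INTX_NOROUTE is the new mode proposed in this series; the
     * other modes and pci_device_route_intx_to_irq() are existing API. */
    static bool intx_route_usable(PCIDevice *pdev, int pin)
    {
        PCIINTxRoute route = pci_device_route_intx_to_irq(pdev, pin);

        if (route.mode == PCI_INTX_NOROUTE) {
            /* Chipset can't tell us the route (e.g. g3beige): keep the
             * QEMU-emulated INTx path rather than killing the VM. */
            return false;
        }
        /* On success, route.irq is the GSI that could back a kvm irqfd. */
        return route.mode == PCI_INTX_ENABLED;
    }
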
> > 
> > In other words it means there's a chipset bug.  Why would
> > a device want to assert on this?  Let the chipset assert where
> > there's a chance to debug it.
> 
> Is it a chipset bug to support a feature that has no value on that
> platform?  Is it useful to assert on a known unimplemented feature?
> I say no to both.

The point all along was that the assert is a temporary measure to catch
missing code.  You are basically saying this API has no value without
level irqfd support in kvm, so vfio should just limit this call to that
condition.  Then, when people add kvm level irqfd on ppc and this
assert suddenly starts triggering, it will serve its intended purpose
and alert us that the API needs to be implemented there as well.


> > >  It's far more obvious than trying
> > > to figure out what INVERTED means.
> > > 
> > > > > The API is obviously not complete as is if I have to assume
> > > > > it's there, call into it, and hope for the best.  Thanks,
> > > > > 
> > > > > Alex
> > > > 
> > > > Why can't all chipsets implement this?  You are just working
> > > > around broken chipsets in your device; this is wrong.
> > > 
> > > Like I asked, do you know how to make intx routing work on g3beige,
> > > or define a test that doesn't pull in unrelated features to avoid
> > > calling it there?  Implementing intx routing on a chipset that has
> > > no hope of qemu bypass is wasted effort, therefore we need an API
> > > that allows us to test for it.  Thanks,
> > > 
> > > Alex
> > > 
> > 
> > And this is why intx API is a wrong thing to keep long term.
> > We end up with code like
> >     if (fast thing works) {
> >             do fast thing
> >     } else
> >             do slow thing
> > in many devices, which is not where I want the complexity
> > to be; I want to keep it in the core.
> >
> > I plan to rework this, to make all devices do something like what Avi's
> > memory API does, flattening the hierarchy at registration time, and I
> > prefer not to add more dead code that will need to be removed.
> 
> While I agree qemu_irq needs a re-write, I have a hard time seeing
> anything other than device assignment that needs this particular
> feature, so I don't think there are "many devices" that will be doing
> the above.  Even if there were, it's pretty common to test features and
> enable fast paths based on availability.

Yes, but this API has nothing to do with a fast path.  It is simply
missing code; the assert is there exactly so we can catch it.

>  So long as there are easy ways
> to test both paths, which there are with vfio-pci, I don't see a problem
> with that.
> > Is there any rush to use vfio on g3beige?  If yes, I could merge
> > something like this patch as a temporary measure, but I prefer not to.
> 
> Yes, I already use it as a non-x86 test platform and I'd like not to
> break it or create tests on unrelated features to avoid intx routing
> code.  I'd also like to enable vfio-pci on any other platforms that
> support PCI and don't want to deal with this on every chipset.  Thanks,
> 
> Alex

Actually, if (kvm_irqchip && kvm_level_irq) looks like more or less
exactly what you need ATM.  I would go for this for now; I understand
you think this is ugly, but the alternatives look ugly too.
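
As a rough sketch of that suggestion, reading "kvm_irqchip" and
"kvm_level_irq" as the existing kvm_irqchip_in_kernel() and
kvm_irqfds_enabled() helpers (that mapping is an assumption on my part):

    /* Sketch of the suggested condition: only query INTx routing when
     * the kvm pieces that make the route useful are actually present.
     * Mapping "kvm_level_irq" to kvm_irqfds_enabled() is an assumption. */
    if (kvm_irqchip_in_kernel() && kvm_irqfds_enabled()) {
        PCIINTxRoute route = pci_device_route_intx_to_irq(pdev, intx_pin);
        /* ... wire route.irq up to a level-triggered irqfd ... */
    }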


-- 
MST


