From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC PATCH v3 1/3] vGPU Core driver
Date: Thu, 5 May 2016 08:58:54 +0000

> From: Alex Williamson [mailto:address@hidden]
> Sent: Thursday, May 05, 2016 12:57 AM
> 
> On Wed, 4 May 2016 02:45:59 +0000
> "Tian, Kevin" <address@hidden> wrote:
> 
> > > From: Alex Williamson
> > > Sent: Wednesday, May 04, 2016 6:44 AM
> > >
> > > > diff --git a/drivers/vgpu/Kconfig b/drivers/vgpu/Kconfig
> > > > new file mode 100644
> > > > index 0000000..792eb48
> > > > --- /dev/null
> > > > +++ b/drivers/vgpu/Kconfig
> > > > @@ -0,0 +1,21 @@
> > > > +
> > > > +menuconfig VGPU
> > > > +    tristate "VGPU driver framework"
> > > > +    depends on VFIO
> > > > +    select VGPU_VFIO
> > > > +    help
> > > > +        VGPU provides a framework to virtualize GPUs without the SR-IOV
> > > > +        capability. See Documentation/vgpu.txt for more details.
> > > > +
> > > > +        If you don't know what to do here, say N.
> > > > +
> > > > +config VGPU
> > > > +    tristate
> > > > +    depends on VFIO
> > > > +    default n
> > > > +
> > > > +config VGPU_VFIO
> > > > +    tristate
> > > > +    depends on VGPU
> > > > +    default n
> > > > +
> > >
> > > This is a little bit convoluted; it seems like everything added in this
> > > patch is vfio-agnostic, it doesn't necessarily care what the consumer
> > > is.  That makes me think we should only be adding CONFIG_VGPU here, and
> > > it should not depend on CONFIG_VFIO or enable CONFIG_VGPU_VFIO.
> > > The middle config entry is also redundant with the first; just move the
> > > default line up to the first entry and remove the rest.
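
For concreteness, the collapsed entry Alex suggests would look roughly like
this (my sketch, not the actual respin):

    menuconfig VGPU
        tristate "VGPU driver framework"
        default n
        help
          VGPU provides a framework to virtualize GPUs without the SR-IOV
          capability. See Documentation/vgpu.txt for more details.

          If you don't know what to do here, say N.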
> >
> > Agree. Removing such a dependency also benefits other hypervisors if
> > VFIO is not used.
> >
> > Alex, there is one idea on which I'd like to hear your comments. Looking at
> > the whole series, we can see that the majority of the logic (maybe I cannot
> > say 100%) is GPU-agnostic. The frameworks in VFIO and the vGPU core are
> > actually neutral to the underlying device type, and could e.g. be easily
> > applied to a NIC too if a similar technology is developed there.
> >
> > Do you think we should make the framework not GPU-specific now (mostly a
> > naming change), or continue with the current style and change it later,
> > only when there is a real implementation on a different device?
> 
> Yeah, I see that too and I made a bunch of comments in patch 3 that
> we're not doing anything vGPU specific and we should be careful about
> assuming the user for the various interfaces.  In patch 1, we are
> fairly v/GPU specific because we're dealing with how vGPUs are created
> from the physical GPU.  Maybe the interface is general, maybe it's not,
> it's hard to say.  Starting with patch 2 though, we really shouldn't
> know or care what the device is beyond a PCI compatible device.  We're
> just trying to create a vfio bus driver compatible with vfio-pci and
> offload enough generic operations so that we don't need to pass
> everything back to the vendor driver.  Patch 3 of course should be
> completely device agnostic, we should only care that the vfio backend
> provides mediation of the device, so an iommu is not required.  It may
> be too much of a rathole to try to completely generalize the interface
> at this point, but let's certainly try not to let vgpu specific ideas
> spread beyond where we need.  Thanks,
> 
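
For concreteness, the vfio bus driver pattern you describe would look roughly
like this against today's vfio interface (vfio_add_group_dev() and struct
vfio_device_ops); the vgpu-flavored names below are hypothetical:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/vfio.h>

/* Sketch only: register a mediated device with the vfio core, the same
 * way vfio-pci hands over a physical PCI device. */

static int vfio_vgpu_open(void *device_data)
{
        /* the vendor driver would set up virtual device state here */
        return 0;
}

static void vfio_vgpu_release(void *device_data)
{
}

static ssize_t vfio_vgpu_read(void *device_data, char __user *buf,
                              size_t count, loff_t *ppos)
{
        /* generic emulation lives here; anything device-specific is
         * forwarded to the vendor driver */
        return -EINVAL;
}

static const struct vfio_device_ops vfio_vgpu_dev_ops = {
        .name    = "vfio-vgpu",
        .open    = vfio_vgpu_open,
        .release = vfio_vgpu_release,
        .read    = vfio_vgpu_read,
};

static int vfio_vgpu_probe(struct device *dev)
{
        /* same pattern as vfio-pci's probe: hand the device to the
         * vfio core with our ops */
        return vfio_add_group_dev(dev, &vfio_vgpu_dev_ops, dev);
}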

Even for patch 1, the current implementation can apply to any PCI device
if we just replace vgpu with another name. There is nothing (or very little)
that is really GPU-specific. But I don't feel strongly about this point, since
I somewhat agree that without a second actual user, some abstractions here
may only make sense for vGPU. So... I'm just raising this thought to hear
your comments. :-)

btw, a curious question. I know you, Alex, have the final call on VFIO-specific
code. What about the vGPU core framework? It creates a new category under the
drivers directory (drivers/vgpu). Who else is required to review and ack
that part?

Thanks
Kevin



