Re: [Qemu-devel] [RFC PATCH v3 1/3] vGPU Core driver


From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC PATCH v3 1/3] vGPU Core driver
Date: Thu, 5 May 2016 12:07:11 +0000

> From: Kirti Wankhede [mailto:address@hidden]
> Sent: Thursday, May 05, 2016 6:45 PM
> 
> 
> On 5/5/2016 2:36 PM, Tian, Kevin wrote:
> >> From: Kirti Wankhede
> >> Sent: Wednesday, May 04, 2016 9:32 PM
> >>
> >> Thanks Alex.
> >>
> >>  >> +config VGPU_VFIO
> >>  >> +    tristate
> >>  >> +    depends on VGPU
> >>  >> +    default n
> >>  >> +
> >>  >
> >>  > This is a little bit convoluted; it seems like everything added in
> >>  > this patch is vfio-agnostic, and it doesn't necessarily care what the
> >>  > consumer is.  That makes me think we should only be adding CONFIG_VGPU
> >>  > here, and it should not depend on CONFIG_VFIO or enable
> >>  > CONFIG_VGPU_VFIO.  The middle config entry is also redundant with the
> >>  > first; just move the default line up to the first and remove the rest.
> >>
> >> CONFIG_VGPU doesn't directly depend on VFIO; CONFIG_VGPU_VFIO does.
> >> But devices created by the VGPU core module need a driver to manage
> >> them, and CONFIG_VGPU_VFIO is the driver that will manage vgpu
> >> devices. So I think CONFIG_VGPU_VFIO should be enabled by CONFIG_VGPU.
> >>
> >> This would look like:
> >> menuconfig VGPU
> >>      tristate "VGPU driver framework"
> >>      select VGPU_VFIO
> >>      default n
> >>      help
> >>          VGPU provides a framework to virtualize GPUs without the
> >>          SR-IOV capability.
> >>          See Documentation/vgpu.txt for more details.
> >>
> >>          If you don't know what to do here, say N.
> >>
> >> config VGPU_VFIO
> >>      tristate
> >>      depends on VGPU
> >>      depends on VFIO
> >>      default n
> >>
> >
> > There could be multiple drivers operating VGPU. Why do we restrict
> > it to VFIO here?
> >
> 
> VGPU_VFIO uses VFIO APIs, so it depends on VFIO.
> I think that since there is no driver other than VGPU_VFIO for vGPU
> devices, we should keep the default selection of VGPU_VFIO on VGPU.
> Maybe in the future, if another driver is added to operate vGPU
> devices, the default selection can be removed.
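
For illustration, if another driver does show up later, dropping the
default selection could be as simple as making VGPU_VFIO user-visible.
A minimal sketch based on the snippet above (my own rewrite, not taken
from the patch):

menuconfig VGPU
    tristate "VGPU driver framework"
    default n
    help
        VGPU provides a framework to virtualize GPUs without the
        SR-IOV capability.

config VGPU_VFIO
    tristate "VFIO bus driver for VGPU devices"
    depends on VGPU && VFIO
    default VGPU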

What's your plan to support Xen here?

> 
> >>  >> +create_attr_error:
> >>  >> +      if (gpu_dev->ops->vgpu_destroy) {
> >>  >> +              int ret = 0;
> >>  >> +              ret = gpu_dev->ops->vgpu_destroy(gpu_dev->dev,
> >>  >> +                                               vgpu_dev->uuid,
> >>  >> +                                               vgpu_dev->vgpu_instance);
> >>  >
> >>  > Unnecessary initialization, and we don't do anything with the result.
> >>  > The comment below indicates that a missing vgpu_destroy means the
> >>  > vendor doesn't support unplug, but doesn't that break our error
> >>  > cleanup path here?
> >>  >
> >>
> >> Comment about vgpu_destroy:
> >> If the VM is running and vgpu_destroy is called,
> >> that means the vGPU is being hot-unplugged. Return
> >> an error if the VM is running and the graphics
> >> driver doesn't support vGPU hotplug.
> >>
> >> It's the GPU driver's responsibility to check whether the VM is
> >> running and return accordingly. This is the vgpu creation path; the
> >> vGPU device would be hotplugged into the VM on vgpu_start.
> >
> > How does the GPU driver know whether the VM is running? The VM is
> > managed by KVM here.
> >
> > Maybe it's clearer to say whether the vGPU is busy, meaning some
> > workload has been loaded onto the vGPU. That's something the GPU
> > driver can tell.
> >
> 
> The GPU driver can detect this based on the resources allocated for
> the VM via vgpu_create/vgpu_start.

Yes, in that case the GPU driver only knows that a vGPU is in use, not
who is using it (today it's a VM; in the future it could be a
container). Anyway, my point is just not to assume a VM and add
limitations where they aren't necessary. :-)
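
For instance, the vendor driver could express "busy" like this
(purely illustrative -- the structure and helper names below are
invented for this sketch, not taken from the patch):

static int my_vgpu_destroy(struct pci_dev *dev, uuid_le uuid,
                           uint32_t vgpu_instance)
{
        struct my_vgpu *vgpu = my_vgpu_lookup(uuid, vgpu_instance);

        if (!vgpu)
                return -ENODEV;

        /* "Busy" == resources committed via vgpu_create/vgpu_start,
         * regardless of whether the user is a VM or a container. */
        if (vgpu->resources_committed)
                return -EBUSY;

        my_vgpu_free(vgpu);
        return 0;
}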

> 
> >>
> >>  >> + * @vgpu_bar_info:            Called to get BAR size and flags of
> >>  >> + *                            vGPU device.
> >>  >> + *                            @vdev: vgpu device structure
> >>  >> + *                            @bar_index: BAR index
> >>  >> + *                            @bar_info: output, returns size and
> >>  >> + *                            flags of requested BAR
> >>  >> + *                            Returns integer: success (0) or
> >>  >> + *                            error (< 0)
> >>  >
> >>  > This is called bar_info, but the bar_index is actually the vfio region
> >>  > index and things like the config region info is being overloaded
> >>  > through it.  We already have a structure defined for getting a generic
> >>  > region index, why not use it?  Maybe this should just be
> >>  > vgpu_vfio_get_region_info.
> >>  >
> >>
> >> Ok. Will do.
> >
> > As you commented earlier, the GPU driver is required to provide the
> > config space (which I agree with), so what's the point of introducing
> > another BAR-specific structure? VFIO can use @write to get BAR
> > information from the vgpu config space, just like it's done on a
> > physical device today.
> >
> 
> It is required not only for the size, but also to fetch the flags.
> Region flags could be a combination of:
> 
> #define VFIO_REGION_INFO_FLAG_READ      (1 << 0) /* Region supports read */
> #define VFIO_REGION_INFO_FLAG_WRITE     (1 << 1) /* Region supports write */
> #define VFIO_REGION_INFO_FLAG_MMAP      (1 << 2) /* Region supports mmap */
> 
> Thanks,
> Kirti.

That makes sense.
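
Reusing the existing uapi structure would then look something like the
below -- just a sketch on my side (the callback name follows Alex's
suggestion; the ops structure is abbreviated):

struct gpu_device_ops {
        /* ... */
        int (*vgpu_vfio_get_region_info)(struct vgpu_device *vdev,
                                         struct vfio_region_info *info);
};

The vendor driver fills info->size, info->offset and info->flags
(VFIO_REGION_INFO_FLAG_READ/WRITE/MMAP) for the given info->index, so
config space and BARs go through one generic call.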

Thanks
Kevin


