qemu-devel

Re: [Qemu-devel] [RFC PATCH v4 1/3] Mediated device Core driver


From: Alex Williamson
Subject: Re: [Qemu-devel] [RFC PATCH v4 1/3] Mediated device Core driver
Date: Thu, 26 May 2016 08:06:07 -0600

On Thu, 26 May 2016 14:33:39 +0530
Kirti Wankhede <address@hidden> wrote:

> Thanks Alex.
> 
> I'll consider all the nits and fix those in next version of patch.
> 
> More below:
> 
> On 5/26/2016 4:09 AM, Alex Williamson wrote:
> > On Wed, 25 May 2016 01:28:15 +0530
> > Kirti Wankhede <address@hidden> wrote:
> >  
> 
> ...
> 
> >> +
> >> +config MDEV
> >> +    tristate "Mediated device driver framework"
> >> +    depends on VFIO
> >> +    default n
> >> +    help
> >> +        MDEV provides a framework to virtualize device without
> >> +        SR-IOV cap
> >> +        See Documentation/mdev.txt for more details.  
> >
> > I don't see that file anywhere in this series.  
> 
> Yes, I missed this file in this patch; I'll add it in the next version.
> Since the mdev module has moved into the vfio directory, should I place
> this file in the vfio directory, Documentation/vfio/mdev.txt, or keep the
> mdev documentation within vfio.txt itself?

Maybe just call it vfio-mediated-device.txt

> >> +  if (phy_dev) {
> >> +          mutex_lock(&phy_devices.list_lock);
> >> +
> >> +          /*
> >> +          * If vendor driver doesn't return success that means vendor
> >> +          * driver doesn't support hot-unplug
> >> +          */
> >> +          if (phy_dev->ops->destroy) {
> >> +                  if (phy_dev->ops->destroy(phy_dev->dev, mdevice->uuid,
> >> +                                            mdevice->instance)) {
> >> +                          mutex_unlock(&phy_devices.list_lock);
> >> +                          return;
> >> +                  }
> >> +          }
> >> +
> >> +          mdev_remove_attribute_group(&mdevice->dev,
> >> +                                      phy_dev->ops->mdev_attr_groups);
> >> +          mdevice->phy_dev = NULL;
> >> +          mutex_unlock(&phy_devices.list_lock);  
> >
> > Locking here appears arbitrary, how does the above code interact with
> > phy_devices.dev_list?
> >  
> 
> Sorry for not being clear about phy_devices.list_lock; I probably
> shouldn't have named it 'list_lock'. This lock also synchronizes
> register_device & unregister_device and the physical-device-specific
> callbacks: supported_config, create, destroy, start and shutdown.
> Although supported_config, create and destroy are per-phy_device
> callbacks, start and shutdown can refer to multiple phy_devices
> indirectly when there are multiple mdev devices of the same type on
> different physical devices. There could be a race between the start
> callback and destroy & unregister_device. I'm revisiting this lock and
> will look into using a per-phy_device lock for the phy_device-specific
> callbacks.
> 
> 
> >> +struct mdev_device {
> >> +  struct kref             kref;
> >> +  struct device           dev;
> >> +  struct phy_device       *phy_dev;
> >> +  struct iommu_group      *group;
> >> +  void                    *iommu_data;
> >> +  uuid_le                 uuid;
> >> +  uint32_t                instance;
> >> +  void                    *driver_data;
> >> +  struct mutex            ops_lock;
> >> +  struct list_head        next;
> >> +};  
> >
> > Could this be in the private header?  Seems like this should be opaque
> > outside of mdev core.
> >  
> 
> No, this structure is passed to the vendor driver in the mediated
> device callback functions so that the vendor driver can identify the
> mdev device, similar to the pci_dev structure in the PCI bus subsystem.
> (I'll remove kref, which is not being used at all.)

Personally I'd prefer to see more use of reference counting and less
locking, especially since the locking is mostly ineffective in this
version.

> >> + * @read:         Read emulation callback
> >> + *                        @mdev: mediated device structure
> >> + *                        @buf: read buffer
> >> + *                        @count: number bytes to read
> >> + *                        @address_space: specifies for which address
> >> + *                        space the request is: pci_config_space, IO
> >> + *                        register space or MMIO space.  
> >
> > Seems like I asked before and it's no more clear in the code, how do we
> > handle multiple spaces for various types?  ie. a device might have
> > multiple MMIO spaces.
> >  
> >> + *                        @pos: offset from base address.  
> 
> Sorry, I updated the code but missed updating the comment here.
> pos = base_address + offset
> (it's not 'pos' anymore; I'll rename it to addr)
> 
> Since the vendor driver is aware of the base addresses and sizes of the
> multiple MMIO spaces, it can identify the MMIO space based on addr.

Why not let the vendor driver provide vfio_region_info directly,
including the offset within the device file descriptor, and then have the
mediated device core simply pass reads/writes through without caring what
the address space is?  Thanks,

Alex
 
> >> +/*
> >> + * Physical Device
> >> + */
> >> +struct phy_device {
> >> +  struct device                   *dev;
> >> +  const struct phy_device_ops     *ops;
> >> +  struct list_head                next;
> >> +};  
> >
> > I would really like to be able to use the mediated device interface to
> > create a purely virtual device, is the expectation that my physical
> > device interface would create a virtual struct device which would
> > become the parent and control point in sysfs for creating all the mdev
> > devices? Should we be calling this a host_device or mdev_parent_dev in
> > that case since there's really no requirement that it be a physical
> > device?  
> 
> Makes sense. I'll rename it to parent_device.
> 
> Thanks,
> Kirti.
> 



