Re: [Qemu-devel] [PATCH v6 1/4] vfio: Mediated device Core driver


From: Neo Jia
Subject: Re: [Qemu-devel] [PATCH v6 1/4] vfio: Mediated device Core driver
Date: Mon, 15 Aug 2016 16:54:23 -0700
User-agent: Mutt/1.6.2 (2016-07-01)

On Mon, Aug 15, 2016 at 04:47:41PM -0600, Alex Williamson wrote:
> On Mon, 15 Aug 2016 12:59:08 -0700
> Neo Jia <address@hidden> wrote:
> 
> > On Mon, Aug 15, 2016 at 09:38:52AM +0000, Tian, Kevin wrote:
> > > > From: Kirti Wankhede [mailto:address@hidden]
> > > > Sent: Saturday, August 13, 2016 8:37 AM
> > > > 
> > > > 
> > > > 
> > > > On 8/13/2016 2:46 AM, Alex Williamson wrote:  
> > > > > On Sat, 13 Aug 2016 00:14:39 +0530
> > > > > Kirti Wankhede <address@hidden> wrote:
> > > > >  
> > > > >> On 8/10/2016 12:30 AM, Alex Williamson wrote:  
> > > > >>> On Thu, 4 Aug 2016 00:33:51 +0530
> > > > >>> Kirti Wankhede <address@hidden> wrote:
> > > > >>>
> > > > >>> This is used later by mdev_device_start() and mdev_device_stop() to 
> > > > >>> get
> > > > >>> the parent_device so it can call the start and stop ops callbacks
> > > > >>> respectively.  That seems to imply that all instances for a given
> > > > >>> uuid come from the same parent_device.  Where is that enforced?  I'm
> > > > >>> still having a hard time buying into the uuid+instance plan when it
> > > > >>> seems like each mdev_device should have an actual unique uuid.
> > > > >>> Userspace tools can figure out which uuids to start for a given
> > > > >>> user; I don't see much value in collecting them into instances
> > > > >>> within a uuid.
> > > > >>>  
> > > > >>
> > > > >> Initially we started the discussion with the VM_UUID+instance
> > > > >> suggestion, where the instance was introduced to support multiple
> > > > >> devices in a VM.
> > > > >
> > > > > The instance number was never required in order to support multiple
> > > > > devices in a VM; IIRC this UUID+instance scheme was to appease NVIDIA
> > > > > management tools which wanted to re-use the VM UUID by creating vGPU
> > > > > devices with that same UUID and therefore associate udev events to a
> > > > > given VM.  Only then does an instance number become necessary since 
> > > > > the
> > > > > UUID needs to be static for all vGPUs within a VM.  This has always felt
> > > > > like a very dodgy solution when we should probably just be querying
> > > > > libvirt to give us a device to VM association.  
> > > 
> > > Agree with Alex here. We'd better not assume that UUID will be a VM_UUID
> > > for mdev in the basic design. It's bound too tightly to the NVIDIA
> > > management stack.
> > > 
> > > I'm OK with giving enough flexibility to various upper-level management
> > > stacks, e.g. instead of the $UUID+INDEX style, would $UUID+STRING provide
> > > a better option, where either UUID or STRING could be optional? The upper
> > > management stack can choose its own policy to identify an mdev (see the
> > > sketch after the list below):
> > > 
> > > a) $UUID only, so each mdev is allocated with a unique UUID
> > > b) STRING only, which could be an index (0, 1, 2, ...) or any name
> > > (vgpu0, vgpu1, etc.)
> > > c) $UUID+STRING, where UUID could be a VM UUID, and STRING could be
> > > a numeric index
> > >   
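> > > For illustration only (create syntax as proposed earlier in this
> > > thread, identifiers hypothetical), the three policies could look like:
> > > 
> > > a) # echo "$UUID:params" > /sys/devices/../mdev_create
> > > b) # echo "vgpu0:params" > /sys/devices/../mdev_create
> > > c) # echo "$VM_UUID:0:params" > /sys/devices/../mdev_create
> > > 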
> > > > >  
> > > > >> 'mdev_create' creates a device and 'mdev_start' commits the
> > > > >> resources of all instances of similar devices assigned to a VM.
> > > > >>
> > > > >> For example, to create 2 devices:
> > > > >> # echo "$UUID:0:params" > /sys/devices/../mdev_create
> > > > >> # echo "$UUID:1:params" > /sys/devices/../mdev_create
> > > > >>
> > > > >> "$UUID-0" and "$UUID-1" devices are created.
> > > > >>
> > > > >> Commit resources for the above devices with a single 'mdev_start':
> > > > >> # echo "$UUID" > /sys/class/mdev/mdev_start
> > > > >>
> > > > >> Considering $UUID to be a unique UUID of a device, we don't need
> > > > >> 'instance', so 'mdev_create' would look like:
> > > > >>
> > > > >> # echo "$UUID1:params" > /sys/devices/../mdev_create
> > > > >> # echo "$UUID2:params" > /sys/devices/../mdev_create
> > > > >>
> > > > >> where $UUID1 and $UUID2 would be the mdev devices' unique UUIDs and
> > > > >> 'params' would be vendor-specific parameters.
> > > > >>
> > > > >> Device nodes would be created as "$UUID1" and "$UUID2"
> > > > >>
> > > > >> Then 'mdev_start' would be:
> > > > >> # echo "$UUID1, $UUID2" > /sys/class/mdev/mdev_start
> > > > >>
> > > > >> Similarly 'mdev_stop' and 'mdev_destroy' would be:
> > > > >>
> > > > >> # echo "$UUID1, $UUID2" > /sys/class/mdev/mdev_stop  
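> > > > >>
> > > > >> and, presumably mirroring 'mdev_create' on the parent device:
> > > > >>
> > > > >> # echo "$UUID1" > /sys/devices/../mdev_destroy
> > > > >> # echo "$UUID2" > /sys/devices/../mdev_destroy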
> > > > >
> > > > > I'm not sure a comma-separated list makes sense here; for both
> > > > > simplicity in the kernel and more fine-grained error reporting, we
> > > > > probably want to start/stop them individually.  Actually, why is it
> > > > > that we can't use the mediated device being opened and released to
> > > > > automatically signal to the backend vendor driver to commit and 
> > > > > release
> > > > > resources? I don't fully understand why userspace needs this 
> > > > > interface.  
> > > 
> > > There is a meaningful use of the start/stop interface, as required for
> > > live migration support. Such an interface allows the vendor driver to
> > > quiesce mdev activity on the source device before the mdev hardware
> > > state is snapshotted, and then resume mdev activity on the dest device
> > > after its state is recovered. Intel has implemented experimental live
> > > migration support in KVMGT (soon to release), based on the above two
> > > interfaces (plus another two to get/set mdev state).
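> > > 
> > > As an illustrative sequence only (the get/set state nodes are
> > > hypothetical; this thread only mentions that they exist), on the
> > > source we would quiesce and save:
> > > 
> > > # echo "$UUID" > /sys/class/mdev/mdev_stop
> > > # cat /sys/class/mdev/$UUID/state > state.img
> > > 
> > > and on the dest restore and resume:
> > > 
> > > # cat state.img > /sys/class/mdev/$UUID/state
> > > # echo "$UUID" > /sys/class/mdev/mdev_start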
> > >   
> > > > >  
> > > > 
> > > > For the NVIDIA vGPU solution we need to know all devices assigned to a
> > > > VM in one shot, so we can commit the resources of all of its vGPUs
> > > > along with some common resources.
> > > 
> > > Kirti, can you elaborate on the background of the above one-shot commit
> > > requirement? As stated, it's hard to understand.
> > > 
> > > As I replied in another mail, I really hope start/stop becomes a
> > > per-mdev attribute instead of a global one, e.g.:
> > > 
> > > echo "0/1" > /sys/class/mdev/12345678-1234-1234-1234-123456789abc/start
> > > 
> > > In many scenarios the user-space client may only want to talk to the
> > > mdev instance directly, w/o needing to contact its parent device. Again
> > > taking live migration as an example, I don't think QEMU wants to know
> > > the parent device of assigned mdev instances.
> > 
> > Hi Kevin,
> > 
> > Having a global /sys/class/mdev/mdev_start doesn't require anybody to
> > know the parent device. You can just do
> > 
> > echo "mdev_UUID" > /sys/class/mdev/mdev_start
> > 
> > or 
> > 
> > echo "mdev_UUID" > /sys/class/mdev/mdev_stop
> > 
> > without knowing the parent device.
> 
> That doesn't give an individual user the ability to stop and start
> their devices though, because giving a user write permission there also
> gives them permission to DoS other users by pumping arbitrary UUIDs
> into those files.  By placing start/stop per mdev, we get mdev-level
> granularity in granting start/stop privileges, e.g. something like:
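> 
> # chown user1 /sys/class/mdev/$UUID/start
> 
> (using the per-mdev attribute path suggested above) so that only user1
> can start that one device.  Really though, do we want QEMU fumbling
> around through sysfs or do we want an interface through the vfio API
> to perform start/stop?  Thanks,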

Hi Alex,

With the current sysfs proposal, I don't think QEMU needs to do anything
to manage the virtual device lifecycle. It is the upper-layer management
stack that is responsible for creating the virtual device and getting it
ready for consumers like QEMU.

In terms of the VFIO API, I assume we would basically loop through all
virtual devices inside QEMU's vfio/pci.c, for example, and call start or
stop on each individually, right?

If that is the case, it doesn't change much from the current design in how
we handle the "one-shot" start requirement, and we would be adding more
dependency on the VFIO API.

So, I think we should probably just focus on the sysfs interface and make
sure it works for us.
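
To make the lifecycle concrete (nodes as proposed earlier in this thread;
details still under discussion), the management stack would do something
like:

# echo "$UUID1:params" > /sys/devices/../mdev_create
# echo "$UUID2:params" > /sys/devices/../mdev_create
# echo "$UUID1, $UUID2" > /sys/class/mdev/mdev_start
(launch QEMU with the mdev devices assigned)
# echo "$UUID1, $UUID2" > /sys/class/mdev/mdev_stop

with QEMU itself never touching these nodes.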

Thanks,
Neo

> 
> Alex


