
Re: [Qemu-devel] [RFC PATCH] vfio: VFIO PCI driver for Qemu


From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC PATCH] vfio: VFIO PCI driver for Qemu
Date: Thu, 26 Jul 2012 18:59:16 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120615 Thunderbird/13.0.1

On 07/26/2012 05:56 PM, Alex Williamson wrote:
>> >> Let's use the same syntax as for kvm device assignment.  Then we can
>> >> fall back on kvm when vfio is not available.  We can also have an
>> >> optional parameter kernel-driver to explicitly select vfio or kvm.
>> > 
>> > This seems confusing to me, pci-assign already has options like
>> > prefer_msi, share_intx, and configfd that vfio doesn't.  I'm sure vfio
>> > will eventually get options that pci-assign won't have.  How is a user
>> > supposed to figure out what options are actually available from -device
>> > pci-assign,? 
>> 
>> Read the documentation.
> 
> And libvirt is supposed to parse the qemu-docs package matching the
> installed qemu binary package to figure out what's supported?

I was hoping that we could avoid any change in libvirt.
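
For illustration only, the syntax proposal quoted at the top would look roughly
like this on the command line (kernel-driver is just the suggested property
name, not an existing option; the device address is made up):

   # today, the backend is picked by the device name
   qemu-system-x86_64 ... -device pci-assign,host=01:00.0
   qemu-system-x86_64 ... -device vfio-pci,host=01:00.0

   # proposed: one device name, with an optional explicit backend
   qemu-system-x86_64 ... -device pci-assign,host=01:00.0,kernel-driver=vfio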

> 
>> > Isn't this the same as asking to drop all model specific
>> > devices and just use -device net,model=e1000... hey, we've been there
>> > before ;)  Thanks,
>> 
>> It's not.  e1000 is a guest visible feature. vfio and kvm assignment do
>> exactly the same thing, as far as the guest is concerned, just using a
>> different driver.  This is more akin to -device virtio-net,vhost=on|off
>> (where we also have a default and a fallback, which wouldn't make sense
>> for model=e1000).
> 
> I understand and agree with your desire to make this transparent from the
> user perspective, but I think the place to do that abstraction is
> libvirt.  The qemu command line is just the final step in a process that
> already needs to be aware of which backend will be used.  This is not
> simply a small tweak to the qemu options and now I'm using vfio.  It
> goes something like this:
> 
>    KVM                                  VFIO
> 1. Identify the assigned device        1. Identify the assigned device
> 2. Unbind from host driver             2. Identify the iommu group for the
>                                           device
> 3. Bind to pci-stub                    3. Evaluate all the devices for the
>                                           group
> 4. Launch qemu                         4. Unbind all devices in the group
>                                           from host drivers
>                                        5. Bind all devices in the group to
>                                           vfio-pci
>                                        6. Launch qemu

In the common case, on x86 (but I'm repeating myself), the iommu group
includes just one device, yes?  Could we make pci-stub an alias for the
corresponding vfio steps?

Though I generally dislike doing magic behind the user's back.  qemu, and
even more so the kernel, are low-level interfaces and should behave as
regularly as possible.
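
To make the comparison concrete, the VFIO column above corresponds roughly to
the following sysfs/qemu steps (a rough sketch only; the device address and
the vendor:device ID are placeholders, and details vary by kernel version):

   # 2. identify the iommu group of the device
   readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group

   # 3. see which other devices share that group
   ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices

   # 4. unbind each group member from its host driver
   echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind

   # 5. bind the group members to vfio-pci (by vendor:device ID)
   echo 8086 10b9 > /sys/bus/pci/drivers/vfio-pci/new_id

   # 6. launch qemu
   qemu-system-x86_64 ... -device vfio-pci,host=01:00.0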

> 
> I've actually already had a report from an early adopter that did
> everything under the VFIO list on the right, but happened to be
> using qemu-kvm and the -device pci-assign option and couldn't figure out
> what was going on.  Due to KVM's poor device ownership model, it was
> more than happy to bind to a device owned by vfio-pci.  Imagine the
> support questions we have to ask if we support both via pci-assign;

In fact we had the same experience with kvm being enabled or not.  We
have 'info kvm' for that.
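
Both checks are cheap, for what it's worth (the device address below is a
placeholder; 'info kvm' is the existing monitor command mentioned above):

   # on the host: which driver currently owns the device
   readlink /sys/bus/pci/devices/0000:01:00.0/driver

   # in the qemu monitor: whether kvm acceleration is in use
   (qemu) info kvm
   kvm support: enabled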

> well, what version of qemu are you using and does that default to vfio
> or kvm assignment or has the distro modified it to switch the default...
> VFIO offers certain advantages, for instance correctly managing the
> IOMMU domain on systems like Andreas' where KVM can't manage the domain
> of the bridge because it doesn't understand grouping.  There are also
> obvious advantages in the device ownership model.  Users want to be sure
> they're using these things.
> 
> Both KVM and VFIO do strive to make the device in the guest look as much
> like it does on bare metal as possible, but we don't guarantee they're
> identical and we don't guarantee to match each other.  So in fact, we
> can expect subtle differences in how the guest sees it.  Things like the
> capabilities exposed, the emulation/virtualization of some of those
> capabilities, eventually things like express config space support and
> AER error propagation.  These are all a bit more than "add vhost=on to
> your virtio-net-pci options and magically your networking is faster".

I see.  Thanks for the explanation.
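
(As a practical footnote: those guest-visible differences can be inspected
directly by comparing lspci output on the host with what the guest sees at
the assigned slot; the slot numbers here are placeholders.)

   # host view of the physical device
   lspci -vvv -s 01:00.0

   # guest view of the assigned device
   lspci -vvv -s 00:05.0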


-- 
error compiling committee.c: too many arguments to function




