From: Jason Wang
Subject: Re: [Qemu-devel] [virtio-dev] [RFC 0/3] Extend vhost-user to support VFIO based accelerators
Date: Thu, 4 Jan 2018 15:21:55 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.0



On 01/04/2018 14:18, Tiwei Bie wrote:
On Wed, Jan 03, 2018 at 10:34:36PM +0800, Jason Wang wrote:
On 12/22/2017 14:41, Tiwei Bie wrote:
This RFC patch set makes some small extensions to the vhost-user protocol
to support VFIO based accelerators, and makes it possible to get
performance similar to VFIO passthrough while keeping the virtio device
emulation in QEMU.

When we have virtio ring compatible devices, it's possible to set up
the device (DMA mapping, PCI config, etc.) based on the existing info
(memory table, features, vring info, etc.) which is already available
on the vhost backend (e.g. the DPDK vhost library). Such devices can
then be used to accelerate the emulated device for the VM. We call
this vDPA: vhost DataPath Acceleration. The key difference between
VFIO passthrough and vDPA is that in vDPA only the data path (e.g.
ring, notify and queue interrupt) is passed through, while the device
control path (e.g. PCI configuration space and MMIO regions) is still
defined and emulated by QEMU.
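
To make that concrete, here is a rough illustration (the type and callback
names below are made up for this sketch; they are not DPDK or QEMU
structures): the state a vhost-user backend already tracks is enough to
program a virtio ring compatible device, since the memory table gives the
DMA mappings and the vring addresses plus the negotiated features
configure the rings.

#include <stddef.h>
#include <stdint.h>

/* One entry of the vhost-user memory table: enough to set up DMA mapping. */
typedef struct GuestMemRegion {
    uint64_t guest_phys_addr;   /* GPA of the region */
    uint64_t userspace_addr;    /* where the backend has it mmap'ed */
    uint64_t size;
} GuestMemRegion;

/* Per-queue vring layout the backend already received over vhost-user. */
typedef struct VringInfo {
    uint64_t desc_addr;         /* descriptor table */
    uint64_t avail_addr;        /* available ring */
    uint64_t used_addr;         /* used ring */
    uint16_t num;               /* ring size */
} VringInfo;

/* Hypothetical hooks a vDPA-capable backend could offer to hand this
 * existing state over to the hardware instead of processing the rings
 * in software. */
typedef struct VdpaDeviceOps {
    int (*dma_map)(void *dev, const GuestMemRegion *regions, size_t nregions);
    int (*setup_vring)(void *dev, uint16_t queue_index,
                       const VringInfo *vring, uint64_t features);
} VdpaDeviceOps;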

The benefits of keeping the virtio device emulation in QEMU compared
with virtio device VFIO passthrough include (but are not limited to):

- a consistent device interface for the guest OS;
- maximum flexibility in control path and hardware design;
- leveraging the existing virtio live-migration framework;

But the critical issue in vDPA is that the data path performance is
relatively low and some host threads are needed for the data path,
because the mechanisms to support the following are missing:

1) the guest driver notifying the device directly;
2) the device interrupting the guest directly;

So this patch set makes some small extensions to the vhost-user protocol
to make both of them possible. It leverages the same mechanisms as VFIO
passthrough (e.g. EPT and posted interrupts on the Intel platform) to
achieve the data path pass-through.
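
As a rough sketch of the host-side plumbing (not this patch set's actual
code; the file descriptors, offset and GSI number are assumed to have been
handed over by the vhost backend), the two fast paths boil down to an mmap
of the device's notify area and a KVM_IRQFD registration:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <linux/kvm.h>

/* Guest -> device notify: mmap the device's doorbell/notify area.  The VMM
 * would then expose this mapping to the guest as device memory so that
 * vring kicks are EPT-mapped straight to the hardware. */
static void *map_notify_area(int notify_fd, uint64_t offset, size_t size)
{
    return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                notify_fd, (off_t)offset);
}

/* Device -> guest interrupt: attach the device's interrupt eventfd to a
 * guest GSI via KVM_IRQFD, so interrupts are injected in the kernel (and,
 * with posted interrupts, delivered without exiting to the host at all). */
static int attach_irqfd(int kvm_vm_fd, int irq_eventfd, uint32_t gsi)
{
    struct kvm_irqfd irqfd;

    memset(&irqfd, 0, sizeof(irqfd));
    irqfd.fd  = (uint32_t)irq_eventfd;
    irqfd.gsi = gsi;
    return ioctl(kvm_vm_fd, KVM_IRQFD, &irqfd);
}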

A new protocol feature bit is added to negotiate the accelerator feature
support. Two new slave message types are added to enable the notify and
interrupt passthrough for each queue. From the point of view of the
vhost-user protocol design, it's very flexible: the passthrough can be
enabled/disabled for each queue individually, and it's possible to
accelerate each queue with a different device. More design and
implementation details can be found in the last patch.
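
To illustrate the shape of those additions (the names and layouts below are
placeholders for this sketch; the real definitions are in the
docs/interop/vhost-user.txt change in this series), the negotiation and the
per-queue slave messages could look roughly like:

#include <stdint.h>

/* Placeholder protocol feature bit, negotiated like any other vhost-user
 * protocol feature (the actual bit number/name is defined in the patch). */
#define VHOST_USER_PROTOCOL_F_ACCEL_EXAMPLE  30

/* Slave -> master: "mmap this fd/offset as queue N's notify area".  The
 * file descriptor itself travels as SCM_RIGHTS ancillary data on the
 * slave channel. */
typedef struct AccelVringNotifyArea {
    uint32_t queue_index;   /* which virtqueue this applies to */
    uint32_t flags;         /* e.g. enable/disable the fast path */
    uint64_t size;          /* size of the mmap'able area */
    uint64_t offset;        /* offset into the fd */
} AccelVringNotifyArea;

/* Slave -> master: "use the attached eventfd as queue N's interrupt
 * source" (QEMU would wire it up to a KVM irqfd). */
typedef struct AccelVringInterrupt {
    uint32_t queue_index;
    uint32_t flags;
} AccelVringInterrupt;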

There are some rough edges in this patch set (which is why it is an RFC
for now), but it's never too early to hear thoughts from the community,
so any comments and suggestions would be really appreciated!

Tiwei Bie (3):
    vhost-user: support receiving file descriptors in slave_read
    vhost-user: introduce shared vhost-user state
    vhost-user: add VFIO based accelerators support

   docs/interop/vhost-user.txt     |  57 ++++++
   hw/scsi/vhost-user-scsi.c       |   6 +-
   hw/vfio/common.c                |   2 +-
   hw/virtio/vhost-user.c          | 430 +++++++++++++++++++++++++++++++++++++++-
   hw/virtio/vhost.c               |   3 +-
   hw/virtio/virtio-pci.c          |   8 -
   hw/virtio/virtio-pci.h          |   8 +
   include/hw/vfio/vfio.h          |   2 +
   include/hw/virtio/vhost-user.h  |  43 ++++
   include/hw/virtio/virtio-scsi.h |   6 +-
   net/vhost-user.c                |  30 +--
   11 files changed, 561 insertions(+), 34 deletions(-)
   create mode 100644 include/hw/virtio/vhost-user.h

I may be missing something, but may I ask why you must implement this through
vhost-user/DPDK? It looks to me like you could put all of it in QEMU, which could
simplify a lot of things (just like the userspace NVMe driver written by Fam).

Thanks for your comments! :-)

Yeah, you're right. We could also implement everything in QEMU,
like the userspace NVMe driver by Fam. This was also described
by Cunming at KVM Forum 2017. Below is the link to the
slides:

https://events.static.linuxfound.org/sites/events/files/slides/KVM17%27-vDPA.pdf

Thanks for the pointer. Looks rather interesting.


We're also working on this (including defining a standard mdev-based
device for vhost data path acceleration to hide vendor-specific
details).

This is exactly what I mean. From my point of view, there's no need for any extension to the vhost protocol; we just need to reuse a QEMU iothread to implement a userspace vhost dataplane and do the mdev handling inside that thread.
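
In rough outline (generic POSIX code, not QEMU's actual IOThread API; the
process_ring() callback stands in for the in-thread mdev/vDPA handling),
such a dataplane thread would just block on the queue's kick eventfd and
drive the ring from the same loop:

#include <poll.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Generic sketch of a dataplane thread body: wait for the guest's kick
 * (an eventfd the VMM already has per virtqueue) and process the ring. */
static void dataplane_loop(int kick_fd, volatile int *running,
                           void (*process_ring)(void *), void *opaque)
{
    struct pollfd pfd = { .fd = kick_fd, .events = POLLIN };

    while (*running) {
        if (poll(&pfd, 1, 100 /* ms */) <= 0) {
            continue;               /* timeout or transient error */
        }
        uint64_t cnt;
        if (read(kick_fd, &cnt, sizeof(cnt)) == sizeof(cnt)) {
            process_ring(opaque);   /* drain the virtqueue */
        }
    }
}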


And IMO it's also not a bad idea to extend the vhost-user protocol
to support the accelerators if possible. It could be more
flexible because it could easily support (for example) the things
below without introducing any complex command line options or
monitor commands to QEMU:

Maybe I'm wrong, but I don't think we care about the complexity of command line options or monitor commands in this case.


- switching among different accelerators and software versions can be
   done at runtime in the vhost process;
- different accelerators can be used to accelerate different queue pairs,
   or only some (instead of all) queue pairs can be accelerated;

Well, technically, if we wanted to, these could be implemented in QEMU too.

And here are some more advantages of implementing it in QEMU:

1) It avoids extra dependencies like DPDK.
2) It's more flexible; mdev could even choose not to use VFIO, or not to depend on vDPA.
3) Guest IOMMU integration is more efficient, especially for dynamic mappings (device IOTLB transactions could be done through function calls instead of slow messages over the Unix domain socket).
4) Zerocopy (for non-Intel vDPA) is easier to implement.
5) Compared to vhost-user, being tightly coupled with the device emulation can simplify lots of things (one example is a programmable flow director/RSS implementation), and any future enhancement to virtio would not need to introduce new types of vhost-user messages.

I don't object to the vhost-user/DPDK method, but I'm in favor of implementing all of this in QEMU.

Thanks


Best regards,
Tiwei Bie




