qemu-devel
Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0
Date: Tue, 2 May 2017 13:48:04 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

On Thu, Apr 20, 2017 at 01:51:24PM +0800, Wei Wang wrote:
> On 04/19/2017 11:24 PM, Stefan Hajnoczi wrote:
> > On Wed, Apr 19, 2017 at 11:42 AM, Wei Wang <address@hidden> wrote:
> > > On 04/19/2017 05:57 PM, Stefan Hajnoczi wrote:
> > > > On Wed, Apr 19, 2017 at 06:38:11AM +0000, Wang, Wei W wrote:
> > > > > We made some design changes to the original vhost-pci design, and
> > > > > want to open a discussion about the latest design (labelled 2.0)
> > > > > and its extension (2.1).
> > > > > 2.0 design: One VM shares the entire memory of another VM
> > > > > 2.1 design: One VM uses an intermediate memory shared with another
> > > > > VM for packet transmission.
> > > > Hi,
> > > > Can you talk a bit about the motivation for the 2.x design and major
> > > > changes compared to 1.x?
> > > 
> > > 1.x refers to the design we presented at KVM Forum before. The major
> > > changes include:
> > > 1) inter-VM notification support
> > > 2) TX engine and RX engine, which are structures built in the driver.
> > > From the device's point of view, the local rings of the engines need
> > > to be registered with the device.
> > It would be great to support any virtio device type.
> 
> Yes, the current design already supports creating devices of different
> types. The support is added to the vhost-user protocol and the
> vhost-user slave. Once the slave handler receives the request to create
> a device (with the specified device type), the remaining process
> (e.g. device realization) is device specific.
> This part remains the same as presented before (i.e., page 12 of
> http://www.linux-kvm.org/images/5/55/02x07A-Wei_Wang-Design_of-Vhost-pci.pdf).
> > 
> > The use case I'm thinking of is networking and storage appliances in
> > cloud environments (e.g. OpenStack).  vhost-user doesn't fit nicely
> > because users may not be allowed to run host userspace processes.  VMs
> > are first-class objects in compute clouds.  It would be natural to
> > deploy networking and storage appliances as VMs using vhost-pci.
> > 
> > In order to achieve this vhost-pci needs to be a virtio transport and
> > not a virtio-net-specific PCI device.  It would extend the VIRTIO 1.x
> > spec alongside virtio-pci, virtio-mmio, and virtio-ccw.
> 
> Actually it is designed as a device under the virtio-pci transport.
> I'm not sure about the value of adding a new transport.
> 
> > When you say TX and RX I'm not sure if the design only supports
> > virtio-net devices?
> 
> The current design focuses on the vhost-pci-net device; that is the
> reason we have TX/RX here. As mentioned above, when the slave invokes
> the device creation function, execution goes to device-specific code.
> 
> The TX/RX design comes after device creation, so it is specific to
> vhost-pci-net. A future vhost-pci-blk device could have its own
> request queue instead.

Here is my understanding based on your vhost-pci GitHub repo:

VM1 sees a normal virtio-net-pci device.  VM1 QEMU is invoked with a
vhost-user netdev.

VM2 sees a hotplugged vhost-pci-net virtio-pci device once VM1
initializes the device and a message is sent over vhost-user.

There is no integration with the Linux drivers/vhost/ code for VM2.  Instead
you are writing a third virtio-net driver specifically for vhost-pci.  I'm
not sure whether it's possible to reuse drivers/vhost/ cleanly, but that
would be nicer than implementing virtio-net again.

Is the VM1 vhost-user netdev a normal vhost-user device or does it know
about vhost-pci?

It's hard to study code changes in your vhost-pci repo because
everything (QEMU + Linux + your changes) was committed in a single
commit.  Please keep your changes in separate commits so it's easy to
find them.

Stefan

Attachment: signature.asc
Description: PGP signature

