Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0


From: Wei Wang
Subject: Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0
Date: Thu, 20 Apr 2017 13:51:24 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

On 04/19/2017 11:24 PM, Stefan Hajnoczi wrote:
On Wed, Apr 19, 2017 at 11:42 AM, Wei Wang <address@hidden> wrote:
On 04/19/2017 05:57 PM, Stefan Hajnoczi wrote:
On Wed, Apr 19, 2017 at 06:38:11AM +0000, Wang, Wei W wrote:
We made some design changes to the original vhost-pci design, and want
to open a discussion about the latest design (labelled 2.0) and its
extension (2.1).
2.0 design: One VM shares the entire memory of another VM.
2.1 design: One VM uses an intermediate memory shared with another VM
for packet transmission.
Hi,
Can you talk a bit about the motivation for the 2.x design and major
changes compared to 1.x?

1.x refers to the design we presented at KVM Forum before. The major
changes include:
1) inter-VM notification support
2) TX engine and RX engine, which are the structures built in the
driver. From the device's point of view, the local rings of the
engines need to be registered.
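
Purely as an illustration of that driver-side layout (this is a rough
sketch, not code from the RFC; all names are hypothetical), an engine
with a local ring could look roughly like this:

#include <stdint.h>

/* Hypothetical driver-side engine layout; the device side would learn
 * about the local ring through a registration step. */
struct vpnet_ring {
    uint64_t desc_gpa;   /* guest-physical address of the descriptor table */
    uint64_t avail_gpa;  /* guest-physical address of the avail ring       */
    uint64_t used_gpa;   /* guest-physical address of the used ring        */
    uint16_t num;        /* number of descriptors                          */
};

struct vpnet_engine {
    struct vpnet_ring local_ring;  /* ring owned by this driver            */
    int notify_fd;                 /* inter-VM notification, e.g. an       */
                                   /* eventfd wired up by QEMU             */
};

struct vpnet_device {
    struct vpnet_engine tx;  /* TX engine */
    struct vpnet_engine rx;  /* RX engine */
};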
It would be great to support any virtio device type.

Yes, the current design already supports the creation of devices of
different types.
The support is added to the vhost-user protocol and the vhost-user slave.
Once the slave handler receives the request to create the device (with
the specified device type), the remaining process (e.g. device realize)
is device specific.
This part remains the same as presented before
(i.e. Page 12 @ http://www.linux-kvm.org/images/5/55/02x07A-Wei_Wang-Design_of-Vhost-pci.pdf).
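
To make the dispatch idea concrete, here is a minimal sketch; the
message layout and the realize helpers are hypothetical and not the
actual vhost-user protocol extension:

#include <stdint.h>

/* Hypothetical payload of a "create device" request sent to the slave. */
typedef struct VhostPciDeviceCreate {
    uint16_t virtio_device_id;  /* requested virtio device type */
} VhostPciDeviceCreate;

/* Device-specific realize hooks (hypothetical). */
int vhost_pci_net_realize(void);
int vhost_pci_blk_realize(void);

static int vhost_pci_slave_handle_create(const VhostPciDeviceCreate *msg)
{
    /* The generic slave code ends here; everything after this point
     * (device realize, queue setup) is device specific. */
    switch (msg->virtio_device_id) {
    case 1:  /* VIRTIO_ID_NET   */
        return vhost_pci_net_realize();  /* sets up TX/RX engines     */
    case 2:  /* VIRTIO_ID_BLOCK */
        return vhost_pci_blk_realize();  /* would use a request queue */
    default:
        return -1;                       /* type not supported        */
    }
}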

The use case I'm thinking of is networking and storage appliances in
cloud environments (e.g. OpenStack).  vhost-user doesn't fit nicely
because users may not be allowed to run host userspace processes.  VMs
are first-class objects in compute clouds.  It would be natural to
deploy networking and storage appliances as VMs using vhost-pci.

In order to achieve this vhost-pci needs to be a virtio transport and
not a virtio-net-specific PCI device.  It would extend the VIRTIO 1.x
spec alongside virtio-pci, virtio-mmio, and virtio-ccw.

Actually it is designed as a device under the virtio-pci transport.
I'm not sure about the value of having a new transport.

When you say TX and RX I'm not sure if the design only supports
virtio-net devices?

The current design focuses on the vhost-pci-net device. That's the
reason we have TX/RX here. As mentioned above, when the slave
invokes the device creation function, execution goes to the
device-specific code.

The TX/RX design comes after device creation, so it is specific
to vhost-pci-net. A future vhost-pci-blk device can have its own
request queue instead.


The motivation is to build a common design for 2.0 and 2.1.

What is the relationship between 2.0 and 2.1?  Do you plan to upstream
both?
2.0 and 2.1 use different ways to share memory.

2.0: VM1 shares the entire memory of VM2, which achieves zero copies
between VMs but is less secure.
2.1: VM1 and VM2 use an intermediate shared memory to transmit
packets, which results in one copy between VMs but is more secure.
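
For illustration only (hypothetical helper names, not code from the
RFC), the difference in copy count boils down to something like:

#include <stddef.h>
#include <string.h>

/* 2.0 style: VM1 maps all of VM2's memory, so a packet can be read in
 * place through the mapping of VM2's buffer -- zero copies between VMs. */
static const void *rx_packet_2_0(const void *vm2_buf_mapped)
{
    return vm2_buf_mapped;            /* use the peer's buffer directly */
}

/* 2.1 style: the sender copies the packet into an intermediate region
 * shared by both VMs -- one copy between VMs, but neither VM has to
 * expose its entire memory to the other. */
static void tx_packet_2_1(void *shared_region, const void *pkt, size_t len)
{
    memcpy(shared_region, pkt, len);  /* the single inter-VM copy */
}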

Yes, we plan to upstream both. Since the difference is only in the way
memory is shared, I think it won't take many patches to upstream 2.1
once 2.0 is ready (or in the reverse order if needed).
Okay.  "Asymmetric" (vhost-pci <-> virtio-pci) and "symmetric"
(vhost-pci <-> vhost-pci) mode might be a clearer way to distinguish
between the two.  Or even "compatibility" mode and "native" mode since
existing guests only work in vhost-pci <-> virtio-pci mode.  Using
version numbers to describe two different modes of operation could be
confusing.

OK. I'll take your suggestion to use "asymmetric" and
"symmetric". Thanks.


Best,
Wei




