Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0


From: Jason Wang
Subject: Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0
Date: Fri, 5 May 2017 12:05:06 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0



On 2017-04-19 14:38, Wang, Wei W wrote:
Hi,
We made some design changes to the original vhost-pci design, and want to open a discussion about the latest design (labelled 2.0) and its extension (2.1).
2.0 design: One VM shares the entire memory of another VM
2.1 design: One VM uses an intermediate memory shared with another VM for packet transmission.
For the convenience of discussion, I have some pictures presented at this link:
https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost-pci-rfc2.0.pdf

Hi, is there any doc or pointer that describes the design in detail? E.g. patch 4 in v1: https://lists.gnu.org/archive/html/qemu-devel/2016-05/msg05163.html.

Thanks

Fig. 1 shows the common driver frame that we want to use to build the 2.0 and 2.1
designs. A TX/RX engine consists of a local ring and an exotic ring.
Local ring:
1) allocated by the driver itself;
2) registered with the device (i.e. virtio_add_queue())
Exotic ring:
1) ring memory comes from outside the driver, and is exposed to the driver via a BAR MMIO;
2) does not have a registration in the device, so no ioeventfd/irqfd or configuration registers are allocated in the device (see the device-side sketch below)
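
For concreteness, here is a rough device-side sketch of the two ring types, assuming a QEMU virtio device. The VhostPCINet type, its fields, and vpnet_handle_tx are placeholders invented for this illustration, not the code in the draft repo:

/* Illustrative only: VhostPCINet, its fields and vpnet_handle_tx are
 * invented for this sketch; see the vhost-pci-device repo for the real code. */
#include "qemu/osdep.h"
#include "hw/virtio/virtio.h"
#include "hw/pci/pci.h"
#include "exec/memory.h"

typedef struct VhostPCINet {
    VirtIODevice parent_obj;
    VirtQueue *local_tx_vq;        /* local ring */
    MemoryRegion exotic_bar;       /* exotic ring memory, exposed via BAR */
    void *remote_mem_ptr;          /* memory coming from outside the driver */
    uint64_t remote_mem_size;
} VhostPCINet;

static void vpnet_handle_tx(VirtIODevice *vdev, VirtQueue *vq)
{
    /* ... drain the local TX ring ... */
}

static void vpnet_setup_rings(VhostPCINet *vpnet, PCIDevice *pci_dev)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(vpnet);

    /* Local ring: allocated by the guest driver and registered with the
     * device, so it gets the usual ioeventfd/irqfd and config registers. */
    vpnet->local_tx_vq = virtio_add_queue(vdev, 256, vpnet_handle_tx);

    /* Exotic ring: its memory comes from outside the driver (the other VM,
     * or a shared region) and is only exposed through a BAR MMIO window;
     * it is not registered as a virtqueue, so no ioeventfd/irqfd or
     * configuration registers are allocated for it. */
    memory_region_init_ram_ptr(&vpnet->exotic_bar, OBJECT(vpnet),
                               "vhost-pci-exotic-mem",
                               vpnet->remote_mem_size, vpnet->remote_mem_ptr);
    pci_register_bar(pci_dev, 2,
                     PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
                     &vpnet->exotic_bar);
}
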
Fig. 2 shows how the driver frame is used to build the 2.0 design.
1) Asymmetric: vhost-pci-net <-> virtio-net
2) VM1 shares the entire memory of VM2, and the exotic rings are the rings
    from VM2 (see the driver-side sketch after this list).
3) Performance (in terms of copies between VMs):
    TX: 0-copy (packets are put to VM2’s RX ring directly)
    RX: 1-copy (the green arrow in VM1’s RX engine)
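
As a guest-side companion sketch for point 2, the vhost-pci-net driver in VM1 would reach VM2's rings through the device BAR. The BAR index and function name below are assumptions for illustration, not the actual vhost-pci-driver code:

/* Sketch only: the BAR index and naming are assumptions for illustration. */
#include <linux/pci.h>
#include <linux/io.h>

#define VPNET_EXOTIC_BAR 2   /* assumed BAR exposing the exotic memory */

static void __iomem *vpnet_map_exotic_rings(struct pci_dev *pdev)
{
    /* In the 2.0 design this BAR window covers VM2's memory, so the
     * "exotic" rings are simply VM2's virtio-net rings addressed through
     * this mapping; nothing is registered with virtio_add_queue() and no
     * ioeventfd/irqfd is set up for them. */
    return pci_iomap(pdev, VPNET_EXOTIC_BAR, 0);
}
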
Fig. 3 shows how the driver frame is used to build the 2.1 design.
1) Symmetric: vhost-pci-net <-> vhost-pci-net
2) The two VMs share an intermediate memory, allocated by VM1’s vhost-pci device,
for data exchange, and the exotic rings are built on the shared memory (see the sketch after this list)
3) Performance:
    TX: 1-copy
    RX: 1-copy
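
One plausible way for VM1's QEMU to back that intermediate memory is an fd-backed allocation that can be mmap()ed locally (to lay out the exotic rings) and whose fd can be passed to the peer QEMU over a unix socket. The snippet below only illustrates the idea and is not necessarily how the draft code allocates it:

/* Illustration only; the real vhost-pci device may allocate and share
 * this memory differently. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>      /* mmap(); memfd_create() with glibc >= 2.27 */

static void *alloc_intermediate_mem(size_t size, int *fd_out)
{
    int fd = memfd_create("vhost-pci-shm", MFD_CLOEXEC);
    if (fd < 0 || ftruncate(fd, size) < 0) {
        perror("memfd");
        exit(EXIT_FAILURE);
    }
    /* Mapped locally so the exotic rings can be laid out in it; the fd
     * itself can be handed to the other QEMU (e.g. with SCM_RIGHTS over
     * a unix socket) so that VM2 sees the same pages. */
    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }
    *fd_out = fd;
    return mem;
}
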
Fig. 4 shows the inter-VM notification path for 2.0 (2.1 is similar).
The four eventfds are allocated by virtio-net, and shared with vhost-pci-net:
Uses virtio-net’s TX/RX kickfd as the vhost-pci-net’s RX/TX callfd
Uses virtio-net’s TX/RX callfd as the vhost-pci-net’s RX/TX kickfd
Example of how it works:
After packets are put into vhost-pci-net’s TX, the driver kicks TX, which
causes an interrupt associated with fd3 to be injected into virtio-net (see the sketch below).
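
To make the crossed wiring explicit, here is a minimal sketch; the structs and the function are invented for this example and are not the actual QEMU data types:

/* Minimal sketch of the crossed eventfd wiring described above; these
 * structs are invented for the example, not the QEMU structures. */
#include <sys/eventfd.h>

struct vq_fds {
    int kickfd;     /* driver -> device notification */
    int callfd;     /* device -> driver interrupt */
};

struct endpoint {
    struct vq_fds tx, rx;
};

/* virtio-net allocates its four eventfds as usual (e.g. eventfd(0, 0));
 * vhost-pci-net then reuses the very same fds, crosswise. */
static void wire_vhost_pci(struct endpoint *vpnet, const struct endpoint *vnet)
{
    vpnet->rx.callfd = vnet->tx.kickfd;   /* virtio-net TX kick -> vhost-pci RX call */
    vpnet->tx.callfd = vnet->rx.kickfd;   /* virtio-net RX kick -> vhost-pci TX call */
    vpnet->rx.kickfd = vnet->tx.callfd;   /* virtio-net TX call -> vhost-pci RX kick */
    vpnet->tx.kickfd = vnet->rx.callfd;   /* virtio-net RX call -> vhost-pci TX kick */
    /* With this wiring, kicking vhost-pci-net's TX signals the fd that
     * virtio-net registered as an RX callfd, i.e. it injects an interrupt
     * into virtio-net, matching the example above. */
}
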
The draft code of the 2.0 design is ready, and can be found here:
QEMU: https://github.com/wei-w-wang/vhost-pci-device
Guest driver: https://github.com/wei-w-wang/vhost-pci-driver
We tested the 2.0 implementation using the Spirent packet
generator to transmit 64B packets. The results show that the
throughput of vhost-pci reaches around 1.8 Mpps, which is about
twice that of legacy OVS+DPDK.

Does this mean OVS+DPDK can only reach ~0.9 Mpps? A little surprising that the number looks rather low (I can get a similar result if I use a kernel bridge).

Thanks

Also, vhost-pci shows better scalability than OVS+DPDK.
Best,
Wei



