From: Jason Wang
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Fri, 19 May 2017 17:53:07 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0
On 05/19/2017 17:00, Wei Wang wrote:
On 05/19/2017 11:10 AM, Jason Wang wrote:
On 05/18/2017 11:03, Wei Wang wrote:
On 05/17/2017 02:22 PM, Jason Wang wrote:
On 05/17/2017 14:16, Jason Wang wrote:
On 05/16/2017 15:12, Wei Wang wrote:
Hi,
Care to post the driver code too?
OK. It may take some time to clean up the driver code before posting it. In the meantime, you can check the draft at the repo here:
https://github.com/wei-w-wang/vhost-pci-driver
Best,
Wei
Interesting - it looks like there's one copy on the tx side. We used to have zerocopy support in tun for VM2VM traffic. Could you please try to compare it with your vhost-pci-net by:
We can analyze the whole data path - from VM1's network stack sending packets to VM2's network stack receiving them. The number of copies is actually the same for both.
That's why I'm asking you to compare the performance. The only reason
for vhost-pci is performance. You should prove it.
vhost-pci: one copy happens in VM1's driver xmit(), which copies packets from its network stack to VM2's RX ring buffer. (We call it "zerocopy" because there is no intermediate copy between the VMs.)
zerocopy-enabled vhost-net: one copy happens in tun's recvmsg, which copies packets from VM1's TX ring buffer to VM2's RX ring buffer.
Actually, there's a major difference here. You do the copy in the guest, which consumes time slices of the vcpu thread on the host. Vhost_net does this in its own thread, so I feel vhost_net may even be faster here - maybe I was wrong.
The code path through vhost_net is much longer - the ping test shows that zerocopy-based vhost_net reports around 0.237 ms, while vhost-pci reports around 0.06 ms.
Due to an environment issue, I can only report the throughput numbers later.
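For reference, RTT figures like the ones quoted above are usually read off the ping summary line; a minimal sketch of extracting the average programmatically, assuming the standard iputils `rtt min/avg/max/mdev = .../... ms` output format (the summary strings below are hypothetical, chosen to match the numbers in this thread):

```python
import re

def avg_rtt_ms(ping_output: str) -> float:
    """Extract the average RTT in ms from an iputils ping summary line."""
    m = re.search(r"= [\d.]+/([\d.]+)/[\d.]+/[\d.]+ ms", ping_output)
    if m is None:
        raise ValueError("no rtt summary line found")
    return float(m.group(1))

# Hypothetical summary lines for the two setups being compared.
vhost_net_summary = "rtt min/avg/max/mdev = 0.201/0.237/0.301/0.020 ms"
vhost_pci_summary = "rtt min/avg/max/mdev = 0.051/0.060/0.089/0.008 ms"

print(avg_rtt_ms(vhost_net_summary))  # 0.237
print(avg_rtt_ms(vhost_pci_summary))  # 0.06
```

A single ping average is only indicative; as noted below, pps and larger packet sizes also need to be measured before drawing conclusions.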
Yes, vhost-pci should have better latency by design. But we should also measure pps and packet sizes other than 64 bytes. I agree vhost_net has bad latency, but that does not mean it cannot be improved (just because few people have worked on improving it in the past), especially since we know the destination is another VM.
That being said, we compared against vhost-user instead of vhost_net, because vhost-user is the one used in NFV, which we think is a major use case for vhost-pci.
If this is true, why not draft a pmd driver instead of a kernel one?
Yes, that's right. There are actually two directions for the vhost-pci driver implementation - a kernel driver and a DPDK pmd. The QEMU-side device patches were posted first for discussion, because once the device part is ready, we will be able to have the related team work on the pmd driver as well. As usual, the pmd driver would give much better throughput.
I think a pmd should be easier to prototype than a kernel driver.
So, I think at this stage we should focus on reviewing the device part, and use the kernel driver to prove that the device design and implementation are reasonable and functional.
Probably both.
And did you use the virtio-net kernel driver to compare performance? If yes, has OVS-DPDK been optimized for the kernel driver (I think not)?
We used the legacy OVS+DPDK.
Another issue with the existing OVS+DPDK usage is its centralized nature. With vhost-pci, we will be able to decentralize the setup.
Right, so I think we should prove:
- For usage, prove or make vhost-pci better than existing shared-memory-based solutions. (Or is virtio good at shared memory?)
- For performance, prove or make vhost-pci better than existing centralized solutions.
More importantly, if vhost-pci is faster, its kernel driver should also be faster than virtio-net, no?
Sorry about the confusion. We are actually not trying to use vhost-pci to replace virtio-net. Rather, vhost-pci can be viewed as another type of backend for virtio-net, to be used in NFV (the communication channel is vhost-pci-net <-> virtio_net).
My point is that performance numbers are important for proving the correctness of both the design and the engineering. If it's slow, it is less interesting for NFV.
Thanks
Best,
Wei