
Re: [Qemu-devel] vhost-pci and virtio-vhost-user


From: Jason Wang
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Tue, 16 Jan 2018 13:33:13 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.0



On 2018/01/15 18:43, Wei Wang wrote:
On 01/15/2018 04:34 PM, Jason Wang wrote:


On 2018/01/15 15:59, Wei Wang wrote:
On 01/15/2018 02:56 PM, Jason Wang wrote:


On 2018/01/12 18:18, Stefan Hajnoczi wrote:


I just fail to understand why we can't do software-defined networking or storage with the existing virtio devices/drivers (or are there any shortcomings that force us to invent new infrastructure).


Existing virtio-net works with a central vSwitch on the host, which has the following disadvantages:
1) a long code/data path;
2) poor scalability; and
3) host CPU cycles sacrificed to the vSwitch.

Please show me the numbers.

Sure. For 64B packet transmission between two VMs: vhost-user reports ~6.8 Mpps and vhost-pci reports ~11 Mpps, i.e. vhost-pci is ~1.62x (11 / 6.8) faster.


This result is incomplete, so many questions remain:

- What's the configuration of the vhost-user setup?
- What's the result with, e.g., 1500-byte packets?
- You said it improves scalability, but at least I can't draw that conclusion from what you provide here.
- You blame a long code/data path, but give no latency numbers to prove it.

Vhost-pci solves the above issues by providing point-to-point communication between VMs. No matter what the control path finally looks like, the key point is that the data path is P2P between the VMs.

Best,
Wei



Well, I think I've pointed this out several times in my replies to previous versions. Both vhost-pci-net and virtio-net are Ethernet devices, and an Ethernet device is certainly not tied to a central vswitch. There are plenty of methods and tricks that can be used to build a point-to-point data path.


Could you please show an existing example that makes virtio-net work without a host vswitch/bridge?

For vhost-user, it's as simple as running testpmd to do I/O forwarding between two vhost ports. With the kernel, you can do even more tricks: tc, bpf, or whatever else.
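
A minimal sketch of that setup, assuming DPDK's testpmd with two vhost-user ports; the socket paths, core list, and hugepage options below are illustrative, not taken from this thread:

  # testpmd exposes two vhost-user sockets and forwards frames between
  # them, so the two VMs get a direct data path with no vswitch involved.
  testpmd -l 0-2 -n 4 --no-pci \
      --vdev 'net_vhost0,iface=/tmp/vhost-user0.sock' \
      --vdev 'net_vhost1,iface=/tmp/vhost-user1.sock' \
      -- -i --forward-mode=io
  # (then type "start" at the testpmd prompt)

  # Each VM attaches to one socket; vhost-user needs guest RAM in a
  # shared memory backend, e.g. for the first VM:
  qemu-system-x86_64 ... \
      -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0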

Could you also share other p2p data path solutions that you have in mind? Thanks.


Best,
Wei


So my point still stands: both vhost-pci-net and virtio-net are Ethernet devices, and any Ethernet device can connect to another one directly without a switch. Saying that virtio-net cannot connect to another virtio-net directly without a switch obviously makes no sense; it's a network topology issue. Even if that is not a typical setup or configuration, extending the existing backends is the first choice, unless you can prove there are design limitations in the existing solutions.
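
As a sketch of the kernel-side tricks mentioned earlier, tc can redirect frames between two VMs' tap devices directly; the interface names tap-vm1/tap-vm2 are illustrative, not from this thread:

  # Every frame arriving on vm1's tap device is redirected straight to
  # vm2's tap device, and vice versa, so the two virtio-net guests talk
  # point-to-point with no bridge or vswitch in between.
  tc qdisc add dev tap-vm1 handle ffff: ingress
  tc filter add dev tap-vm1 parent ffff: protocol all matchall \
      action mirred egress redirect dev tap-vm2

  tc qdisc add dev tap-vm2 handle ffff: ingress
  tc filter add dev tap-vm2 parent ffff: protocol all matchall \
      action mirred egress redirect dev tap-vm1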

Thanks


