
Re: [Qemu-devel] vhost-pci and virtio-vhost-user


From: Jason Wang
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Mon, 15 Jan 2018 14:56:31 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.0



On 2018-01-12 18:18, Stefan Hajnoczi wrote:
From what I'm understanding, vhost-pci tries to build a scalable V2V private
datapath. But according to what you describe here, virtio-vhost-user tries
to make it possible to implement the device inside another VM. I understand
that the goal of vhost-pci could be done on top, but it looks to me that it
would then be rather similar to the design of the Xen driver domain. So I
cannot figure out how it can be done in a high-performance way.
vhost-pci and virtio-vhost-user both have the same goal.  They allow
a VM to implement a vhost device (net, scsi, blk, etc).
It doesn't look that way: if I read the code correctly, vhost-pci has a device
implementation in QEMU, and the slave VM only has a vhost-pci-net driver.
You are right that the current "[PATCH v3 0/7] Vhost-pci for inter-VM
communication" does not reach this goal yet.  The patch series focusses
on a subset of vhost-user-net for poll mode drivers.

But the goal is to eventually let VMs implement any vhost device type.
Even if Wei, you, or I don't implement scsi, for example, someone else
should be able to do it based on vhost-pci or virtio-vhost-user.

Wei: Do you agree?
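
As a concrete illustration (this is a simplified sketch, not code from the
vhost-pci or virtio-vhost-user patches; the struct and function names below
are made-up stand-ins for the real wire format), the core thing any vhost
backend has to do, whether it runs on the host or inside another VM, is map
the driver VM's memory regions so it can walk the virtqueues directly.  On
the host this information arrives via VHOST_USER_SET_MEM_TABLE with the fds
passed as SCM_RIGHTS ancillary data:

    #include <stdint.h>
    #include <sys/mman.h>

    /* Simplified stand-in for one entry of the memory table a vhost-user
     * master sends with VHOST_USER_SET_MEM_TABLE. */
    struct mem_region_info {
        uint64_t guest_phys_addr;   /* where the region sits in guest RAM */
        uint64_t size;              /* length of the region */
        uint64_t mmap_offset;       /* offset into the fd to map from */
    };

    /* Map one region of the driver VM's RAM.  A shared mapping like this is
     * what later lets the backend poll virtqueues and copy packet data
     * without any vmexits on the data path. */
    static void *map_guest_region(int region_fd,
                                  const struct mem_region_info *r)
    {
        return mmap(NULL, r->size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    region_fd, (off_t)r->mmap_offset);
    }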

This allows
software-defined network or storage appliances running inside a VM to
provide I/O services to other VMs.
Well, I think we can do it even with the existing virtio or whatever other
emulated device; it should not be bound to any specific kind of device.
Please explain the approach you have in mind.

I just fail to understand why we can't do software-defined networking or storage with the existing virtio devices/drivers (or are there shortcomings that force us to invent new infrastructure).


And what's more important, according to the KVM Forum 2016 slides on vhost-pci,
the motivation for vhost-pci is not building an SDN but a chain of VNFs. So
bypassing the central vswitch through a private VM2VM path does make sense.
(Though whether or not vhost-pci is the best choice is still questionable.)
This is probably my fault.  Maybe my networking terminology is wrong.  I
consider "virtual network functions" to be part of "software-defined
networking" use cases.  I'm not implying there must be a central virtual
switch.

To rephrase: vhost-pci enables exitless VM2VM communication.

The problem is that being exitless is not something vhost-pci invents; it can already be achieved today when both sides are busy polling.
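
To make "both sides busy polling" concrete, here is a minimal sketch of the
backend half, assuming the frontend's vring is already mapped into the
backend's address space (as in the sketch above); the function name, the
queue size of 256, and the missing error handling are all just for
illustration:

    #include <stdint.h>
    #include <linux/virtio_ring.h>

    #define QUEUE_SIZE 256   /* illustrative; negotiated in reality */

    /* Spin on the avail ring instead of sleeping on a kick notification,
     * so neither side takes a vmexit on the data path. */
    static uint16_t poll_avail(volatile struct vring_avail *avail,
                               uint16_t last_seen_idx)
    {
        while (avail->idx == last_seen_idx)
            ;                          /* busy poll: no ioeventfd, no exit */
        __sync_synchronize();          /* barrier before reading descriptors */
        return avail->ring[last_seen_idx % QUEUE_SIZE];
    }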


   To the other VMs the devices look
like regular virtio devices.

I'm not sure I understand your reference to the Xen driver domain or
performance.
So what is proposed here is basically memory sharing and event notification
through eventfd. This model has been used by Xen for many years through the
grant table and event channel. Xen uses this to move the backend
implementation from dom0 to a driver domain which has direct access to some
hardware. Consider the case of networking: the driver domain can then run
xen-netback and access the hardware NIC directly.

This makes sense for Xen and for performance, since the driver domain
(backend) can access hardware directly and events are triggered through a
lower-overhead hypercall (or it can do busy polling). But for
virtio-vhost-user, unless you want SR-IOV based solutions inside the slave
VM, I believe we won't want to go back to the Xen model, since hardware
virtualization can bring extra overheads.
Okay, this point is about the NFV use case.  I can't answer that because
I'm not familiar with it.

Even if the NFV use case is not ideal for VMs, there are many other use
cases for VMs implementing vhost devices.  In the cloud the VM is the
first-class object that users can manage.  They do not have the ability
to run vhost-user processes on the host.  Therefore I/O appliances need
to be able to run as VMs, and vhost-pci (or virtio-vhost-user) solves that
problem.

The question is why we must use vhost-user. E.g. in the case of SDN, you can easily deploy an OVS instance with OpenFlow inside a VM and it works like a charm.


   Both vhost-pci and virtio-vhost-user work using shared
memory access to the guest RAM of the other VM.  Therefore they can poll
virtqueues and avoid vmexits.  They also support cross-VM interrupts,
thanks to QEMU setting up irqfd/ioeventfd appropriately on the host.

Stefan
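
Since the irqfd/ioeventfd plumbing keeps coming up, here is roughly what the
notification half looks like from the backend's point of view, assuming the
fds were already received over the vhost-user socket
(VHOST_USER_SET_VRING_KICK / VHOST_USER_SET_VRING_CALL); QEMU's job of
wiring those same fds into KVM as ioeventfd/irqfd is not shown, and the
names below are illustrative:

    #include <stdint.h>
    #include <unistd.h>

    /* Wait for the driver side's doorbell.  Only needed when not busy
     * polling; KVM's ioeventfd signals this fd when the guest writes the
     * queue notify register, without a heavyweight exit to QEMU. */
    static void wait_for_kick(int kick_fd)
    {
        uint64_t count;
        ssize_t n = read(kick_fd, &count, sizeof(count));
        (void)n;                      /* error handling omitted in sketch */
    }

    /* Tell the driver side that buffers were used.  KVM's irqfd turns the
     * write below into a virtio interrupt in the peer VM without going
     * through QEMU userspace. */
    static void send_interrupt(int call_fd)
    {
        uint64_t one = 1;
        ssize_t n = write(call_fd, &one, sizeof(one));
        (void)n;
    }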
So in conclusion, considering the complexity, I would suggest figuring out
whether or not this (either vhost-pci or virtio-vhost-user) is really
required before moving ahead. E.g. for a direct VM2VM network path, this
looks like simply a question of network topology rather than of the device,
so there are plenty of tricks: with vhost-user one can easily imagine
writing an application (or using testpmd) to build a zero-copy VM2VM
datapath. Isn't that sufficient for this case?
See above, I described the general cloud I/O appliance use case.

Stefan

So I understand that vhost-user could be used to build an I/O appliance. What I don't understand is the advantage of using vhost-user, or why we must use it inside a guest.

Thanks


