From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [virtio-dev] [PATCH v3 0/7] Vhost-pci for inter-VM communication
Date: Mon, 11 Dec 2017 11:11:47 +0000
User-agent: Mutt/1.9.1 (2017-09-22)

On Sat, Dec 09, 2017 at 04:23:17PM +0000, Wang, Wei W wrote:
> On Friday, December 8, 2017 4:34 PM, Stefan Hajnoczi wrote:
> > On Fri, Dec 8, 2017 at 6:43 AM, Wei Wang <address@hidden> wrote:
> > > On 12/08/2017 07:54 AM, Michael S. Tsirkin wrote:
> > >>
> > >> On Thu, Dec 07, 2017 at 06:28:19PM +0000, Stefan Hajnoczi wrote:
> > >>>
> > >>> On Thu, Dec 7, 2017 at 5:38 PM, Michael S. Tsirkin <address@hidden> wrote:
> > >>> [...]
> > > Thanks Stefan and Michael for the sharing and discussion. I think
> > > points 3 and 4 above are debatable (e.g. whether it is simpler really
> > > depends). 1 and 2 are implementation choices; I think both approaches
> > > could implement the device that way. We originally thought about one
> > > device and driver to support all types (we sometimes called it the
> > > transformer :-) ). That would be interesting from a research point of
> > > view, but from a real-usage point of view I think it would be better
> > > to have them separated, because:
> > > - different device types have different driver logic, and mixing them
> > > together would make the driver messy. Imagine a networking driver
> > > developer having to wade through block-related code while debugging;
> > > that also increases the difficulty.
> > 
> > I'm not sure I understand where things get messy, because:
> > 1. The vhost-pci device implementation in QEMU relays messages but has
> > no device logic, so device-specific messages like VHOST_USER_NET_SET_MTU
> > are trivial at this layer.
> > 2. vhost-user slaves only handle certain vhost-user protocol messages.
> > They handle device-specific messages for their device type only.  This
> > is like vhost drivers today, where the ioctl() function returns an
> > error if the ioctl is not supported by the device.  It's not messy.
> > 
> > Where are you worried about messy driver logic?
> 
> Probably I didn’t explain it well; let me summarize my thoughts from
> the perspective of the control path and the data path.
> 
> Control path: the vhost-user messages - I would prefer to have the
> interaction only between the QEMUs, instead of relaying to the
> GuestSlave, because
> 1) the claimed advantage (easier to debug and develop) doesn’t seem
> very convincing

You are defining a mapping from the vhost-user protocol to a custom
virtio device interface.  Every time the vhost-user protocol (feature
bits, messages, etc.) is extended, it will be necessary to map the new
extension to the virtio device interface.

That's non-trivial.  Mistakes are possible when designing the mapping.
Using the vhost-user protocol as the device interface minimizes the
effort and risk of mistakes because most messages are relayed 1:1.
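
To make this concrete, here is a rough sketch of what the relay could
look like in the vhost-pci device model (VhostPciDev and the helper
functions are hypothetical, not existing QEMU code; the message names
are real vhost-user requests):

    /* Sketch of a 1:1 vhost-user relay in the vhost-pci device model. */
    static int vhost_pci_relay(VhostPciDev *dev, VhostUserMsg *msg)
    {
        switch (msg->request) {
        case VHOST_USER_SET_MEM_TABLE:
            /* One of the few messages that needs rewriting: fds and
             * addresses must be translated before the guest can use
             * them (see below). */
            return vhost_pci_rewrite_mem_table(dev, msg);
        default:
            /* Everything else, including device-specific messages such
             * as VHOST_USER_NET_SET_MTU, is forwarded verbatim. */
            return relay_to_guest(dev, msg);
        }
    }

A new protocol message then needs no new device interface design at
all: it simply falls into the default case.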

> 2) some messages can be answered directly by the QemuSlave, and some
> message contents are not useful to the GuestSlave (inside the VM),
> e.g. the fds and the VhostUserMemoryRegion entries from the
> SET_MEM_TABLE msg (the device first maps the master memory and gives
> the guest the offset of the mapped gpa within the bar, i.e. where it
> sits in the bar; if we gave the raw VhostUserMemoryRegion to the
> guest, it wouldn’t be usable).

I agree that QEMU has to handle some of the messages, but it should
still relay all (possibly modified) messages to the guest.
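
For SET_MEM_TABLE, the modification could look roughly like this (the
helpers and the BAR layout are assumptions for illustration, not the
actual patches):

    /* Hypothetical rewrite of SET_MEM_TABLE: map each master region
     * into the vhost-pci BAR and replace the master's userspace
     * address with a BAR offset the guest slave can actually use. */
    static int vhost_pci_rewrite_mem_table(VhostPciDev *dev,
                                           VhostUserMsg *msg)
    {
        VhostUserMemory *mem = &msg->payload.memory;
        uint64_t bar_offset = 0;

        for (uint32_t i = 0; i < mem->nregions; i++) {
            VhostUserMemoryRegion *r = &mem->regions[i];

            /* Assumed helper: mmap the region's fd and back part of
             * the device BAR with the mapping. */
            if (vhost_pci_map_into_bar(dev, r, bar_offset) < 0) {
                return -1;
            }
            /* The master's fd and hva mean nothing in the guest, so
             * hand the guest the offset within the BAR instead. */
            r->userspace_addr = bar_offset;
            bar_offset += r->memory_size;
        }
        /* The fds are consumed here and not forwarded. */
        return relay_to_guest(dev, msg);
    }

The guest slave still receives a SET_MEM_TABLE message with the usual
layout, so the vhost-user semantics are preserved end to end.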

The point of using the vhost-user protocol is not just to use a familiar
binary encoding, it's to match the semantics of vhost-user 100%.  That
way the vhost-user software stack can work either in host userspace or
with vhost-pci without significant changes.

Using the vhost-user protocol as the device interface doesn't seem any
harder than defining a completely new virtio device interface.  It has
the advantages that I've pointed out:

1. A simple 1:1 mapping for most messages that is easy to maintain as
   the vhost-user protocol grows.

2. Compatibility with vhost-user, so slaves can run in host userspace
   or in the guest.

I don't see why it makes sense to define new device interfaces for each
device type and create a software stack that is incompatible with
vhost-user.

> 
> 
> Data path: that's the discussion we had about one driver versus
> separate drivers for the different device types, and it is not related
> to the control path.
> I meant that if we have one driver for all the types, that driver
> would look messy, because each type has its own data sending/receiving
> logic. For example, the net type deals with a pair of tx and rx queues
> and transmission is skb based (e.g. xmit_skb), while the block type
> deals with a request queue. With one driver, all of these things would
> be lumped together.

I don't understand this.  Why would we have to put all devices (net,
scsi, etc.) into just one driver?  The device drivers sit on top of the
vhost-pci driver.

For example, imagine a libvhost-user application that handles the net
device.  The vhost-pci vfio driver would be part of libvhost-user and
the application would only emulate the net device (RX and TX queues).
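
In rough pseudo-C the split could look like this (the names are
illustrative; libvhost-user's actual API differs in its details):

    /* The transport (vhost-pci vfio here) lives inside libvhost-user;
     * the application implements only the net device. */
    static void net_queue_handler(VuDev *dev, int qidx)
    {
        /* Net-specific logic only: pop TX/RX buffers and move packets.
         * Nothing here knows whether the transport is an AF_UNIX
         * socket or the vhost-pci vfio driver. */
    }

    int main(void)
    {
        VuDev dev;

        /* Assumed setup call: initialize libvhost-user on top of the
         * vhost-pci vfio transport instead of a unix socket. */
        vhost_pci_vfio_init(&dev);

        /* Register the net device's queue handlers (illustrative). */
        set_queue_handler(&dev, 0, net_queue_handler);  /* rx */
        set_queue_handler(&dev, 1, net_queue_handler);  /* tx */

        /* libvhost-user dispatches the protocol messages and calls
         * back into the net handlers above. */
        run_event_loop(&dev);
        return 0;
    }

The same application could then run in host userspace (over an AF_UNIX
socket) or inside a guest (over vhost-pci) just by swapping the
transport initialization.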

Stefan
