

From: Wang, Wei W
Subject: Re: [Qemu-devel] [virtio-dev] [PATCH v3 0/7] Vhost-pci for inter-VM communication
Date: Sat, 9 Dec 2017 16:23:17 +0000

On Friday, December 8, 2017 4:34 PM, Stefan Hajnoczi wrote:
> On Fri, Dec 8, 2017 at 6:43 AM, Wei Wang <address@hidden> wrote:
> > On 12/08/2017 07:54 AM, Michael S. Tsirkin wrote:
> >>
> >> On Thu, Dec 07, 2017 at 06:28:19PM +0000, Stefan Hajnoczi wrote:
> >>>
> >>> On Thu, Dec 7, 2017 at 5:38 PM, Michael S. Tsirkin <address@hidden>
> > Thanks Stefan and Michael for the sharing and discussion. I think
> > above 3 and 4 are debatable (e.g. whether it is simpler really
> > depends). 1 and 2 are implementations, I think both approaches could
> > implement the device that way. We originally thought about one device
> > and driver to support all types (called it transformer sometimes :-)
> > ), that would look interesting from research point of view, but from
> > real usage point of view, I think it would be better to have them separated,
> because:
> > - different device types have different driver logic, mixing them
> > together would cause the driver to look messy. Imagine that a
> > networking driver developer has to go over the block related code to
> > debug, that also increases the difficulty.
> 
> I'm not sure I understand where things get messy because:
> 1. The vhost-pci device implementation in QEMU relays messages but has no
> device logic, so device-specific messages like VHOST_USER_NET_SET_MTU are
> trivial at this layer.
> 2. vhost-user slaves only handle certain vhost-user protocol messages.
> They handle device-specific messages for their device type only.  This is like
> vhost drivers today where the ioctl() function returns an error if the
> ioctl is not supported by the device.  It's not messy.
> 
> Where are you worried about messy driver logic?

Probably I didn't explain it well; let me summarize my thoughts a bit, from the
perspective of the control path and the data path.

Control path: the vhost-user messages. I would prefer to have the interaction
only between the QEMUs, instead of relaying the messages to the GuestSlave, because
1) the claimed advantage (easier to debug and develop) doesn't seem very
convincing to me;
2) some messages can be answered directly by the QemuSlave, and some message
contents are not useful to the GuestSlave (inside the VM), e.g. the fds and the
VhostUserMemoryRegion entries from the SET_MEM_TABLE msg (the device first maps
the master memory and gives the guest the offset of the mapped gpa in terms of
the bar, i.e., where it sits in the bar; handing the raw VhostUserMemoryRegion
to the guest wouldn't be usable).
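
To illustrate the kind of translation I mean, here is a rough sketch only; the
structs and names below are made up for illustration, not the actual vhost-pci
code:

/* Illustrative sketch only -- not the actual vhost-pci code.  It shows the
 * kind of translation the QemuSlave would do: the master's memory regions
 * are mmap()ed from the received fds and exposed through a device bar, so
 * the guest only sees (master_gpa, bar_offset, size) tuples and never the
 * raw VhostUserMemoryRegion or the fds. */
#include <stdint.h>
#include <sys/mman.h>

struct vhost_user_mem_region {      /* as received in SET_MEM_TABLE */
    uint64_t guest_phys_addr;       /* master VM's gpa */
    uint64_t memory_size;
    uint64_t userspace_addr;
    uint64_t mmap_offset;
};

struct vpci_guest_region {          /* what the guest driver would see */
    uint64_t master_gpa;
    uint64_t bar_offset;            /* where the region sits in the bar */
    uint64_t size;
};

/* Map one master region behind the bar and report only the bar offset. */
static int vpci_export_region(const struct vhost_user_mem_region *r, int fd,
                              void *bar_base, uint64_t *next_bar_off,
                              struct vpci_guest_region *out)
{
    void *p = mmap((char *)bar_base + *next_bar_off, r->memory_size,
                   PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
                   fd, r->mmap_offset);
    if (p == MAP_FAILED)
        return -1;

    out->master_gpa = r->guest_phys_addr;
    out->bar_offset = *next_bar_off;
    out->size       = r->memory_size;
    *next_bar_off  += r->memory_size;
    return 0;
}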


Data path: that's the discussion we had about one driver vs. separate drivers
for the different device types, and it is not related to the control path.
I meant that if we have one driver for all the types, that driver would look
messy, because each type has its own data sending/receiving logic. For example,
the net type deals with a pair of tx and rx queues and transmission is skb
based (e.g. xmit_skb), while the block type deals with a request queue. With
one driver, all of that logic ends up lumped together.
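
A rough sketch of what I mean by "messy" (hypothetical code, just to show the
shape of the problem; none of these names exist today):

/* Hypothetical sketch, not real driver code: it only illustrates why one
 * driver for all device types gets messy -- the data path has to branch on
 * the type and carry both skb-style net logic and request-queue block logic
 * in the same place. */
#include <errno.h>

enum vpci_dev_type { VPCI_NET, VPCI_BLK };

struct vpci_dev {
    enum vpci_dev_type type;
    /* ... per-type state: tx/rx rings for net, request queue for blk ... */
};

static int vpci_net_xmit(struct vpci_dev *d, void *pkt)    { return 0; } /* stub */
static int vpci_blk_queue_rq(struct vpci_dev *d, void *rq) { return 0; } /* stub */

static int vpci_send(struct vpci_dev *dev, void *payload)
{
    switch (dev->type) {
    case VPCI_NET:
        return vpci_net_xmit(dev, payload);     /* skb-based tx path */
    case VPCI_BLK:
        return vpci_blk_queue_rq(dev, payload); /* request-queue based */
    }
    return -EINVAL;
}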


The last part is whether to make it a virtio device or a regular PCI device.
I don't have a strong preference. I think a virtio device works fine (e.g. use
some bar area to create ioeventfds to solve the "no virtqueue, no fds" issue,
if you and Michael think that's acceptable), and we can reuse other things
like feature negotiation from virtio. But if you and Michael decide to make it
a regular PCI device, I think that would also work.
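
For reference, a minimal sketch of the bar-based doorbell idea on the QEMU
side, using the existing memory_region_add_eventfd(); the offset and function
names here are made up, it only shows the direction:

/* Minimal sketch (QEMU context assumed), only to illustrate carving an
 * ioeventfd out of a bar area when there is no virtqueue to attach it to.
 * VPCI_DOORBELL_OFFSET and vpci_setup_doorbell() are made-up names. */
#include "qemu/osdep.h"
#include "qemu/event_notifier.h"
#include "exec/memory.h"

#define VPCI_DOORBELL_OFFSET 0x1000   /* illustrative offset inside the bar */

static void vpci_setup_doorbell(MemoryRegion *doorbell_bar,
                                EventNotifier *notifier)
{
    /* A 4-byte write to this bar offset kicks the eventfd, so the peer can
     * be notified without a virtqueue-backed ioeventfd. */
    event_notifier_init(notifier, 0);
    memory_region_add_eventfd(doorbell_bar, VPCI_DOORBELL_OFFSET,
                              4, false, 0, notifier);
}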

Best,
Wei
