Re: [Qemu-devel] [virtio-dev] [PATCH v3 0/7] Vhost-pci for inter-VM communication


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [virtio-dev] [PATCH v3 0/7] Vhost-pci for inter-VM communication
Date: Fri, 8 Dec 2017 16:15:43 +0000

On Fri, Dec 8, 2017 at 2:27 PM, Michael S. Tsirkin <address@hidden> wrote:
> On Fri, Dec 08, 2017 at 06:08:05AM +0000, Stefan Hajnoczi wrote:
>> On Thu, Dec 7, 2017 at 11:54 PM, Michael S. Tsirkin <address@hidden> wrote:
>> > On Thu, Dec 07, 2017 at 06:28:19PM +0000, Stefan Hajnoczi wrote:
>> >> On Thu, Dec 7, 2017 at 5:38 PM, Michael S. Tsirkin <address@hidden> wrote:
>> >>
>> >> > Besides, this means implementing iotlb in both qemu and guest.
>> >>
>> >> It's free in the guest, the libvhost-user stack already has it.
>> >
>> > That library is designed to work with a unix domain socket
>> > though. We'll need extra kernel code to make a device
>> > pretend it's a socket.
>>
>> A kernel vhost-pci driver isn't necessary because I don't think there
>> are in-kernel users.
>>
>> A vfio vhost-pci backend can go alongside the UNIX domain socket
>> backend that exists today in libvhost-user.
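>>
>> To make that concrete, a vfio transport underneath libvhost-user could
>> look roughly like this (error handling omitted; the IOMMU group number
>> and the BAR index are assumptions for illustration, not something the
>> patches define):
>>
>> #include <fcntl.h>
>> #include <sys/ioctl.h>
>> #include <sys/mman.h>
>> #include <linux/vfio.h>
>>
>> static void *map_vhost_pci_bar(void)
>> {
>>     int container = open("/dev/vfio/vfio", O_RDWR);
>>     int group = open("/dev/vfio/1", O_RDWR);  /* IOMMU group of 00:04.0 */
>>     int device;
>>     struct vfio_region_info reg = { .argsz = sizeof(reg) };
>>
>>     /* Error checking omitted for brevity. */
>>     ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
>>     ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
>>     device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:00:04.0");
>>
>>     /* Assumption: one of the BARs exposes the peer guest's memory;
>>      * which BAR it really is comes from the vhost-pci device spec. */
>>     reg.index = VFIO_PCI_BAR2_REGION_INDEX;
>>     ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);
>>
>>     return mmap(NULL, reg.size, PROT_READ | PROT_WRITE, MAP_SHARED,
>>                 device, reg.offset);
>> }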
>>
>> If we want to expose kernel vhost devices via vhost-pci then a
>> libvhost-user program can translate the vhost-user protocol into
>> kernel ioctls.  For example:
>> $ vhost-pci-proxy --vhost-pci-addr 00:04.0 --vhost-fd 3 3<>/dev/vhost-net
>>
>> The vhost-pci-proxy implements the vhost-user protocol callbacks and
>> submits ioctls on the vhost kernel device fd.  I haven't compared the
>> kernel ioctl interface vs the vhost-user protocol to see if everything
>> maps cleanly though.
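>>
>> For the messages that do map cleanly the translation would be almost
>> 1:1.  A rough sketch (not code from an existing program; the message
>> struct is a simplified stand-in for the real vhost-user wire format,
>> and the request numbers are from the vhost-user spec):
>>
>> #include <stdint.h>
>> #include <sys/ioctl.h>
>> #include <linux/vhost.h>
>>
>> /* Simplified view of an incoming vhost-user message; the real wire
>>  * format also carries flags and a payload size, and fds arrive via
>>  * SCM_RIGHTS on the socket. */
>> struct vhost_user_msg {
>>     uint32_t request;
>>     union {
>>         uint64_t u64;
>>         struct vhost_vring_state state;
>>     } payload;
>>     int fd;                          /* passed fd, or -1 */
>> };
>>
>> enum {                               /* per the vhost-user spec */
>>     VHOST_USER_SET_FEATURES   = 2,
>>     VHOST_USER_SET_VRING_NUM  = 8,
>>     VHOST_USER_SET_VRING_BASE = 10,
>>     VHOST_USER_SET_VRING_KICK = 12,
>> };
>>
>> static int proxy_handle_msg(int vhost_fd, const struct vhost_user_msg *msg)
>> {
>>     struct vhost_vring_file file;
>>
>>     switch (msg->request) {
>>     case VHOST_USER_SET_FEATURES:
>>         return ioctl(vhost_fd, VHOST_SET_FEATURES, &msg->payload.u64);
>>     case VHOST_USER_SET_VRING_NUM:
>>         return ioctl(vhost_fd, VHOST_SET_VRING_NUM, &msg->payload.state);
>>     case VHOST_USER_SET_VRING_BASE:
>>         return ioctl(vhost_fd, VHOST_SET_VRING_BASE, &msg->payload.state);
>>     case VHOST_USER_SET_VRING_KICK:
>>         /* vring index is in the low byte of the u64 payload */
>>         file.index = msg->payload.u64 & 0xff;
>>         file.fd = msg->fd;
>>         return ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file);
>>     default:
>>         return -1;                   /* SET_MEM_TABLE etc. need more care */
>>     }
>> }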
>>
>> Stefan
>
> I don't really like this: it's yet another package to install, yet
> another process that complicates debugging, and yet another service
> that can go down.
>
> Maybe vsock can do the trick though?

I'm not sure what you have in mind.

An in-kernel vhost-pci driver is possible too.  The neatest
integration would be alongside drivers/vhost/vhost.c so that existing
net, scsi, vsock vhost drivers can work with vhost-pci.  Userspace
still needs to configure the devices and associate them with a
vhost-pci instance (e.g. which vhost-pci device should be a vhost_scsi
target and the SCSI target configuration).  But I think this approach
is more work than the vhost-pci-proxy program I've described.
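
To give an idea of the userspace side: today's vhost_scsi configuration
is just a few ioctls on /dev/vhost-scsi, and with an in-kernel vhost-pci
driver something similar would also have to tell the kernel which
vhost-pci device backs the instance.  A rough sketch (the final binding
step is purely hypothetical, no such interface exists today):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_scsi_setup(const char *wwpn, unsigned short tpgt)
{
    int fd = open("/dev/vhost-scsi", O_RDWR);
    __u64 features;
    struct vhost_scsi_target target = { .vhost_tpgt = tpgt };

    strncpy(target.vhost_wwpn, wwpn, sizeof(target.vhost_wwpn) - 1);

    /* Error checking omitted for brevity. */
    ioctl(fd, VHOST_SET_OWNER, NULL);
    ioctl(fd, VHOST_GET_FEATURES, &features);
    ioctl(fd, VHOST_SET_FEATURES, &features);
    ioctl(fd, VHOST_SCSI_SET_ENDPOINT, &target);

    /* Hypothetical: associate this vhost instance with a vhost-pci
     * device instead of a virtio-scsi device in the local guest.
     * No such ioctl exists today.
     *
     * ioctl(fd, VHOST_SET_VHOST_PCI_DEV, "0000:00:04.0");
     */

    return fd;
}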

Stefan


