From: Cornelia Huck
Subject: Re: [Qemu-devel] [virtio-dev] [PATCH v3 2/7] vhost-pci-net: add vhost-pci-net
Date: Tue, 5 Dec 2017 18:00:10 +0100

On Tue, 5 Dec 2017 18:53:29 +0200
"Michael S. Tsirkin" <address@hidden> wrote:

> On Tue, Dec 05, 2017 at 04:41:54PM +0000, Stefan Hajnoczi wrote:
> > On Tue, Dec 05, 2017 at 05:55:45PM +0200, Michael S. Tsirkin wrote:  
> > > On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:  
> > > > On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:  
> > > > > Add the vhost-pci-net device emulation. The device uses bar 2
> > > > > to expose the remote VM's memory to the guest. The first 4KB
> > > > > of the bar area stores the metadata which describes the remote
> > > > > memory and vring info.
> > > > >  
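For reference, the 4KB metadata area described above might be laid out
roughly like this; the struct and field names below are illustrative
guesses, not taken from the patch:

/* Illustrative only: the real layout is defined by the patches,
 * not by this struct; all names and array sizes here are guesses. */
#include <stdint.h>

#define VPNET_MAX_REGIONS 8
#define VPNET_MAX_VQS     8

struct vpnet_mem_region {
    uint64_t gpa;        /* guest-physical base in the remote VM */
    uint64_t size;       /* region size in bytes */
    uint64_t bar_offset; /* offset of the region within BAR 2 */
};

struct vpnet_vring {
    uint64_t desc;       /* remote guest addresses of the vring parts */
    uint64_t avail;
    uint64_t used;
    uint16_t num;        /* queue size */
};

/* Lives in the first 4KB of the BAR; the mapped remote memory
 * follows after it. */
struct vpnet_metadata {
    uint32_t nregions;
    uint32_t nvqs;
    struct vpnet_mem_region regions[VPNET_MAX_REGIONS];
    struct vpnet_vring vrings[VPNET_MAX_VQS];
};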
> > > > 
> > > > This device looks like the beginning of a new "vhost-pci" virtio device
> > > > type.  There are layering violations:
> > > > 
> > > > 1. This has nothing to do with virtio-net or networking; it's purely
> > > >    vhost-pci.  Why is it called vhost-pci-net instead of vhost-pci?
> > > > 
> > > > 2. VirtIODevice does not know about PCI.  It should work over virtio-ccw
> > > >    or virtio-mmio.  This patch talks about BARs inside a VirtIODevice,
> > > >    so there is a problem here.
> > > 
> > > I think the point is how memory is exposed to another guest.  This
> > > device exposes it as a pci bar. I don't think e.g. ccw can do this;
> > > it's all hypercall-based.
> > 
> > Yes, that's why the BAR issue needs to be discussed.
> > 
> > In terms of the patches, the clean way to do it is for the
> > vhost-pci device to have a memory region that is not called "BAR".  The
> > virtio-pci transport can expose it as a BAR but the device doesn't need
> > to know about it.  Other transports that support memory mapping could
> > then work with this device too.  
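A minimal sketch of that split, assuming a hypothetical VHostPCIDevice
type and field names (memory_region_init_ram() and pci_register_bar()
are the real QEMU APIs; everything else is made up for illustration):

/* Transport-agnostic device code: no mention of BARs. */
static void vhost_pci_realize(VHostPCIDevice *vp, Error **errp)
{
    memory_region_init_ram(&vp->remote_mem, OBJECT(vp),
                           "vhost-pci-remote-mem",
                           vp->remote_mem_size, errp);
}

/* virtio-pci transport code: only here does the region become BAR 2. */
static void vhost_pci_transport_plug(VirtIOPCIProxy *proxy,
                                     VHostPCIDevice *vp)
{
    pci_register_bar(&proxy->pci_dev, 2,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_PREFETCH |
                     PCI_BASE_ADDRESS_MEM_TYPE_64,
                     &vp->remote_mem);
}

A ccw or mmio transport that grew a memory-mapping facility could then
expose the same region its own way, without touching the device.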
> 
> True, though mmio is pretty much a legacy transport at this point,
> at least from a QEMU perspective, as arm devs don't seem to be
> working on virtio 1.0 support in qemu. So I am not sure how much
> of a priority transport isolation should be.

I currently don't see an easy way to make this work via ccw, FWIW. We
would need a dedicated mechanism for it, and I'm not sure what the gain
would be.

> 
> > The VIRTIO specification needs to capture this transport requirement
> > somehow too so it's clear that the vhost device can only run over
> > transports that support memory mapping.
> > 
> > That said, it's not clear to me why the vhost-pci device is a VIRTIO
> > device.  It doesn't use virtqueues or the configuration space.  It only
> > uses the vhost-user chardev and the mapped memory.  Isn't it better to
> > make it a PCI device?
> > 
> > Stefan  
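A plain PCI device along those lines would register directly against
TYPE_PCI_DEVICE rather than going through a virtio transport; a minimal
sketch, with a hypothetical VHostPCIDevice struct and a made-up device
ID:

static void vhost_pci_realize(PCIDevice *dev, Error **errp)
{
    /* set up the vhost-user chardev and BAR 2 here (omitted) */
}

static void vhost_pci_class_init(ObjectClass *klass, void *data)
{
    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

    k->realize   = vhost_pci_realize;
    k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
    k->device_id = 0x10f0;   /* made up for illustration */
    k->class_id  = PCI_CLASS_OTHERS;
}

static const TypeInfo vhost_pci_info = {
    .name          = "vhost-pci",
    .parent        = TYPE_PCI_DEVICE,
    .instance_size = sizeof(VHostPCIDevice),
    .class_init    = vhost_pci_class_init,
};

static void vhost_pci_register_types(void)
{
    type_register_static(&vhost_pci_info);
}
type_init(vhost_pci_register_types);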
> 
> Seems similar enough to me, except the roles of device and driver are
> reversed here.
> 

But will anything other than pci ever make use of this?


