From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 0/6] Extend vhost-user to support VFIO based accelerators
Date: Thu, 29 Mar 2018 07:16:14 +0300

On Thu, Mar 29, 2018 at 11:33:29AM +0800, Tiwei Bie wrote:
> On Wed, Mar 28, 2018 at 06:33:01PM +0300, Michael S. Tsirkin wrote:
> > On Wed, Mar 28, 2018 at 08:24:07PM +0800, Tiwei Bie wrote:
> > > > > Update notes
> > > > > ============
> > > > > 
> > > > > IOMMU feature bit check is removed in this version, because:
> > > > > 
> > > > > The IOMMU feature is negotiable: when an accelerator that
> > > > > doesn't support the virtual IOMMU is used, its driver simply
> > > > > won't offer this feature bit when the vhost library queries its
> > > > > features, and if it does support the virtual IOMMU, its driver
> > > > > can offer the bit. So it's not reasonable to hard-code this
> > > > > limitation in this patch set.
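> > > > >
> > > > > A minimal sketch of the idea (the device struct and callback
> > > > > name are illustrative, not from this patch set):
> > > > >
> > > > >     #include <stdint.h>
> > > > >     #include <stdbool.h>
> > > > >     #include <linux/virtio_config.h> /* VIRTIO_F_IOMMU_PLATFORM */
> > > > >
> > > > >     struct accel_dev {               /* illustrative device state */
> > > > >             uint64_t base_features;  /* bits the device always offers */
> > > > >             bool     has_viommu;     /* can the hardware honor an IOMMU? */
> > > > >     };
> > > > >
> > > > >     /* Feature bits the accelerator's driver reports when the
> > > > >      * vhost library queries it; VIRTIO_F_IOMMU_PLATFORM is
> > > > >      * offered only when the device supports the virtual IOMMU. */
> > > > >     static uint64_t accel_get_features(struct accel_dev *dev)
> > > > >     {
> > > > >             uint64_t features = dev->base_features;
> > > > >
> > > > >             if (dev->has_viommu)
> > > > >                     features |= 1ULL << VIRTIO_F_IOMMU_PLATFORM;
> > > > >
> > > > >             return features;
> > > > >     }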
> > > > 
> > > > Fair enough. Still:
> > > > Can hardware on Intel platforms actually support IOTLB requests?
> > > > Don't you need to add support for vIOMMU shadowing instead?
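> > > > (By IOTLB requests I mean the IOTLB messages defined in
> > > > linux/vhost.h, which look roughly like this:
> > > >
> > > >     struct vhost_iotlb_msg {
> > > >             __u64 iova;
> > > >             __u64 size;
> > > >             __u64 uaddr;
> > > >             __u8  perm;   /* VHOST_ACCESS_RO/WO/RW */
> > > >             __u8  type;   /* VHOST_IOTLB_MISS/UPDATE/INVALIDATE/... */
> > > >     };
> > > >
> > > > the device must be able to report misses and consume updates.)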
> > > > 
> > > 
> > > For the hardware I have, I guess it can't for now.
> > 
> > So VFIO in QEMU has support for vIOMMU shadowing.
> > Can you use that somehow?
> 
> Yeah, I guess we can use it in some way. Actually, supporting
> vIOMMU is quite an interesting feature. It would provide
> better security, and in the hardware backend case there
> would be no performance penalty with static mapping once
> the backend has all the mappings. I think it could be done
> as separate work. Based on your previous suggestion in this
> thread, I have split the guest notification offload and the
> host notification offload (I'll send the new version very
> soon), and I plan to let this patch set focus on fixing the
> most critical performance issue - the host notification offload.
> With this fix, using a hardware backend with vhost-user gets
> a very big performance boost and becomes much more practical.
> So maybe we can focus on fixing this critical performance issue
> first. What do you think?
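>
> A minimal sketch of the host notification offload idea (the VFIO
> ioctl and mmap are real; the helper and the BAR choice are
> illustrative):
>
>     #include <sys/ioctl.h>
>     #include <sys/mman.h>
>     #include <linux/vfio.h>
>
>     /* Map the accelerator's doorbell BAR from the VFIO device fd.
>      * Once mapped, the page can be plugged into the guest's address
>      * space so queue notifications reach the hardware directly,
>      * without bouncing through the vhost-user backend. */
>     static void *map_doorbell(int device_fd)
>     {
>             struct vfio_region_info info = {
>                     .argsz = sizeof(info),
>                     .index = VFIO_PCI_BAR0_REGION_INDEX, /* assumed */
>             };
>
>             if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0 ||
>                 !(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
>                     return MAP_FAILED;
>
>             return mmap(NULL, info.size, PROT_READ | PROT_WRITE,
>                         MAP_SHARED, device_fd, info.offset);
>     }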

I think correctness and security come first, before performance.
vIOMMU falls under security.

> > 
> > Ability to run DPDK within the guest seems important.
> 
> I think vIOMMU isn't a must for running DPDK in a guest.

Oh yes it is.

> For Linux
> guests we also have igb_uio and uio_pci_generic to run DPDK,
> and for FreeBSD guests we have nic_uio.

These hacks offer no protection from a buggy userspace corrupting
guest kernel memory. Given that DPDK is routinely linked into
closed-source applications, this is not a configuration anyone can
support.


> They don't need vIOMMU,
> and they could offer the best performance.
> 
> Best regards,
> Tiwei Bie
> 
> > 
> > -- 
> > MST
> > 


