Re: [Qemu-devel] [PATCH] hw/misc: slavepci_passthru driver


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH] hw/misc: slavepci_passthru driver
Date: Tue, 19 Jan 2016 08:51:22 -0700

On Tue, 2016-01-19 at 11:30 +0100, Francesco Zuliani wrote:
> Hi Alex,
> 
> 
> On 01/18/2016 05:41 PM, Alex Williamson wrote:
> > On Mon, 2016-01-18 at 10:16 -0500, Marc-André Lureau wrote:
> > > Hi
> > > 
> > > ----- Original Message -----
> > > > Hi there,
> > > > 
> > > > I'd like to submit this new PCI driver (hw/misc) for inclusion,
> > > > if you think it could be useful to others as well as ourselves.
> > > > 
> > > > The driver "worked for our needs", BUT we haven't done extensive
> > > > testing, and this is our first attempt to submit a patch, so I kindly
> > > > ask for extra forgiveness.
> > > > 
> > > > The "slavepci_passthru" driver is useful in the scenario described
> > > > below to implement a simplified passthru when the host CPU does not
> > > > support IOMMU and one is interested only in pci target-mode (slave
> > > > devices).
> > > Let's CC Alex, who worked on the most recent framework for something 
> > > related to that (VFIO).
> > > 
> > > > Embedded system CPUs (e.g. Atom, AMD G-Series) often lack the VT-d
> > > > extensions (IOMMU) needed to pass PCI peripherals through to the
> > > > guest machine (i.e. the PCI pass-through feature cannot be used).
> > > > 
> > > > If one is only interested in using the PCI board as a PCI target
> > > > (slave device), this driver mmap()s the host PCI BARs into the guest
> > > > through a virtual PCI device.
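
(For illustration, the core of that idea is small: mmap() the physical BAR on
the host and hand the pointer to QEMU as the RAM backing of a guest-visible
BAR. The sketch below is not the submitted patch; the state type and its
fields, SlavePassthruState, bar_mr[], baseaddr[] and size[], are assumptions,
while memory_region_init_ram_ptr() and pci_register_bar() are real QEMU APIs.)

    /* Sketch only -- not the submitted patch.  baseaddr[]/size[] would be
     * filled from the qdev properties shown in the usage example below. */
    #include "qemu/osdep.h"
    #include "qemu/error-report.h"
    #include "hw/pci/pci.h"
    #include <sys/mman.h>

    typedef struct SlavePassthruState {
        PCIDevice parent_obj;
        MemoryRegion bar_mr[4];
        uint64_t baseaddr[4];   /* host physical BAR addresses (properties) */
        uint64_t size[4];       /* BAR sizes (properties) */
    } SlavePassthruState;

    static void slavepassthru_map_bar(SlavePassthruState *s, int bar)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        void *p;

        if (fd < 0) {
            error_report("slavepassthru: cannot open /dev/mem: %s",
                         strerror(errno));
            return;
        }
        p = mmap(NULL, s->size[bar], PROT_READ | PROT_WRITE, MAP_SHARED,
                 fd, s->baseaddr[bar]);
        close(fd);                       /* the mapping survives the close */
        if (p == MAP_FAILED) {
            error_report("slavepassthru: mmap of BAR %d failed: %s",
                         bar, strerror(errno));
            return;
        }
        /* Hand the host mapping to QEMU as the RAM backing of a guest BAR. */
        memory_region_init_ram_ptr(&s->bar_mr[bar], OBJECT(s),
                                   "slavepassthru-bar", s->size[bar], p);
        pci_register_bar(PCI_DEVICE(s), bar, PCI_BASE_ADDRESS_SPACE_MEMORY,
                         &s->bar_mr[bar]);
    }
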
> > What exactly do you mean by pci-target/slave device?  Does this mean
> > that the device is not DMA capable, ie. cannot enable BusMaster?
> 
> Yes, exactly. Our approach can be used ONLY if one is NOT interested in
> DMA capability (i.e. it is not possible to enable BusMaster).
> > > > This is useful in our case for debugging, via the qemu gdbserver
> > > > facility (i.e. the '-s' option in qemu), a system running a barebone
> > > > executable.
> > > > 
> > > > Currently the driver assumes the custom PCI card has four 32-bit BARs
> > > > to be mapped (in the current patch this is mandatory).
> > > > 
> > > > HowTo:
> > > > To use the new driver one shall:
> > > > - define two environment variables assigning the VID and DID to
> > > >    associate with the guest PCI card
> > > > - give the host PCI BAR addresses to map into the guest.
> > > > 
> > > > Example Usage:
> > > > 
> > > > Let us suppose that we have on the host a slave PCI device with the
> > > > following 4 BARs (i.e. the output of lspci -v -s YOUR-CARD | grep Memory):
> > > >    Memory at db800000 (32-bit, non-prefetchable) [size=4K]
> > > >    Memory at db900000 (32-bit, non-prefetchable) [size=8K]
> > > >    Memory at dba00000 (32-bit, non-prefetchable) [size=4K]
> > > >    Memory at dbb00000 (32-bit, non-prefetchable) [size=4K]
> > > > 
> > > > We can map these BARs into a guest PCI device with VID=0xe33e and DID=0x000a using:
> > > > 
> > > > SLAVEPASSTHRU_VID="0xe33e" SLAVEPASSTHRU_DID="0xa" qemu-system-x86_64 \
> > > >    YOUR-SET-OF-FLAGS \
> > > >    -device \
> > > >    slavepassthru,size1=4096,baseaddr1=0xdb800000,size2=8192,baseaddr2=0xdb900000,size3=4096,baseaddr3=0xdba00000,size4=4096,baseaddr4=0xdbb00000
> > > > 
> > > > Please note that if your device has fewer than four BARs you can give
> > > > the same size and base address to the unused BARs.
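
(Continuing the sketch above, the VID/DID environment variables and the
per-BAR properties from this usage example could be consumed at realize time
roughly as follows; pci_config_set_vendor_id()/pci_config_set_device_id() are
real QEMU helpers, while the SLAVEPASSTHRU() cast macro and the property
plumbing are assumptions.)

    static void slavepassthru_realize(PCIDevice *pdev, Error **errp)
    {
        SlavePassthruState *s = SLAVEPASSTHRU(pdev);   /* cast macro assumed */
        const char *vid = getenv("SLAVEPASSTHRU_VID");
        const char *did = getenv("SLAVEPASSTHRU_DID");
        int i;

        if (vid) {
            pci_config_set_vendor_id(pdev->config, strtoul(vid, NULL, 0));
        }
        if (did) {
            pci_config_set_device_id(pdev->config, strtoul(did, NULL, 0));
        }

        /* size1..size4 / baseaddr1..baseaddr4 arrive as qdev properties and
         * are assumed to have been stored in s->size[] / s->baseaddr[]. */
        for (i = 0; i < 4; i++) {
            slavepassthru_map_bar(s, i);
        }
    }
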
> > Those are some pretty serious usage restrictions and using /dev/mem is
> > really not practical.  The resource files in pci-sysfs would even be a
> > better option.
> Ours was a quick hack to fulfill our needs; the approach via sysfs is
> of course the right one, and we would implement it if this patch is of
> interest.
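
(For reference, the sysfs route would replace the /dev/mem mapping with
something like the sketch below: mmap() the per-BAR resourceN file under
/sys/bus/pci/devices/<BDF>/, and enable memory decoding by writing 1 to the
device's "enable" attribute. The BDF is a placeholder and error handling is
minimal.)

    #include <fcntl.h>
    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map BAR <bar> of the device at <bdf> through pci-sysfs. */
    static void *map_bar_via_sysfs(const char *bdf, int bar, size_t size)
    {
        char path[PATH_MAX];
        void *p;
        int fd;

        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/resource%d", bdf, bar);
        fd = open(path, O_RDWR | O_SYNC);
        if (fd < 0) {
            return NULL;
        }
        p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return p == MAP_FAILED ? NULL : p;
    }
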
> 
> > I didn't see how IO and MMIO BARs get enabled on the
> > physical device or whether you support any kind of interrupt scheme.
> In our case the IO space is not used.
> The MMIO space is already enabled.
> 
> Our custom board does not use any interrupts, and our quick hack
> did not implement them.
> >    I
> > had never really intended QEMU use of this, but you might want to
> > consider vfio no-iommu mode:
> > 
> > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/vfio.c?id=03a76b60f8ba27974e2d252bc555d2c103420e15
> > 
> > Using this taints the kernel, but maybe that's not something you mind if
> > you're already letting QEMU access /dev/mem.  The QEMU vfio-pci driver
> > would need to be modified to use the new device and of course it
> > wouldn't have IOMMU translation capabilities.  That means that the
> > BusMaster bit should be protected and MSI/X capabilities should be hidden
> > from the VM.  It seems more flexible and featureful than what you have
> > here.  Thanks,
> 
> I was not aware of this interesting patch; I will study it to see whether
> it fits our use case.
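
(For reference, a minimal no-IOMMU VFIO user on the host looks roughly like
the sketch below, assuming a kernel with the patch above, vfio-pci bound to
the device, and the module loaded with enable_unsafe_noiommu_mode=1 so that
the group appears as /dev/vfio/noiommu-N. The group number and BDF are
placeholders and error checking is omitted.)

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/vfio.h>

    int main(void)
    {
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/noiommu-0", O_RDWR);  /* placeholder */
        struct vfio_region_info reg = {
            .argsz = sizeof(reg),
            .index = VFIO_PCI_BAR0_REGION_INDEX,
        };
        int device;
        void *bar0;

        /* Attach the group to a container and select the no-IOMMU backend. */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_NOIOMMU_IOMMU);

        /* Get a device fd by BDF (placeholder) and look up BAR0. */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
        ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

        /* BARs are mmap()ed through the device fd at the reported offset. */
        bar0 = mmap(NULL, reg.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    device, reg.offset);
        (void)bar0;
        return 0;
    }
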
> 
> Just for information: you mean "taint" in the sense that security is
> affected, not that there are licensing issues, am I right?

Yes, it's only tainting for security; the driver is part of the
standard Linux kernel.  There's really no way to guarantee that we can
prevent the user from enabling BusMaster on a device capable of DMA:
even if we trapped access to that config space bit, devices often have
back doors to PCI config space, so it's best just to assume DMA is
possible and mark the host kernel as vulnerable.  Thanks,

Alex


