From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH for-2.9 2/2] intel_iommu: extend supported guest aw to 48 bits
Date: Tue, 13 Dec 2016 13:24:29 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Mon, Dec 12, 2016 at 08:51:50PM -0700, Alex Williamson wrote:

[...]

> > > I'm not sure how the vIOMMU supporting 39 bits or 48 bits is directly
> > > relevant to vfio; we're not sharing page tables.  There is already a
> > > case today, without a vIOMMU, where you can make a guest with more
> > > guest physical address space than the hardware IOMMU supports by
> > > overcommitting system memory.  Generally this quickly resolves itself
> > > once we start pinning pages, since the physical address width of the
> > > IOMMU is typically the same as the physical address width of the host
> > > system (i.e. we exhaust host memory).
> > 
> > Hi, Alex,
> > 
> > Does "hardware IOMMU" here mean the IOMMU IOVA address space width?
> > For example, if the guest has a 48-bit physical address width (without
> > a vIOMMU), but the host hardware IOMMU only supports 39 bits for its
> > IOVA address space, could device assignment work in this case?
> 
> The current usage depends entirely on what the user (VM) tries to map.
> You could expose a vIOMMU with a 64-bit address width, but the moment
> you try to perform a DMA mapping with an IOVA beyond bit 39 (if that's
> the host IOMMU address width), the ioctl will fail and the VM will
> abort.  IOW, you can claim whatever vIOMMU address width you want, but
> if you lay out guest memory or devices in a way that actually requires
> IOVA mappings beyond the host's capabilities, you're going to abort.
> Likewise, without a vIOMMU, if the guest memory layout is sufficiently
> sparse to require such IOVAs, you're going to abort.  Thanks,
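
To make the failure mode concrete, here is a minimal userspace sketch of
the VFIO type1 mapping path described above. It assumes container_fd is
an already-configured /dev/vfio/vfio container (group attached, TYPE1
IOMMU model selected); the 1ULL << 40 IOVA is an arbitrary value chosen
to sit above a 39-bit host IOMMU width:

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Try to map one page at an IOVA above bit 39.  On a host whose
 * IOMMU only supports a 39-bit IOVA space, VFIO_IOMMU_MAP_DMA is
 * expected to fail, and QEMU treats such a failure as fatal. */
static int map_high_iova(int container_fd)
{
    struct vfio_iommu_type1_dma_map map;
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (buf == MAP_FAILED) {
        return -1;
    }

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.vaddr = (uintptr_t)buf;
    map.iova  = 1ULL << 40;   /* beyond a 39-bit host IOMMU width */
    map.size  = 4096;

    if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map) < 0) {
        perror("VFIO_IOMMU_MAP_DMA");
        return -1;
    }
    return 0;
}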

Thanks for the explanation. I got the point.

However, should we allow guest behavior to affect the hypervisor? In
this case, if the guest maps an IOVA range above 39 bits (assuming the
vIOMMU declares a 48-bit address width), the VM will crash. How about
we shrink the vIOMMU address width to 39 bits during boot if we detect
that assigned devices are configured? IMHO, no matter what the guest
does, the hypervisor should keep the guest alive: emulation of the
guest hardware should not be stopped by guest behavior. If any
operation in the guest can bring the hypervisor down, isn't that a
bug?
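
A hypothetical sketch of the shrinking idea (the names below are made
up for illustration, not existing QEMU code): at machine init, if any
assigned device is present and the host IOMMU width is smaller than
what the vIOMMU would otherwise declare, advertise the host width
instead, so the guest can never program an IOVA the host cannot map:

#include <stdbool.h>

/* Hypothetical helper: decide the address width the vIOMMU should
 * advertise.  host_aw_bits would come from querying the host IOMMU
 * (e.g. the MGAW field of the VT-d capability register);
 * wanted_aw_bits is what the vIOMMU would otherwise declare
 * (48 with this patch applied). */
static int vtd_effective_aw(int wanted_aw_bits, int host_aw_bits,
                            bool have_assigned_device)
{
    if (have_assigned_device && host_aw_bits < wanted_aw_bits) {
        /* Clamp to the host width so the guest cannot create a
         * mapping that would later make VFIO_IOMMU_MAP_DMA fail
         * and abort the VM. */
        return host_aw_bits;
    }
    return wanted_aw_bits;
}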

Thanks,

-- peterx


