qemu-devel

Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices
Date: Mon, 23 Jan 2017 11:03:08 -0700

On Mon, 23 Jan 2017 11:34:29 +0800
Peter Xu <address@hidden> wrote:

> On Mon, Jan 23, 2017 at 09:55:39AM +0800, Jason Wang wrote:
> > 
> > 
> > On 01/22/2017 17:04, Peter Xu wrote:  
> > >On Sun, Jan 22, 2017 at 04:08:04PM +0800, Jason Wang wrote:
> > >
> > >[...]
> > >  
> > >>>+static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
> > >>>+                                           uint16_t domain_id, hwaddr addr,
> > >>>+                                           uint8_t am)
> > >>>+{
> > >>>+    IntelIOMMUNotifierNode *node;
> > >>>+    VTDContextEntry ce;
> > >>>+    int ret;
> > >>>+
> > >>>+    QLIST_FOREACH(node, &(s->notifiers_list), next) {
> > >>>+        VTDAddressSpace *vtd_as = node->vtd_as;
> > >>>+        ret = vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> > >>>+                                       vtd_as->devfn, &ce);
> > >>>+        if (!ret && domain_id == VTD_CONTEXT_ENTRY_DID(ce.hi)) {
> > >>>+            vtd_page_walk(&ce, addr, addr + (1 << am) * VTD_PAGE_SIZE,
> > >>>+                          vtd_page_invalidate_notify_hook,
> > >>>+                          (void *)&vtd_as->iommu, true);  
> > >>Why not simply trigger the notifier here? (or is this vfio required?)  
> > >Because we may only want to notify part of the region - we only have
> > >the mask here, not the exact size.
> > >
> > >Consider this: the guest (with caching mode) maps 12K of memory (3 4K
> > >pages), but the mask will be extended to 16K in the guest. In that
> > >case, we need to explicitly walk the page entries to know that the
> > >4th page should not be notified.  
> > 
> > I see. Then it is required only by vfio; I think we can add a fast path
> > for !CM in this case by triggering the notifier directly.  
> 
> I noted this down (to be investigated further, it's on my todo list),
> but I don't know whether this can work, since I think it is still
> legal for the guest to merge more than one PSI into one. For example,
> I don't know whether the following is legal:
> 
> - guest invalidate page (0, 4k)
> - guest map new page (4k, 8k)
> - guest send single PSI of (0, 8k)
> 
> In that case, it contains both a map and an unmap, and yet it doesn't
> seem to disobey the spec either?
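
(For reference, a minimal sketch of the PSI address-mask arithmetic being
discussed; the helper below is made up for illustration and is not part of
the patch.)

    /* Illustrative only: a VT-d page-selective invalidation (PSI) covers
     * 2^AM 4K pages, so a request for 3 pages (12K) is rounded up to
     * AM=2, i.e. 16K.  That extra page is why the code walks the page
     * tables instead of notifying the whole masked range blindly. */
    #include <stdint.h>
    #include <stdio.h>

    #define VTD_PAGE_SHIFT  12
    #define VTD_PAGE_SIZE   (1ULL << VTD_PAGE_SHIFT)

    /* Smallest AM whose 2^AM pages cover 'npages' (hypothetical helper). */
    static uint8_t psi_am_for_pages(uint64_t npages)
    {
        uint8_t am = 0;

        while ((1ULL << am) < npages) {
            am++;
        }
        return am;
    }

    int main(void)
    {
        uint64_t npages = 3;                    /* guest maps 12K = 3 pages */
        uint8_t am = psi_am_for_pages(npages);  /* AM = 2                   */
        uint64_t covered = (1ULL << am) * VTD_PAGE_SIZE;

        printf("pages=%u am=%u covered=%uK\n",
               (unsigned)npages, (unsigned)am, (unsigned)(covered >> 10));
        return 0;
    }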

The topic of mapping and invalidation granularity also makes me
slightly concerned with the abstraction we use for the type1 IOMMU
backend.  With the "v2" type1 configuration we currently use in QEMU,
the user may only unmap with the same minimum granularity with which
the original mapping was created.  For instance, if an iommu notifier
map request reaches vfio with an 8k range, the resulting mapping can
only be removed by an invalidation covering the full range.  Trying to
bisect that original mapping by invalidating only 4k of the range will
generate an error.
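
(A minimal sketch of the v2 semantics described above, assuming an
already-configured VFIO container fd; illustrative only, the function
name is made up.)

    /* Sketch: with the type1 "v2" backend, an unmap must cover whole
     * mappings as originally created; bisecting one returns an error.
     * Assumes 'container' is a configured VFIO container fd and 'buf'
     * points to 8K of page-aligned memory. */
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int map_8k_then_try_4k_unmap(int container, void *buf, uint64_t iova)
    {
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)buf,
            .iova  = iova,
            .size  = 8 * 1024,          /* one 8K mapping */
        };
        struct vfio_iommu_type1_dma_unmap unmap = {
            .argsz = sizeof(unmap),
            .iova  = iova,
            .size  = 4 * 1024,          /* try to bisect it */
        };

        if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map)) {
            return -1;
        }
        /* With v2 semantics this fails: it covers only half of the
         * original 8K mapping.  Unmapping the full 8K would succeed. */
        return ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);
    }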

I would think (but please confirm) that this works when we're only
tracking mappings generated by the guest OS.  If the guest OS maps with
4k pages, we get map notifies for each of those 4k pages.  If they use
2MB pages, we get 2MB ranges, and invalidations will come in the same
granularity.

An area of concern, though, is the replay mechanism in QEMU.  I'll need
to look for it in the code, but replaying an IOMMU domain into a new
container *cannot* coalesce mappings, or else it limits the granularity
with which we can later accept unmaps.  Take for instance a guest that
has mapped a contiguous 2MB range with 4K pages.  It can unmap any 4K
page within that range.  However, if vfio gets a single 2MB mapping
rather than 512 4K mappings, then the host IOMMU may use a hugepage
mapping, where our granularity is now 2MB.  Thanks,

Alex
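
(Below is a hypothetical per-page replay loop sketching the coalescing
point above; it is not QEMU's actual replay code, and the callback types
are made up for illustration.)

    /* Sketch: when replaying an IOMMU domain into a new container,
     * emitting one notification per guest page keeps the unmap
     * granularity at 4K.  Coalescing the same 2MB region into a single
     * map notification would let the host IOMMU back it with a
     * hugepage, after which 4K unmaps can no longer be honored. */
    #include <stdbool.h>
    #include <stdint.h>

    #define GUEST_PAGE_SIZE 4096ULL

    typedef struct {
        uint64_t gpa;
        bool     valid;
    } MapEntry;

    typedef MapEntry (*translate_fn)(uint64_t iova, void *opaque);
    typedef void (*notify_fn)(uint64_t iova, uint64_t gpa, uint64_t size,
                              void *opaque);

    static void replay_range(uint64_t start, uint64_t end,
                             translate_fn translate, notify_fn notify,
                             void *opaque)
    {
        uint64_t iova;

        for (iova = start; iova < end; iova += GUEST_PAGE_SIZE) {
            MapEntry e = translate(iova, opaque);

            if (e.valid) {
                /* One MAP notification per 4K page, never a merged range. */
                notify(iova, e.gpa, GUEST_PAGE_SIZE, opaque);
            }
        }
    }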


