
Re: [Qemu-devel] [PATCH 03/10] intel-iommu: add iommu lock


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH 03/10] intel-iommu: add iommu lock
Date: Fri, 27 Apr 2018 17:53:52 +0800
User-agent: Mutt/1.9.1 (2017-09-22)

On Fri, Apr 27, 2018 at 07:19:25AM +0000, Tian, Kevin wrote:
> > From: Peter Xu
> > Sent: Friday, April 27, 2018 2:26 PM
> > 
> > On Fri, Apr 27, 2018 at 01:13:02PM +0800, Jason Wang wrote:
> > >
> > >
> > > On 2018/04/25 12:51, Peter Xu wrote:
> > > > Add a per-iommu big lock to protect IOMMU status.  Currently the only
> > > > thing to be protected is the IOTLB cache, since that can be accessed
> > > > even without BQL, e.g., in IO dataplane.
> > > >
> > > > Note that device page tables should not need any protection.  The safety
> > > > of that should be provided by guest OS.  E.g., when a page entry is
> > > > freed, the guest OS should be responsible to make sure that no device
> > > > will be using that page any more.
> 
> The device page table definitely doesn't require protection, since it
> is an in-memory structure managed by the guest. However, the reason
> above is not accurate - there is no way the guest OS can make sure no
> device uses a non-present page entry; otherwise it wouldn't need the
> virtual IOMMU to protect itself. There could be bogus/malicious
> drivers which may well program the device to attempt exactly that.

How about this:

  Note that we don't need to protect device page tables since they are
  fully controlled by the guest kernel.  However there is still the
  possibility that malicious drivers will program the device to
  disobey that rule.  In that case QEMU can't really do anything
  useful; instead, the guest itself will be responsible for all the
  consequences.

> 
> > > >
> > > > Reported-by: Fam Zheng <address@hidden>
> > > > Signed-off-by: Peter Xu <address@hidden>
> > > > ---
> > > >   include/hw/i386/intel_iommu.h |  8 ++++++++
> > > >   hw/i386/intel_iommu.c         | 31 +++++++++++++++++++++++++++++--
> > > >   2 files changed, 37 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> > > > index 220697253f..1a8ba8e415 100644
> > > > --- a/include/hw/i386/intel_iommu.h
> > > > +++ b/include/hw/i386/intel_iommu.h
> > > > @@ -262,6 +262,14 @@ struct IntelIOMMUState {
> > > >       uint8_t w1cmask[DMAR_REG_SIZE]; /* RW1C(Write 1 to Clear) bytes */
> > > >       uint8_t womask[DMAR_REG_SIZE];  /* WO (write only - read returns 0) */
> > > >       uint32_t version;
> > > > +    /*
> > > > +     * Protects IOMMU states in general.  Normally we don't need to
> > > > +     * take this lock when we are with BQL held.  However we have code
> > > > +     * paths that may run even without BQL.  In those cases, we need
> > > > +     * to take the lock when we have access to IOMMU state
> > > > +     * information, e.g., the IOTLB.
> 
> better if you can whitelist those paths here to clarify.

Sure. Basically it's the translation path (vtd_iommu_translate).
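
To make that concrete, here is a minimal sketch of the locking pattern
(illustrative only, not the actual patch; apart from QemuMutex and the
glib hash table calls, all names below are made up for this example):

    #include "qemu/osdep.h"
    #include "qemu/thread.h"

    /* Stand-in for IntelIOMMUState; only the parts relevant to the lock. */
    typedef struct DemoIOMMUState {
        QemuMutex iommu_lock;        /* protects the cached state below */
        GHashTable *iotlb;           /* key -> cached translation */
        uint32_t context_cache_gen;  /* bumped on context invalidation, see below */
    } DemoIOMMUState;

    static void demo_iommu_init(DemoIOMMUState *s)
    {
        qemu_mutex_init(&s->iommu_lock);
        s->iotlb = g_hash_table_new(g_direct_hash, g_direct_equal);
        s->context_cache_gen = 1;
    }

    /* Translate path: may run without the BQL (e.g. vhost IOTLB requests),
     * so the cache lookup is done under iommu_lock. */
    static gpointer demo_iotlb_lookup(DemoIOMMUState *s, gpointer key)
    {
        gpointer val;

        qemu_mutex_lock(&s->iommu_lock);
        val = g_hash_table_lookup(s->iotlb, key);
        qemu_mutex_unlock(&s->iommu_lock);

        return val;
    }

    /* Invalidation path: runs with the BQL held, but still takes the same
     * lock so it cannot race with a lock-only translate. */
    static void demo_iotlb_flush(DemoIOMMUState *s)
    {
        qemu_mutex_lock(&s->iommu_lock);
        g_hash_table_remove_all(s->iotlb);
        qemu_mutex_unlock(&s->iommu_lock);
    }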

> 
> > > > +     */
> > > > +    QemuMutex iommu_lock;
> > >
> > > Some questions:
> > >
> > > 1) Do we need to protect context cache too?
> > 
> > IMHO the context cache entry should work even without the lock.  That's
> > a bit tricky since we have two cases in which this cache will be updated:
> > 
> >   (1) first translation of the address space of a device
> >   (2) invalidation of context entries
> > 
> > For (2) IMHO we don't need to worry, since the guest OS should be
> > controlling that part - say, the device should not be doing any
> > translation (DMA operations) when the context entry is invalidated.
> 
> again, the above cannot be assumed.

Yeah, but in that case IMHO it's the same as with the page tables - we
can't really control anything, and the guest itself will be responsible
for any undefined consequences.

Anyway, let me protect that field too in my next version.
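
To illustrate the direction (again just a sketch, reusing the demo struct
above; the real change may well look different):

    /* The same iommu_lock also guards the cached context state, e.g. by
     * bumping a generation counter so stale per-device context entries are
     * ignored by the next (locked) lookup. */
    static void demo_context_cache_invalidate(DemoIOMMUState *s)
    {
        qemu_mutex_lock(&s->iommu_lock);
        s->context_cache_gen++;            /* declared in the sketch above */
        g_hash_table_remove_all(s->iotlb); /* drop IOTLB entries derived from it */
        qemu_mutex_unlock(&s->iommu_lock);
    }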

Thanks,

-- 
Peter Xu


