
Re: [Qemu-devel] [PATCH] memory: do not do out of bound notification


From: Yan Zhao
Subject: Re: [Qemu-devel] [PATCH] memory: do not do out of bound notification
Date: Wed, 19 Jun 2019 21:46:02 -0400
User-agent: Mutt/1.9.4 (2018-02-28)

hi Eric,
Thanks for your reply.

On Wed, Jun 19, 2019 at 09:17:41PM +0800, Auger Eric wrote:
> Hi Yan,
> 
> [+ Peter]
> 
> On 6/19/19 10:49 AM, Yan Zhao wrote:
> > Even if an entry overlaps with a notifier's range, the out-of-bound
> > part of the entry should not be mapped/unmapped.
> 
> I don't think the patch was based on master, as the trace at the very
> end is not part of the upstream code.
> > 
It's indeed based on the latest master branch, but I added a debug log
and forgot to remove it before sending out the patch. Sorry for that :)


> > This would cause a problem in the case below:
> > 1. initially there are two notifiers with ranges
> > 0-0xfedfffff and 0xfef00000-0xffffffffffffffff, and
> > IOVAs from 0x3c000000 - 0x3c1fffff are in the shadow page table.
> > 
> > 2. in vfio, memory_region_register_iommu_notifier() is followed by
> > memory_region_iommu_replay(), which first unmaps the whole address
> > space, then walks the vtd shadow page table and adds back all entries, e.g.
> > (1) for notifier 0-0xfedfffff,
> >     IOVAs from 0 - 0xffffffff get unmapped,
> >     and IOVAs from 0x3c000000 - 0x3c1fffff get mapped
> 
> While the patch looks sensible, the issue is that the notifier scope used
> in vtd_address_space_unmap() is not a valid mask (ctpop64(size) != 1).
> The size is then recomputed (using n = 64 - clz64(size) for the 1st
> notifier, or n = s->aw_bits for the 2nd), and so is the entry (which
> becomes 0 for the 2nd notifier), to get a proper alignment.
>
Maybe the size is calculated correctly, but the entry becoming 0 for the
2nd notifier is because of this line below?
 entry.iova = n->start & ~(size - 1);
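
To illustrate (a hypothetical walk-through with the 2nd notifier's numbers,
assuming the recomputed size is a power of two with n = s->aw_bits = 48):

    uint64_t start = 0xfef00000ULL;       /* n->start of the 2nd notifier */
    uint64_t size  = 1ULL << 48;          /* recomputed power-of-two size */
    uint64_t iova  = start & ~(size - 1); /* clears the low 48 bits -> 0 */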

> vtd_page_walk sends notifications per block or page (with a valid
> addr_mask), so it stays within the notifier's range.
> 
> Modifying the entry->iova/addr_mask again in memory_region_notify_one
> leads to unaligned start address / addr_mask. I don't think we want that.
>
If the notifier's start and end are aligned, and entry->iova/addr_mask are
aligned before the modification, then aren't the start address and
addr_mask still aligned after the modification?
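
For example (hypothetical numbers, with an aligned notifier end of
0xffffffff, i.e. 2^32 - 1, and an aligned entry crossing it):

    uint64_t iova = 0xfff00000ULL;  /* entry->iova, aligned to 2^20 */
    uint64_t mask = 0x1fffffULL;    /* entry->addr_mask, 2^21 - 1 */
    uint64_t end  = 0xffffffffULL;  /* notifier->end */
    if (iova + mask > end) {
        mask = end - iova;          /* 0xfffff == 2^20 - 1, still a mask */
    }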

> Can't we modify the vtd_address_space_unmap() implementation to split
> the invalidation into smaller chunks instead?
>
As this is an API, it cannot rely on the caller to ensure the entry is
within the notifier's address range. Do you think that's reasonable?
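
For reference, my rough reading of the chunked alternative (an untested
sketch with a hypothetical helper; clz64() is from qemu/host-utils.h, and
size is assumed non-zero with no 2^64 wraparound):

    static void unmap_in_chunks(uint64_t start, uint64_t size)
    {
        while (size) {
            /* largest chunk allowed by the alignment of start */
            uint64_t align = start ? (start & -start) : (1ULL << 63);
            /* largest power of two not exceeding the remaining size */
            uint64_t chunk = 1ULL << (63 - clz64(size));
            if (align < chunk) {
                chunk = align;
            }
            /* notify one aligned range [start, start + chunk - 1] here */
            start += chunk;
            size  -= chunk;
        }
    }

Even with such a split on the caller side, though, the API itself would
still accept out-of-range entries from other callers.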

Thanks
Yan


> Thanks
> 
> Eric
> 
> 
> > (2) for notifier 0xfef00000-0xffffffffffffffff,
> >     IOVAs from 0 - 0x7fffffffff get unmapped,
> >     but IOVAs from 0x3c000000 - 0x3c1fffff cannot get mapped back.
> > 
> > Signed-off-by: Yan Zhao <address@hidden>
> > ---
> >  memory.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> > 
> > diff --git a/memory.c b/memory.c
> > index 07c8315..a6b9da6 100644
> > --- a/memory.c
> > +++ b/memory.c
> > @@ -1948,6 +1948,14 @@ void memory_region_notify_one(IOMMUNotifier *notifier,
> >          return;
> >      }
> >  
> > +    if (entry->iova < notifier->start) {
> > +        entry->iova = notifier->start;
> > +    }
> > +
> > +    if (entry->iova + entry->addr_mask > notifier->end) {
> > +        entry->addr_mask = notifier->end - entry->iova;
> > +    }
> > +
> >      if (entry->perm & IOMMU_RW) {
> >          printf("map %lx %lx\n", entry->iova, entry->iova + entry->addr_mask);
> >          request_flags = IOMMU_NOTIFIER_MAP;
> 
> > 


