From: Yu Zhang
Subject: Re: [Qemu-devel] [PATCH v1 1/3] intel-iommu: differentiate host address width from IOVA address width.
Date: Mon, 12 Nov 2018 17:28:25 +0800
User-agent: NeoMutt/20180622-66-b94505

On Mon, Nov 12, 2018 at 04:15:46PM +0800, Peter Xu wrote:
> On Fri, Nov 09, 2018 at 07:49:45PM +0800, Yu Zhang wrote:
> > Currently, vIOMMU uses the value of the IOVA address width, instead of
> > the host address width (HAW), to calculate the number of reserved bits
> > in data structures such as root entries, context entries, and entries
> > of DMA paging structures, etc.
> > 
> > However, the values of the IOVA address width and of the HAW may not be
> > equal. For example, a 48-bit IOVA can only be mapped to host addresses
> > no wider than 46 bits. Using 48 instead of 46 to calculate the reserved
> > bits may result in an invalid IOVA being accepted.
> > 
> > To fix this, a new field, haw_bits, is introduced in struct
> > IntelIOMMUState, whose value is initialized based on the maximum
> > physical address set for the guest CPU. Also, definitions such as
> > VTD_HOST_AW_39/48BIT etc. are renamed for clarity.
> 
> IIRC I raised this question some time ago somewhere but no one
> remembered to follow that up.  Thanks for fixing it.
> 
> It looks mostly good to me, only one tiny comment below...
> 
> [...]
> 
> > @@ -887,6 +887,7 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
> >      uint64_t iova = start;
> >      uint64_t iova_next;
> >      int ret = 0;
> > +    uint8_t haw = info->as->iommu_state->haw_bits;
> 
> For now vtd_page_walk_info->aw_bits caches the GAW information and we
> use a single vtd_page_walk_info during one page walk, maybe we can
> also do the same for HAW instead of fetching it every time here from
> info->as->iommu_state->haw_bits?

Thank you, Peter. And yes, using haw_bits in vtd_page_walk_info is better. :)

> 
> Regards,
> 
> -- 
> Peter Xu
> 

B.R.
Yu


