Re: [PATCH v4 11/19] hw/arm/smmu-common: Support nested translation
From: Mostafa Saleh
Subject: Re: [PATCH v4 11/19] hw/arm/smmu-common: Support nested translation
Date: Tue, 9 Jul 2024 07:15:37 +0000
Hi Jean,
On Thu, Jul 04, 2024 at 07:31:10PM +0100, Jean-Philippe Brucker wrote:
> On Mon, Jul 01, 2024 at 11:02:33AM +0000, Mostafa Saleh wrote:
> > When nested translation is requested, do the following:
> >
> > - Translate stage-1 table address IPA into PA through stage-2.
> > - Translate stage-1 table walk output (IPA) through stage-2.
> > - Create a single TLB entry from stage-1 and stage-2 translations
> > using logic introduced before.
> >
> > For stage-1 table translation, the spec (ARM IHI 0070 F.b) says in:
> > 7.3.12 F_WALK_EABT:
> > Translation of an IPA for Stage 1 descriptor fetch:
> > S2 == 1 (stage 2), CLASS == TT
> > So, F_WALK_EABT is used, which propagates to CLASS == TT.
>
> I don't think the text applies to this case, the context is:
>
> A stage 2 table walk can encounter EABT accessing the physical
> address of a stage 2 descriptor, because of a:
> [...]
> * Translation of an IPA for Stage 1 descriptor fetch
>
> So EABT is when failing to load the stage 2 descriptor. I can't find
> exact text for this case but looking at the flowchart 15.5, I think
> this should be F_TRANSLATION/F_ADDR_SIZE/F_PERMISSION/F_ACCESS with
> CLASS=TT and S2.
I see, thanks for clarifying. I guess that is another argument for
propagating more info in SMMUPTWEventInfo so we can set the fault CLASS
correctly.
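For instance, something along these lines (a minimal sketch; the enum and
the fault_class field are illustrative placeholders, not what this series
currently defines):

    /* Sketch: CLASS values as named by the SMMUv3 spec (CD, TT, IN). */
    typedef enum SMMUTranslationClass {
        SMMU_CLASS_CD,   /* CD fetch */
        SMMU_CLASS_TT,   /* translation table descriptor fetch */
        SMMU_CLASS_IN,   /* input address to the translation */
    } SMMUTranslationClass;

    typedef struct SMMUPTWEventInfo {
        SMMUStage stage;
        SMMUPTWEventType type;
        SMMUTranslationClass fault_class; /* which CLASS to report */
        dma_addr_t addr; /* fetched address that induced an abort, if any */
    } SMMUPTWEventInfo;

Then the error path in translate_table_addr_ipa() would set
info->fault_class = SMMU_CLASS_TT next to info->stage, and the SMMUv3
event code could copy that into the event record rather than deriving
CLASS from the event type alone.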
Thanks,
Mostafa
>
> Thanks,
> Jean
>
> >
> > smmu_ptw() has a new argument, SMMUState, which includes the TLB, as the
> > stage-1 table address can be cached there.
> >
> > Also, in smmu_ptw() a separate path is used for nesting to simplify the
> > code, although some logic could be combined.
> >
> > Signed-off-by: Mostafa Saleh <smostafa@google.com>
> > ---
> > hw/arm/smmu-common.c | 72 +++++++++++++++++++++++++++++++-----
> > include/hw/arm/smmu-common.h | 2 +-
> > 2 files changed, 64 insertions(+), 10 deletions(-)
> >
> > diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> > index 24b7d09e2b..71afd486ba 100644
> > --- a/hw/arm/smmu-common.c
> > +++ b/hw/arm/smmu-common.c
> > @@ -318,6 +318,38 @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
> > return NULL;
> > }
> >
> > +/* Translate stage-1 table address using stage-2 page table. */
> > +static inline int translate_table_addr_ipa(dma_addr_t *table_addr,
> > + SMMUTransCfg *cfg,
> > + SMMUPTWEventInfo *info,
> > + SMMUState *bs)
> > +{
> > + dma_addr_t addr = *table_addr;
> > + SMMUTLBEntry *cached_entry;
> > + int asid;
> > +
> > + /*
> > + * The translation table walks performed from TTB0 or TTB1 are always
> > + * performed in IPA space if stage 2 translations are enabled.
> > + */
> > + asid = cfg->asid;
> > + cfg->stage = SMMU_STAGE_2;
> > + cfg->asid = -1;
> > + cached_entry = smmu_translate(bs, cfg, addr, IOMMU_RO, info);
> > + cfg->asid = asid;
> > + cfg->stage = SMMU_NESTED;
> > +
> > + if (cached_entry) {
> > + *table_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
> > + return 0;
> > + }
> > +
> > + info->stage = SMMU_STAGE_2;
> > + info->type = SMMU_PTW_ERR_WALK_EABT;
> > + info->addr = addr;
> > + return -EINVAL;
> > +}
> > +
> > /**
> > * smmu_ptw_64_s1 - VMSAv8-64 Walk of the page tables for a given IOVA
> > * @cfg: translation config
> > @@ -333,7 +365,8 @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
> > */
> > static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
> > dma_addr_t iova, IOMMUAccessFlags perm,
> > - SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
> > + SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info,
> > + SMMUState *bs)
> > {
> > dma_addr_t baseaddr, indexmask;
> > SMMUStage stage = cfg->stage;
> > @@ -381,6 +414,11 @@ static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
> > goto error;
> > }
> > baseaddr = get_table_pte_address(pte, granule_sz);
> > + if (cfg->stage == SMMU_NESTED) {
> > + if (translate_table_addr_ipa(&baseaddr, cfg, info, bs)) {
> > + goto error;
> > + }
> > + }
> > level++;
> > continue;
> > } else if (is_page_pte(pte, level)) {
> > @@ -568,10 +606,8 @@ error:
> > * combine S1 and S2 TLB entries into a single entry.
> > * As a result the S1 entry is overridden with combined data.
> > */
> > -static void __attribute__((unused)) combine_tlb(SMMUTLBEntry *tlbe,
> > - SMMUTLBEntry *tlbe_s2,
> > - dma_addr_t iova,
> > - SMMUTransCfg *cfg)
> > +static void combine_tlb(SMMUTLBEntry *tlbe, SMMUTLBEntry *tlbe_s2,
> > + dma_addr_t iova, SMMUTransCfg *cfg)
> > {
> > if (tlbe_s2->entry.addr_mask < tlbe->entry.addr_mask) {
> > tlbe->entry.addr_mask = tlbe_s2->entry.addr_mask;
> > @@ -596,14 +632,19 @@ static void __attribute__((unused)) combine_tlb(SMMUTLBEntry *tlbe,
> > * @perm: tentative access type
> > * @tlbe: returned entry
> > * @info: ptw event handle
> > + * @bs: smmu state which includes TLB instance
> > *
> > * return 0 on success
> > */
> > int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
> > - SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
> > + SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info, SMMUState *bs)
> > {
> > + int ret;
> > + SMMUTLBEntry tlbe_s2;
> > + dma_addr_t ipa;
> > +
> > if (cfg->stage == SMMU_STAGE_1) {
> > - return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info);
> > + return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info, bs);
> > } else if (cfg->stage == SMMU_STAGE_2) {
> > /*
> > * If bypassing stage 1(or unimplemented), the input address is passed
> > @@ -621,7 +662,20 @@ int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
> > return smmu_ptw_64_s2(cfg, iova, perm, tlbe, info);
> > }
> >
> > - g_assert_not_reached();
> > + /* SMMU_NESTED. */
> > + ret = smmu_ptw_64_s1(cfg, iova, perm, tlbe, info, bs);
> > + if (ret) {
> > + return ret;
> > + }
> > +
> > + ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
> > + ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
> > + if (ret) {
> > + return ret;
> > + }
> > +
> > + combine_tlb(tlbe, &tlbe_s2, iova, cfg);
> > + return 0;
> > }
> >
> > SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
> > @@ -677,7 +731,7 @@ SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
> > }
> >
> > cached_entry = g_new0(SMMUTLBEntry, 1);
> > - status = smmu_ptw(cfg, aligned_addr, flag, cached_entry, info);
> > + status = smmu_ptw(cfg, aligned_addr, flag, cached_entry, info, bs);
> > if (status) {
> > g_free(cached_entry);
> > return NULL;
> > diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> > index 1db566d451..cf0fd3ec74 100644
> > --- a/include/hw/arm/smmu-common.h
> > +++ b/include/hw/arm/smmu-common.h
> > @@ -185,7 +185,7 @@ static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
> > * pair, according to @cfg translation config
> > */
> > int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
> > - SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info);
> > + SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info, SMMUState *bs);
> >
> >
> > /*
> > --
> > 2.45.2.803.g4e1b14247a-goog
> >