[PATCH v2 08/54] accel/tcg: Flush entire tlb when a masked range wraps
From: Richard Henderson
Subject: [PATCH v2 08/54] accel/tcg: Flush entire tlb when a masked range wraps
Date: Thu, 14 Nov 2024 08:00:44 -0800
We expect masked address spaces to be quite large, e.g. 56 bits
for AArch64 top-byte-ignore mode. We do not expect addr+len to
wrap around, but it is possible with AArch64 guest flush range
instructions.

Convert this unlikely case to a full tlb flush. This can simplify
the subroutines actually performing the range flush.
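For illustration, a minimal standalone sketch of the wrap test (not part
of the patch itself; the helper name range_wraps_in_bits is hypothetical,
and plain 64 stands in for TARGET_LONG_BITS). The idea is that if the
first and last byte of [addr, addr + len) differ in any bit at or above
`bits`, the range crosses a 2^bits boundary:

/* Hypothetical standalone illustration; not QEMU code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A range [addr, addr + len) wraps within a bits-wide address space
 * when its first and last byte differ in any bit at position >= bits. */
static bool range_wraps_in_bits(uint64_t addr, uint64_t len, unsigned bits)
{
    return bits < 64 && ((addr ^ (addr + len - 1)) >> bits) != 0;
}

int main(void)
{
    /* A 4KiB range well inside a 56-bit space: does not wrap (prints 0). */
    printf("%d\n", range_wraps_in_bits(0x00ff000000001000ull, 0x1000, 56));
    /* A range crossing the 2^56 boundary: wraps (prints 1). */
    printf("%d\n", range_wraps_in_bits(0x00ffffffffffff00ull, 0x200, 56));
    return 0;
}

With bits == 56 this matches the AArch64 top-byte-ignore case mentioned
above; when the test fires, the patch falls back to a full flush via
tlb_flush_by_mmuidx() rather than teaching the range-flush subroutines
about wrap-around.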
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5510f40333..31c45a6213 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -802,6 +802,11 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
         tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
         return;
     }
+    /* If addr+len wraps in len bits, fall back to full flush. */
+    if (bits < TARGET_LONG_BITS && ((addr ^ (addr + len - 1)) >> bits)) {
+        tlb_flush_by_mmuidx(cpu, idxmap);
+        return;
+    }
 
     /* This should already be page aligned */
     d.addr = addr & TARGET_PAGE_MASK;
@@ -838,6 +843,11 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
         tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
         return;
     }
+    /* If addr+len wraps in len bits, fall back to full flush. */
+    if (bits < TARGET_LONG_BITS && ((addr ^ (addr + len - 1)) >> bits)) {
+        tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
+        return;
+    }
 
     /* This should already be page aligned */
     d.addr = addr & TARGET_PAGE_MASK;
--
2.43.0