On Wed, 15 Jun 2022 at 17:43, Richard Henderson
<richard.henderson@linaro.org> wrote:
The bug is an uninitialized memory read, along the translate_fail
path, which results in garbage being read from iotlb_to_section,
which can lead to a crash in io_readx/io_writex.

The bug may be fixed by writing any value with zero
in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
the xlat'ed address returns io_mem_unassigned, as desired by the
translate_fail path.

It is most useful to record the original physical page address,
which will eventually be logged by memory_region_access_valid
when the access is rejected by unassigned_mem_accepts.
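To make the masking argument concrete: iotlb_to_section() reuses the
bits of the recorded value that lie below the page (i.e. in
~TARGET_PAGE_MASK) as an index into the dispatch's section table, and
index 0 is the unassigned section. Below is a minimal standalone
sketch of that indexing scheme; the TOY_ names and the hard-coded
12-bit page size are illustrative stand-ins, not the real QEMU code
(the real lookup lives in softmmu/physmem.c):

#include <stdint.h>
#include <stdio.h>

#define TOY_PAGE_BITS 12
#define TOY_PAGE_MASK (~(uint64_t)0 << TOY_PAGE_BITS)

/* Toy stand-in for iotlb_to_section(): only the masking matters here.
 * Section index 0 plays the role of io_mem_unassigned. */
static unsigned toy_iotlb_section_index(uint64_t xlat)
{
    return xlat & ~TOY_PAGE_MASK;
}

int main(void)
{
    uint64_t garbage = 0xdeadbeefcafef00dULL; /* uninitialized *xlat */
    uint64_t page_addr = 0x40001000ULL;       /* page-aligned phys addr */

    /* Garbage low bits select an arbitrary (possibly out-of-bounds)
     * entry in the section table, hence the crash in io_readx/io_writex. */
    printf("garbage   -> section index %u\n", toy_iotlb_section_index(garbage));

    /* Any value with zero in ~TOY_PAGE_MASK selects index 0: unassigned. */
    printf("page addr -> section index %u\n", toy_iotlb_section_index(page_addr));
    return 0;
}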
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
softmmu/physmem.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 657841eed0..fb0f0709b5 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
     AddressSpaceDispatch *d =
         qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
+    /* Record the original phys page for use by the translate_fail path. */
+    *xlat = addr;
+
There's no doc comment for address_space_translate_for_iotlb(),
so there's nothing that says explicitly that addr is obliged
to be page-aligned, although it happens that its only caller
does pass a page-aligned address. Were we already implicitly
requiring a page-aligned address here, or does assigning addr
to *xlat without masking impose a new requirement?
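For concreteness, the masked alternative in question would be
something like the following (hypothetical, not what the patch does):

    /* Mask out the sub-page bits so that even a non-page-aligned addr
     * would still index section 0 (io_mem_unassigned) on the
     * translate_fail path. */
    *xlat = addr & TARGET_PAGE_MASK;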