qemu-devel
Re: [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb


From: Peter Maydell
Subject: Re: [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb
Date: Tue, 21 Jun 2022 16:06:55 +0100

On Mon, 20 Jun 2022 at 17:54, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 6/20/22 05:52, Peter Maydell wrote:
> > On Wed, 15 Jun 2022 at 17:43, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> The bug is an uninitialized memory read, along the translate_fail
> >> path, which results in garbage being read from iotlb_to_section,
> >> which can lead to a crash in io_readx/io_writex.
> >>
> >> The bug may be fixed by writing any value with zero
> >> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
> >> the xlat'ed address returns io_mem_unassigned, as desired by the
> >> translate_fail path.
> >>
> >> It is most useful to record the original physical page address,
> >> which will eventually be logged by memory_region_access_valid
> >> when the access is rejected by unassigned_mem_accepts.
> >>
> >> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
> >> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> >> ---
> >>   softmmu/physmem.c | 3 +++
> >>   1 file changed, 3 insertions(+)
> >>
> >> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> >> index 657841eed0..fb0f0709b5 100644
> >> --- a/softmmu/physmem.c
> >> +++ b/softmmu/physmem.c
> >> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
> >>       AddressSpaceDispatch *d =
> >>           qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
> >>
> >> +    /* Record the original phys page for use by the translate_fail path. */
> >> +    *xlat = addr;
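
(To see concretely why any value with zero in ~TARGET_PAGE_MASK suffices, the
toy program below mirrors the behaviour the commit message describes: the
sub-page bits of the xlat'ed address select an entry in a sections[] array
whose slot 0 stands in for io_mem_unassigned.  The constants, the two-element
array, and toy_iotlb_to_section() are simplified stand-ins, not the real QEMU
definitions.)

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the real QEMU definitions. */
#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1ULL << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

typedef uint64_t hwaddr;

/* Slot 0 plays the role of io_mem_unassigned. */
static const char *sections[] = { "io_mem_unassigned", "some_mmio_section" };

static const char *toy_iotlb_to_section(hwaddr xlat)
{
    /* The section index comes from the sub-page bits of the xlat'ed address. */
    return sections[xlat & ~TARGET_PAGE_MASK];
}

int main(void)
{
    hwaddr addr = 0xabcd000;          /* page-aligned original phys address */

    /* Page-aligned => no bits set under ~TARGET_PAGE_MASK => slot 0. */
    assert((addr & ~TARGET_PAGE_MASK) == 0);
    printf("%s\n", toy_iotlb_to_section(addr));      /* io_mem_unassigned */

    /*
     * Garbage low bits (the uninitialized-xlat case) pick some other slot --
     * or, with real garbage, an out-of-range one, hence the crash in
     * io_readx/io_writex.
     */
    printf("%s\n", toy_iotlb_to_section(addr | 1));  /* some_mmio_section */
    return 0;
}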
> >
> > There's no doc comment for address_space_translate_for_iotlb(),
> > so there's nothing that says explicitly that addr is obliged
> > to be page aligned, although it happens that its only caller
> > does pass a page-aligned address. Were we already implicitly
> > requiring a page-aligned address here, or does not masking
> > addr before assigning to *xlat impose a new requirement ?
>
> I have no idea.  The whole lookup process is both undocumented and
> twistedly complex.  I'm willing to add an extra masking operation here,
> if it seems necessary?

I think we should do one of:
 * document that we assume the address is page-aligned
 * assert that the address is page-aligned
 * mask to force it to page-alignedness

but I don't much care which one of those we do. Maybe we should
assert((*xlat & ~TARGET_PAGE_MASK) == 0) at the translate_fail
label, with a suitable comment ?

thanks
-- PMM
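
(As a rough sketch, the assert-at-translate_fail option could look something
like the fragment below.  This is not the actual softmmu/physmem.c code: the
shape of the translate_fail path, in particular that it hands back the
PHYS_SECTION_UNASSIGNED entry of d->map.sections, is assumed from context
rather than quoted.)

    /* Record the original phys page for use by the translate_fail path. */
    *xlat = addr;

    /* ... normal section lookup elided ... */

translate_fail:
    /*
     * The sole caller passes a page-aligned addr (see the discussion above),
     * so the recorded *xlat has no bits set under ~TARGET_PAGE_MASK and
     * iotlb_to_section() will resolve it to io_mem_unassigned.  Assert rather
     * than mask, so that a misaligned caller fails loudly.
     */
    assert((*xlat & ~TARGET_PAGE_MASK) == 0);
    return &d->map.sections[PHYS_SECTION_UNASSIGNED];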


