qemu-devel

Re: [PATCH 1/2] exec: flush CPU TB cache in breakpoint_invalidate


From: Max Filippov
Subject: Re: [PATCH 1/2] exec: flush CPU TB cache in breakpoint_invalidate
Date: Wed, 5 Feb 2020 13:14:36 -0800

On Wed, Feb 5, 2020 at 3:00 AM Richard Henderson
<address@hidden> wrote:
>
> On 11/27/19 10:06 PM, Max Filippov wrote:
> > When a breakpoint is inserted at a location for which there is currently no
> > virtual-to-physical translation, no action is taken on the CPU TB cache. If
> > a TB for that virtual address already exists but is not visible at the
> > moment, the breakpoint won't be hit the next time an instruction at that
> > address is executed.
> >
> > Flush the entire CPU TB cache in breakpoint_invalidate to force
> > re-translation of all TBs containing the breakpoint address.
> >
> > This change fixes the following scenario:
> > - a Linux user application is running
> > - a breakpoint is inserted from QEMU gdbstub for a user address that is
> >   not currently present in the target CPU TLB
> > - an instruction at that address is executed, but the external debugger
> >   doesn't get control.
> >
> > Signed-off-by: Max Filippov <address@hidden>
> > ---
> > Changes RFC->v1:
> > - do tb_flush in breakpoint_invalidate unconditionally
>
> I know I had reservations about this, but we now have two patches on list that
> fix the problem in this way.
>
> What I would *like* is for each CPUBreakpoint to maintain a list of the TBs to
> which it has been applied, so that each can be invalidated.

I don't see how this can fix the issue: it's not the list of TBs that we want
to invalidate, it's the TBs that get associated with new virtual addresses
that are currently causing the issue, right?

>  Our current
> management of breakpoints is IMO sloppy.
>
> That said, I don't really have time to work on cleaning this up myself in the
> short term, and this is fixing a real bug.  Therefore, I am going to queue
> this to tcg-next.
>
> I would still like patch 2/2 to be split, and that can probably go through an
> xtensa branch.

Will do.

-- 
Thanks.
-- Max


