From: Edgar E. Iglesias
Subject: Re: [Qemu-devel] Re: tcg/{ppc, s390, sparc}: branch target and code retranslation
Date: Mon, 10 Jan 2011 15:51:15 +0100
User-agent: Mutt/1.5.20 (2009-06-14)

On Mon, Jan 10, 2011 at 03:20:40PM +0100, Alexander Graf wrote:
> 
> On 10.01.2011, at 15:15, Aurelien Jarno wrote:
> 
> > On Mon, Jan 10, 2011 at 03:07:52PM +0100, Alexander Graf wrote:
> >> 
> >> On 10.01.2011, at 15:00, Aurelien Jarno wrote:
> >> 
> >>> On Mon, Jan 10, 2011 at 01:13:25PM +0100, Alexander Graf wrote:
> >>>> 
> >>>> On 06.01.2011, at 23:12, Aurelien Jarno wrote:
> >>>> 
> >>>>> Hi,
> >>>>> 
> >>>>> I have just sent a tcg/arm patch concerning code retranslation. You
> >>>>> might want to look at the description (copied below), as at first
> >>>>> glance the ppc, s390 and sparc TCG targets might be affected. If you
> >>>>> see guest kernel panics, segmentation faults of QEMU or of the guest,
> >>>>> or strange behaviours that happen randomly and look difficult to
> >>>>> debug, it might be this issue.
> >>>>> 
> >>>>> Aurelien
> >>>>> 
> >>>>> 
> >>>>> | QEMU uses code retranslation to restore the CPU state when an
> >>>>> | exception happens. For it to work the retranslation must not modify
> >>>>> | the generated code. This is what is currently implemented in ARM TCG.
> >>>>> |
> >>>>> | However on CPUs that don't have icache/dcache/memory synchronised,
> >>>>> | like ARM, this requirement is stronger: the retranslation must not
> >>>>> | modify the generated code even temporarily, as the cache line might
> >>>>> | be flushed at any moment (interrupt, exception, task switching), even
> >>>>> | if not triggered by QEMU. The probability for this to happen is very
> >>>>> | low, and depends on cache size and associativity, machine load and
> >>>>> | interrupts, so the symptoms seem to happen randomly.
> >>>>> |
> >>>>> | This requirement is currently not followed in tcg/arm, for the
> >>>>> | load/store code, which basically has the following structure:
> >>>>> |   1) tlb access code is written
> >>>>> |   2) conditional fast path code is written
> >>>>> |   3) branch is written with a temporary target
> >>>>> |   4) slow path code is written
> >>>>> |   5) branch target is updated
> >>>>> | The cache lines corresponding to the retranslated code are not
> >>>>> | flushed after code retranslation, as the generated code is supposed
> >>>>> | to be the same. However if the cache line corresponding to the branch
> >>>>> | instruction is flushed between steps 3 and 5, and is not flushed
> >>>>> | again before the code is executed again, the branch target is wrong.
> >>>>> | In the guest, the symptoms are MMU page faults at random addresses,
> >>>>> | which lead to kernel panics or segmentation faults.
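> >>>>>
> >>>>> In pseudo-C, the emission order above looks roughly like this (a
> >>>>> simplified sketch only; the emit_*() and patch_branch() helpers are
> >>>>> made up and stand in for the real tcg_out_*() calls):
> >>>>>
> >>>>>     /* 1) tlb access code is written */
> >>>>>     emit_tlb_lookup(s, addr_reg, mem_index);
> >>>>>     /* 2) conditional fast path code is written */
> >>>>>     emit_fast_path_load(s, data_reg, addr_reg);
> >>>>>     /* 3) branch is written with a temporary target */
> >>>>>     uint32_t *branch_insn = (uint32_t *)s->code_ptr;
> >>>>>     emit_branch(s, /* placeholder offset */ 0);
> >>>>>     /* 4) slow path code is written */
> >>>>>     emit_slow_path_call(s, data_reg, addr_reg, mem_index);
> >>>>>     /* 5) branch target is updated: the instruction emitted in step 3
> >>>>>      *    is overwritten with the real offset, i.e. already generated
> >>>>>      *    code is modified after the fact */
> >>>>>     patch_branch(branch_insn, s->code_ptr);
> >>>>>
> >>>>> If the cache line holding the step-3 placeholder is evicted to memory
> >>>>> before step 5, the fix-up lives only in the dcache, and a non-coherent
> >>>>> icache can later fetch the stale placeholder from memory.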
> >>>> 
> >>>> I don't see the problem. If you have a separate icache and dcache, the
> >>>> code in question doesn't get executed during the rewrite, so all should
> >>>> be fine. If the two are combined, the data write should automatically
> >>>> update the cache line, so all is great too.
> >>> 
> >>> As far as I understand, this is what happens to the branch target
> >>> instruction, or at least one possibility:
> >>> 
> >>> operation                                 | icache | dcache | mem/L2 |
> >>> ------------------------------------------+--------+--------+--------+
> >>> tlb access code is written                | absent | absent |   ok   |
> >>> conditional fast path code is written     | absent | absent |   ok   |
> >>> branch is written with a temporary target | absent | wrong  |   ok   |
> >>> cache line is flushed to memory           | absent | absent |  wrong |
> >>> slow path code is written                 | absent | absent |  wrong |
> >>> branch target is updated                  | absent |   ok   |  wrong |
> >>> TB is re-executed                         | wrong  |   ok   |  wrong |
> >>> 
> >>> Note that the issue is real: the patch really fixes it on ARM and MIPS
> >>> hosts. It's not impossible that I don't fully understand it, given that
> >>> I can't analyse the cache contents at a given time. However, when QEMU
> >>> crashes because of this, we have seen that the executed code doesn't
> >>> match what GDB says, and changing the temporary branch value also
> >>> changes the way QEMU or the guest crashes.
> >>> 
> >>> The problem doesn't happen very often though (for sure less than once
> >>> every few million times). The other way to fix it is to issue a cache
> >>> flush after each code retranslation; however, that is very costly on
> >>> some architectures.
> >> 
> >> I'd guess it only happens when code is overwritten. Only then can the
> >> icache still be stale in that code region. Can't we just invalidate the
> >> icache on every tb flush? That should also fix the issue, I'd guess.
> > 
> > That's a solution that works (tested). However, making sure that the
> > retranslation doesn't change the code looks like the better approach.
> 
> Yeah, I agree. Temporary retranslation really shouldn't modify existing code.
> We still do need an icache flush on tb flushes though, as that's a separate
> issue, right? Or is that already in?

I don't think it's needed. When a tb gets flushed it can't execute any more
until it gets reallocated and a new translation is generated (which will
sync the caches).

IIUC, the problem with the search_pc regeneration shows up because we
don't explicitly sync the caches when done. If something in the host
system (e.g. interrupts, exceptions, cache misses) triggers a cache flush
in the middle of the regeneration, we might end up with the temporarily
modified regenerated code in memory and the unmodified (correct) code in
the dcache only. With non-coherent I & D caches, when the tb gets executed
the icache will pick up the possibly incorrect memory contents.

So either we don't modify the TB (even temporarily) while regenerating,
or we do a cache sync after regeneration (roughly sketched below). If the
latter doesn't slow down exception handling too much it might be better,
because I'm afraid this bug will come back and haunt us.. Not sure..
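
For reference, the cache sync variant would amount to something like the
following at the end of the search_pc regeneration (just a sketch:
flush_icache_range() is the per-target helper TCG uses after a normal
translation IIRC, and the tb/s variable names here are assumptions):

    /* after regenerating code in place over an existing TB (the
     * search_pc path), force memory and icache back in sync so a later
     * dcache eviction cannot expose stale, half-patched code */
    flush_icache_range((unsigned long)tb->tc_ptr,
                       (unsigned long)s->code_ptr);

The other option avoids that cost on the exception path entirely, at the
price of never being allowed to touch already emitted code during
regeneration.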

Cheers


