Re: [Qemu-devel] assertion in temp_save
From: Aurelien Jarno
Subject: Re: [Qemu-devel] assertion in temp_save
Date: Tue, 20 Nov 2012 18:47:22 +0100
User-agent: Mutt/1.5.20 (2009-06-14)
On Tue, Nov 20, 2012 at 05:09:57AM +0300, Max Filippov wrote:
> On Sun, Nov 18, 2012 at 7:34 AM, Max Filippov <address@hidden> wrote:
> > On Sun, Nov 18, 2012 at 7:19 AM, Max Filippov <address@hidden> wrote:
> >> Hi Aurelien,
> >>
> >> starting with commit 2c0366f ("tcg: don't explicitly save globals and
> >> temps") I get the following abort on target-xtensa:
> >>
> >> qemu-system-xtensa: tcg/tcg.c:1665: temp_save: Assertion
> >> `s->temps[temp].val_type == 2 || s->temps[temp].fixed_reg' failed.
> >> Aborted
> >>
> >> I see that that commit only adds the assertion and that the bad thing
> >> happens elsewhere. I've found that removing the tcg_gen_discard_i32 call
> >> in gen_right_shift_sar makes it work again. The trace of the TB that fails
> >> translation is below. If 'discard loc5' is removed, it starts to work.
> >>
> >> Any idea of what might be wrong?
> >
> > In the debugger loc5 looks like this when abort happens:
> >
> > (gdb) p s->temps[105]
> > $2 = {
> > base_type = TCG_TYPE_I32,
> > type = TCG_TYPE_I32,
> > val_type = 0,
> > reg = 11,
> > val = 32,
> > mem_reg = 4,
> > mem_offset = 128,
> > fixed_reg = 0,
> > mem_coherent = 0,
> > mem_allocated = 0,
> > temp_local = 1,
> > temp_allocated = 0,
> > next_free_temp = -1,
> > name = 0x0
> > }
>
> Looks like the issue is a local temp reaching the end of the TB in a dead
> state. Hence the question: is discard applicable to local temps?
> Or maybe I should just make it a global (the two other tcg values used with
> discard in other targets are also globals) and avoid temp_local_new/temp_free
> in the first place?
>
Indeed, it looks like discard doesn't work correctly with a local temp
with this new patch. I might have a fix, but I would like to do some more
tests first. Would it be possible to provide a way to reproduce the
issue?
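[Editor's note: for readers following along, the pattern under discussion looks
roughly like the sketch below. This is a hypothetical fragment, not the actual
target-xtensa code in gen_right_shift_sar; it only illustrates the combination
of a *local* temp with the discard op that trips the temp_save assertion.]

```
/* Hypothetical sketch of the problematic pattern (assumed names; this is
 * QEMU-internal translator code and does not compile standalone).
 *
 * A local temp is created so its value survives branches within the TB.
 * tcg_gen_discard_i32() then declares the value dead (val_type becomes
 * TEMP_VAL_DEAD, the 0 seen in the gdb dump above).  After commit 2c0366f,
 * temp_save() asserts that a temp reaching the end of the TB is either in
 * memory (val_type == TEMP_VAL_MEM, i.e. 2) or a fixed register, so a
 * dead local temp aborts translation. */
TCGv_i32 tmp = tcg_temp_local_new_i32();  /* local temp: may cross brcond */

/* ... conditional code: one path assigns tmp, another does not ... */

tcg_gen_discard_i32(tmp);   /* mark tmp dead instead of storing it back */
tcg_temp_free_i32(tmp);

/* The workaround raised above: make the value a TCG global instead, as the
 * other targets using discard do, so no local temp is involved at all. */
```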
--
Aurelien Jarno GPG: 1024D/F1BCDB73
address@hidden http://www.aurel32.net