Re: [Qemu-devel] [RFC] alpha qemu arithmetic exceptions


From: Al Viro
Subject: Re: [Qemu-devel] [RFC] alpha qemu arithmetic exceptions
Date: Fri, 4 Jul 2014 01:50:24 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Jul 03, 2014 at 01:19:19PM -0700, Richard Henderson wrote:

> I believe I have a tidy solution to these /v insns.  New patch set shortly.

OK, looks sane.  Next (trivial) bug: in translate_one()
        case 0xF800:
            /* WH64 */
            /* No-op */
            break;
should be followed by
        case 0xFC00:
            /* WH64EN */
            /* No-op */
            break;

As it is,
        asm __volatile( "lda    $0,%0\n\t"
                        "wh64en ($0)\n\t" :: "m"(r));
ends up getting SIGILL.
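
FWIW, the complete reproducer (with the $0 clobber spelled out) is just this;
build flags are from memory, it may need an -mcpu= flag so that gas takes the
mnemonic:

        /* minimal standalone reproducer for the WH64EN SIGILL under qemu-alpha */
        #include <stdio.h>

        int main(void)
        {
                long r = 0;

                /* load the address of r into $0, then issue WH64EN on it;
                 * on real hardware it's a cache hint and must not fault */
                asm volatile("lda    $0,%0\n\t"
                             "wh64en ($0)\n\t"
                             :: "m"(r) : "$0");

                puts("wh64en survived");
                return 0;
        }

Under qemu-alpha it dies with SIGILL on the wh64en; with the extra no-op case
added it should run to completion.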

Another one is probably not worth bothering with: PERR, CTPOP, CTLZ, UNPKBx and PKxB
don't accept a literal argument.  For one thing, as(1) won't let you generate
those, so it would have to be an explicit
        .long 0x70001620
instead of
        perr $0,0,$0
On DS10 it gives SIGILL; under qemu it succeeds.  Trivial to fix, anyway,
if we care about that: if (islit) goto invalid_opc; in the 1C.030..1C.037 arms,
along the lines of the sketch below.
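
I.e. something like this in the CTPOP arm, with the same two lines in the
other seven; just a sketch from memory, I haven't checked it against the
actual shape of that switch in translate_one():

            case 0x30:
                /* CTPOP */
                if (islit) {
                    /* no literal form in the architecture; real hardware faults */
                    goto invalid_opc;
                }
                /* existing register-form decode continues as before */
                break;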

Another interesting bit I _really_ don't want to touch right now is LDx_L/STx_C;
what we get there is closer to compare-and-swap than to what the real
hardware is doing.  OTOH, considering the constraints on what can go between
LDx_L and STx_C, I'm not sure whether it can lead to any real problems with
the current qemu behaviour...

Hell knows; could a long linear piece of code with LDL_L near the point where
it runs out of space in the translation block end up with QEMU switching to a
different CPU before we reach the matching STL_C?  If so, there might be
problems; on actual hardware

CPU1: LDL_L reads 0
CPU2: store 1
...
CPU2: store 0
CPU1: STL_C
would have STL_C fail; the qemu implementation of those suckers will have it succeed.
I'm not sure if anything in the kernel is sensitive to that, but analysis
won't be fun...
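
For illustration, the kind of sequence we are talking about is the usual LL/SC
loop - this is just a sketch of an atomic increment, not lifted from the kernel:

        /* On real hardware the STL_C fails if anything stored to the locked
         * block since the LDL_L, even a store that put the old value back;
         * a compare-and-swap style emulation only notices a changed value. */
        static inline void atomic_inc_sketch(int *p)
        {
                long tmp;

                asm volatile(
                "1:     ldl_l   %0,%1\n"        /* load-locked            */
                "       addl    %0,1,%0\n"      /* increment              */
                "       stl_c   %0,%1\n"        /* store-conditional      */
                "       beq     %0,1b\n"        /* lost the lock - retry  */
                : "=&r"(tmp), "+m"(*p)
                : : "memory");
        }

For a counter like this one the outcome happens to be the same in that
interleaving; it's the users that actually depend on the stronger failure
guarantee that would need hunting down.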


