[Qemu-commits] [qemu/qemu] 29f3ff: tcg/optimize: fix constant signedness
From: GitHub
Subject: [Qemu-commits] [qemu/qemu] 29f3ff: tcg/optimize: fix constant signedness
Date: Tue, 25 Aug 2015 07:30:04 -0700
Branch: refs/heads/master
Home: https://github.com/qemu/qemu
Commit: 29f3ff8d6cbc28f79933aeaa25805408d0984a8f
https://github.com/qemu/qemu/commit/29f3ff8d6cbc28f79933aeaa25805408d0984a8f
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/optimize.c
Log Message:
-----------
tcg/optimize: fix constant signedness
By convention, on a 64-bit host TCG internally stores 32-bit constants
as sign-extended. This is not the case in the optimizer when a 32-bit
constant is folded.
This doesn't seem to have consequences beyond suboptimal code
generation. For instance, the x86 backend assumes sign-extended constants,
and in some rare cases uses a 32-bit unsigned immediate 0xffffffff
instead of an 8-bit signed immediate 0xff for the constant -1. This is
with a ppc guest:
before
------
---- 0x9f29cc
movi_i32 tmp1,$0xffffffff
movi_i32 tmp2,$0x0
add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
mov_i32 r10,tmp0
0x7fd8c7dfe90c: xor %ebp,%ebp
0x7fd8c7dfe90e: mov %ebp,%r11d
0x7fd8c7dfe911: mov 0x18(%r14),%r9d
0x7fd8c7dfe915: add %r9d,%r10d
0x7fd8c7dfe918: adc %ebp,%r11d
0x7fd8c7dfe91b: add $0xffffffff,%r10d
0x7fd8c7dfe922: adc %ebp,%r11d
0x7fd8c7dfe925: mov %r11d,0x134(%r14)
0x7fd8c7dfe92c: mov %r10d,0x28(%r14)
after
-----
---- 0x9f29cc
movi_i32 tmp1,$0xffffffffffffffff
movi_i32 tmp2,$0x0
add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
mov_i32 r10,tmp0
0x7f37010d490c: xor %ebp,%ebp
0x7f37010d490e: mov %ebp,%r11d
0x7f37010d4911: mov 0x18(%r14),%r9d
0x7f37010d4915: add %r9d,%r10d
0x7f37010d4918: adc %ebp,%r11d
0x7f37010d491b: add $0xffffffffffffffff,%r10d
0x7f37010d491f: adc %ebp,%r11d
0x7f37010d4922: mov %r11d,0x134(%r14)
0x7f37010d4929: mov %r10d,0x28(%r14)
Signed-off-by: Aurelien Jarno <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 1208d7dd5fddc1fbd98de800d17429b4e5578848
https://github.com/qemu/qemu/commit/1208d7dd5fddc1fbd98de800d17429b4e5578848
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/optimize.c
Log Message:
-----------
tcg/optimize: optimize temps tracking
The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate
vector registers on most guests, it's not uncommon to have more than 100
used temps. This means we have to initialize more than 2kB at least twice
per TB, and often more when there are a few goto_tb.
Instead, use a TCGTempSet bit array to track which temps are in use in
the current basic block. This means there are only around 16 bytes to
initialize.
This improves the boot time of a MIPS guest on an x86-64 host by around
7% and moves tcg_optimize off the top of the profiler list.
[rth: Handle TCG_CALL_DUMMY_ARG]
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: d9c769c60948815ee03b2684b1c1c68ee4375149
https://github.com/qemu/qemu/commit/d9c769c60948815ee03b2684b1c1c68ee4375149
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/optimize.c
Log Message:
-----------
tcg/optimize: add temp_is_const and temp_is_copy functions
Add two accessor functions, temp_is_const and temp_is_copy, to make the
code more readable and future changes easier.
Reviewed-by: Alex Bennée <address@hidden>
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: b41059dd9deec367a4ccd296659f0bc5de2dc705
https://github.com/qemu/qemu/commit/b41059dd9deec367a4ccd296659f0bc5de2dc705
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/optimize.c
Log Message:
-----------
tcg/optimize: track const/copy status separately
Instead of using an enum which could be either a copy or a const, track
them separately. This will be used in the next patch.
Constants are tracked through a bool. Copies are tracked by initializing
a temp's next_copy and prev_copy to itself, which allows the code to be
simplified a bit.
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 299f80130401153af1a6ddb3cc011781bcd47600
https://github.com/qemu/qemu/commit/299f80130401153af1a6ddb3cc011781bcd47600
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/optimize.c
Log Message:
-----------
tcg/optimize: allow constant to have copies
Now that copies and constants are tracked separately, we can allow
constants to have copies, deferring the choice between a register and a
constant to the register allocation pass. This prevents this kind of
repeated constant reloading:
-OUT: [size=338]
+OUT: [size=298]
mov -0x4(%r14),%ebp
test %ebp,%ebp
jne 0x7ffbe9cb0ed6
mov $0x40002219f8,%rbp
mov %rbp,(%r14)
- mov $0x40002219f8,%rbp
mov $0x4000221a20,%rbx
mov %rbp,(%rbx)
mov $0x4000000000,%rbp
mov %rbp,(%r14)
- mov $0x4000000000,%rbp
mov $0x4000221d38,%rbx
mov %rbp,(%rbx)
mov $0x40002221a8,%rbp
mov %rbp,(%r14)
- mov $0x40002221a8,%rbp
mov $0x4000221d40,%rbx
mov %rbp,(%rbx)
mov $0x4000019170,%rbp
mov %rbp,(%r14)
- mov $0x4000019170,%rbp
mov $0x4000221d48,%rbx
mov %rbp,(%rbx)
mov $0x40000049ee,%rbp
mov %rbp,0x80(%r14)
mov %r14,%rdi
callq 0x7ffbe99924d0
mov $0x4000001680,%rbp
mov %rbp,0x30(%r14)
mov 0x10(%r14),%rbp
mov $0x4000001680,%rbp
mov %rbp,0x30(%r14)
mov 0x10(%r14),%rbp
shl $0x20,%rbp
mov (%r14),%rbx
mov %ebx,%ebx
mov %rbx,(%r14)
or %rbx,%rbp
mov %rbp,0x10(%r14)
mov %rbp,0x90(%r14)
mov 0x60(%r14),%rbx
mov %rbx,0x38(%r14)
mov 0x28(%r14),%rbx
mov $0x4000220e60,%r12
mov %rbx,(%r12)
mov $0x40002219c8,%rbx
mov %rbp,(%rbx)
mov 0x20(%r14),%rbp
sub $0x8,%rbp
mov $0x4000004a16,%rbx
mov %rbx,0x0(%rbp)
mov %rbp,0x20(%r14)
mov $0x19,%ebp
mov %ebp,0xa8(%r14)
mov $0x4000015110,%rbp
mov %rbp,0x80(%r14)
xor %eax,%eax
jmpq 0x7ffbebcae426
lea -0x5f6d72a(%rip),%rax # 0x7ffbe3d437b3
jmpq 0x7ffbebcae426
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 0632e555fc4d281d69cb08d98d500d96185b041f
https://github.com/qemu/qemu/commit/0632e555fc4d281d69cb08d98d500d96185b041f
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/README
M tcg/aarch64/tcg-target.h
M tcg/i386/tcg-target.h
M tcg/ia64/tcg-target.h
M tcg/optimize.c
M tcg/ppc/tcg-target.h
M tcg/s390/tcg-target.h
M tcg/sparc/tcg-target.c
M tcg/sparc/tcg-target.h
M tcg/tcg-op.c
M tcg/tcg-opc.h
M tcg/tcg.h
M tcg/tci/tcg-target.h
Log Message:
-----------
tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.
Always use the long name to make it clear it is a size-changing op.
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 6acd2558fdb7dd9de6b10697914bdc1d75d624e5
https://github.com/qemu/qemu/commit/6acd2558fdb7dd9de6b10697914bdc1d75d624e5
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/tcg-op.c
Log Message:
-----------
tcg: don't abuse TCG type in tcg_gen_trunc_shr_i64_i32
The tcg_gen_trunc_shr_i64_i32 function takes a 64-bit argument and
returns a 32-bit value. Directly call tcg_gen_op3 with the correct
types instead of calling tcg_gen_op3i_i32 and abusing the TCG types.
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 4f2331e5b67af8172419eb1c8db510b497b30a7b
https://github.com/qemu/qemu/commit/4f2331e5b67af8172419eb1c8db510b497b30a7b
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/aarch64/tcg-target.c
M tcg/i386/tcg-target.c
M tcg/ia64/tcg-target.c
M tcg/ppc/tcg-target.c
M tcg/s390/tcg-target.c
M tcg/sparc/tcg-target.c
M tcg/tcg-op.c
M tcg/tcg-opc.h
M tcg/tci/tcg-target.c
M tci.c
Log Message:
-----------
tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.
Cc: Andrzej Zaborowski <address@hidden>
Cc: Alexander Graf <address@hidden>
Cc: Blue Swirl <address@hidden>
Cc: Stefan Weil <address@hidden>
Acked-by: Claudio Fontana <address@hidden>
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 8bcb5c8f34f9215d4f88f388c7ff14c9bd5cecd3
https://github.com/qemu/qemu/commit/8bcb5c8f34f9215d4f88f388c7ff14c9bd5cecd3
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/optimize.c
Log Message:
-----------
tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops
They behave the same as ext32s_i64 and ext32u_i64 from the constant
folding and zero propagation point of view, except that they can't
be replaced by a mov, so we don't compute the affected value.
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 870ad1547ac53bc79c21d86cf453b3b20cc660a2
https://github.com/qemu/qemu/commit/870ad1547ac53bc79c21d86cf453b3b20cc660a2
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/README
Log Message:
-----------
tcg: update README about size changing ops
Signed-off-by: Aurelien Jarno <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 609ad70562793937257c89d07bf7c1370b9fc9aa
https://github.com/qemu/qemu/commit/609ad70562793937257c89d07bf7c1370b9fc9aa
Author: Richard Henderson <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M target-tricore/translate.c
M tcg/README
M tcg/aarch64/tcg-target.h
M tcg/i386/tcg-target.h
M tcg/ia64/tcg-target.h
M tcg/optimize.c
M tcg/ppc/tcg-target.h
M tcg/s390/tcg-target.h
M tcg/sparc/tcg-target.c
M tcg/sparc/tcg-target.h
M tcg/tcg-op.c
M tcg/tcg-op.h
M tcg/tcg-opc.h
M tcg/tcg.h
M tcg/tci/tcg-target.h
Log Message:
-----------
tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.
Signed-off-by: Richard Henderson <address@hidden>
Commit: ecc7b3aa71f5fdcf9ee87e74ca811d988282641d
https://github.com/qemu/qemu/commit/ecc7b3aa71f5fdcf9ee87e74ca811d988282641d
Author: Richard Henderson <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M target-alpha/translate.c
M target-arm/translate-a64.c
M target-arm/translate.c
M target-cris/translate.c
M target-m68k/translate.c
M target-microblaze/translate.c
M target-mips/translate.c
M target-openrisc/translate.c
M target-s390x/translate.c
M target-sh4/translate.c
M target-sparc/translate.c
M target-tricore/translate.c
M target-xtensa/translate.c
M tcg/tcg-op.h
Log Message:
-----------
tcg: Remove tcg_gen_trunc_i64_i32
Replacing it with tcg_gen_extrl_i64_i32.
Signed-off-by: Richard Henderson <address@hidden>
Commit: 8cc580f6a0d8c0e2f590c1472cf5cd8e51761760
https://github.com/qemu/qemu/commit/8cc580f6a0d8c0e2f590c1472cf5cd8e51761760
Author: Aurelien Jarno <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/i386/tcg-target.c
Log Message:
-----------
tcg/i386: use softmmu fast path for unaligned accesses
Softmmu unaligned loads/stores currently go through the slow
path for two reasons:
- to support unaligned accesses on hosts with strict alignment
- to correctly handle accesses crossing pages
x86 is only concerned with the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We therefore would like
to see them going through the fast path, if they don't cross pages.
For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore, accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last
byte, ensures that we don't cross pages. We can do this check without adding
more instructions in the TLB code (but increasing its length by one
byte) by using the LEA instruction to combine the existing move with the
size addition.
On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
[rth: Tidied calculation of the offset mask]
Signed-off-by: Aurelien Jarno <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 68d45bb61c5bbfb3999486f78cf026c1e79eb301
https://github.com/qemu/qemu/commit/68d45bb61c5bbfb3999486f78cf026c1e79eb301
Author: Benjamin Herrenschmidt <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/ppc/tcg-target.c
Log Message:
-----------
tcg/ppc: Improve unaligned load/store handling on 64-bit backend
Currently, we get to the slow path for any unaligned access in the
backend, because we effectively preserve the bottom address bits
below the alignment requirement when comparing with the TLB entry,
so any non-0 bit there will cause the compare to fail.
For the same number of instructions, we can instead add the access
size - 1 to the address and stick to clearing all the bottom bits.
That means that normal unaligned accesses will not fall back (the HW
will handle them fine). Only when crossing a page boundary will we
end up with a mismatch, because we'll end up pointing to the next
page, which cannot possibly be in the same TLB entry.
Reviewed-by: Aurelien Jarno <address@hidden>
Signed-off-by: Benjamin Herrenschmidt <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: a5e39810b9088b5d20fac8e0293f281e1c8b608f
https://github.com/qemu/qemu/commit/a5e39810b9088b5d20fac8e0293f281e1c8b608f
Author: Richard Henderson <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/s390/tcg-target.c
Log Message:
-----------
tcg/s390: Use softmmu fast path for unaligned accesses
Signed-off-by: Richard Henderson <address@hidden>
Commit: 9ee14902bf107e37fb2c8119fa7bca424396237c
https://github.com/qemu/qemu/commit/9ee14902bf107e37fb2c8119fa7bca424396237c
Author: Richard Henderson <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M tcg/aarch64/tcg-target.c
Log Message:
-----------
tcg/aarch64: Use softmmu fast path for unaligned accesses
Signed-off-by: Richard Henderson <address@hidden>
Commit: 4cbea5986981998cda07b13794c7e3ff7bc42e80
https://github.com/qemu/qemu/commit/4cbea5986981998cda07b13794c7e3ff7bc42e80
Author: Laurent Vivier <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M bsd-user/elfload.c
M bsd-user/main.c
M bsd-user/qemu.h
M configure
M include/exec/cpu-all.h
M linux-user/elfload.c
M linux-user/main.c
M linux-user/mmap.c
M tcg/aarch64/tcg-target.c
M tcg/ia64/tcg-target.c
M tcg/ppc/tcg-target.c
M tcg/s390/tcg-target.c
M tcg/sparc/tcg-target.c
M translate-all.c
Log Message:
-----------
linux-user: remove --enable-guest-base/--disable-guest-base
All tcg host architectures now support the guest base, and as
there is no real performance loss, it can always be enabled.
In any case, guest base use can effectively be disabled by setting
the guest base to 0.
CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY);
it should have been replaced by CONFIG_USER_ONLY, but as some other
parts are using !CONFIG_SOFTMMU I have chosen to use !CONFIG_SOFTMMU
instead.
Reviewed-by: Alexander Graf <address@hidden>
Signed-off-by: Laurent Vivier <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: b76f21a70748b735d6ac84fec4bb9bdaafa339b1
https://github.com/qemu/qemu/commit/b76f21a70748b735d6ac84fec4bb9bdaafa339b1
Author: Laurent Vivier <address@hidden>
Date: 2015-08-24 (Mon, 24 Aug 2015)
Changed paths:
M include/exec/cpu-all.h
M include/exec/cpu_ldst.h
M linux-user/mmap.c
M tcg/aarch64/tcg-target.c
M tcg/arm/tcg-target.c
M tcg/i386/tcg-target.c
M tcg/ia64/tcg-target.c
M tcg/mips/tcg-target.c
M tcg/ppc/tcg-target.c
M tcg/s390/tcg-target.c
M tcg/sparc/tcg-target.c
Log Message:
-----------
linux-user: remove useless macros GUEST_BASE and RESERVED_VA
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base,
and the macros GUEST_BASE and RESERVED_VA become useless: replace
them with their values.
Reviewed-by: Alexander Graf <address@hidden>
Signed-off-by: Laurent Vivier <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
Commit: 34a4450434f1a5daee06fca223afcbb9c8f1ee24
https://github.com/qemu/qemu/commit/34a4450434f1a5daee06fca223afcbb9c8f1ee24
Author: Peter Maydell <address@hidden>
Date: 2015-08-25 (Tue, 25 Aug 2015)
Changed paths:
M bsd-user/elfload.c
M bsd-user/main.c
M bsd-user/qemu.h
M configure
M include/exec/cpu-all.h
M include/exec/cpu_ldst.h
M linux-user/elfload.c
M linux-user/main.c
M linux-user/mmap.c
M target-alpha/translate.c
M target-arm/translate-a64.c
M target-arm/translate.c
M target-cris/translate.c
M target-m68k/translate.c
M target-microblaze/translate.c
M target-mips/translate.c
M target-openrisc/translate.c
M target-s390x/translate.c
M target-sh4/translate.c
M target-sparc/translate.c
M target-tricore/translate.c
M target-xtensa/translate.c
M tcg/README
M tcg/aarch64/tcg-target.c
M tcg/aarch64/tcg-target.h
M tcg/arm/tcg-target.c
M tcg/i386/tcg-target.c
M tcg/i386/tcg-target.h
M tcg/ia64/tcg-target.c
M tcg/ia64/tcg-target.h
M tcg/mips/tcg-target.c
M tcg/optimize.c
M tcg/ppc/tcg-target.c
M tcg/ppc/tcg-target.h
M tcg/s390/tcg-target.c
M tcg/s390/tcg-target.h
M tcg/sparc/tcg-target.c
M tcg/sparc/tcg-target.h
M tcg/tcg-op.c
M tcg/tcg-op.h
M tcg/tcg-opc.h
M tcg/tcg.h
M tcg/tci/tcg-target.c
M tcg/tci/tcg-target.h
M tci.c
M translate-all.c
Log Message:
-----------
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20150824' into staging
queued tcg patches
# gpg: Signature made Mon 24 Aug 2015 19:37:15 BST using RSA key ID 4DD0279B
# gpg: Good signature from "Richard Henderson <address@hidden>"
# gpg: aka "Richard Henderson <address@hidden>"
# gpg: aka "Richard Henderson <address@hidden>"
* remotes/rth/tags/pull-tcg-20150824:
linux-user: remove useless macros GUEST_BASE and RESERVED_VA
linux-user: remove --enable-guest-base/--disable-guest-base
tcg/aarch64: Use softmmu fast path for unaligned accesses
tcg/s390: Use softmmu fast path for unaligned accesses
tcg/ppc: Improve unaligned load/store handling on 64-bit backend
tcg/i386: use softmmu fast path for unaligned accesses
tcg: Remove tcg_gen_trunc_i64_i32
tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
tcg: update README about size changing ops
tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops
tcg: implement real ext_i32_i64 and extu_i32_i64 ops
tcg: don't abuse TCG type in tcg_gen_trunc_shr_i64_i32
tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
tcg/optimize: allow constant to have copies
tcg/optimize: track const/copy status separately
tcg/optimize: add temp_is_const and temp_is_copy functions
tcg/optimize: optimize temps tracking
tcg/optimize: fix constant signedness
Signed-off-by: Peter Maydell <address@hidden>
Compare: https://github.com/qemu/qemu/compare/a30878e708c2...34a4450434f1