From: Stefan Weil
Subject: Re: [Qemu-devel] [PATCH v2 3/4] tcg: Mask shift counts to avoid undefined behavior
Date: Wed, 19 Mar 2014 07:21:55 +0100
User-agent: Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20100101 Thunderbird/24.3.0
On 18.03.2014 22:30, Richard Henderson wrote:
> TCG now requires unspecified behavior rather than a potential crash,
> bring the C shift within the letter of the law.
I know that C does not define the result of some shift / rotate
operations, but I don't understand the sentence above. Why does TCG or
TCI require unspecified behaviour now? Where was or is a potential crash?
The modifications below won't do any harm, but they make the TCG interpreter slower.
Are they (all) necessary? Are there test cases that fail with the old code?
>
> Signed-off-by: Richard Henderson <address@hidden>
> ---
> tci.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/tci.c b/tci.c
> index 0202ed9..6523ab8 100644
> --- a/tci.c
> +++ b/tci.c
> @@ -669,32 +669,32 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
> t0 = *tb_ptr++;
> t1 = tci_read_ri32(&tb_ptr);
> t2 = tci_read_ri32(&tb_ptr);
> - tci_write_reg32(t0, t1 << t2);
> + tci_write_reg32(t0, t1 << (t2 & 31));
> break;
> case INDEX_op_shr_i32:
> t0 = *tb_ptr++;
> t1 = tci_read_ri32(&tb_ptr);
> t2 = tci_read_ri32(&tb_ptr);
> - tci_write_reg32(t0, t1 >> t2);
> + tci_write_reg32(t0, t1 >> (t2 & 31));
Right shifts of unsigned values with unsigned shift count are always
defined, aren't they? So masking for those cases should not be needed.
> break;
> case INDEX_op_sar_i32:
> t0 = *tb_ptr++;
> t1 = tci_read_ri32(&tb_ptr);
> t2 = tci_read_ri32(&tb_ptr);
> - tci_write_reg32(t0, ((int32_t)t1 >> t2));
> + tci_write_reg32(t0, ((int32_t)t1 >> (t2 & 31)));
> break;
> #if TCG_TARGET_HAS_rot_i32
> case INDEX_op_rotl_i32:
> t0 = *tb_ptr++;
> t1 = tci_read_ri32(&tb_ptr);
> t2 = tci_read_ri32(&tb_ptr);
> - tci_write_reg32(t0, rol32(t1, t2));
> + tci_write_reg32(t0, rol32(t1, t2 & 31));
What about other users of rol32?
> break;
> case INDEX_op_rotr_i32:
> t0 = *tb_ptr++;
> t1 = tci_read_ri32(&tb_ptr);
> t2 = tci_read_ri32(&tb_ptr);
> - tci_write_reg32(t0, ror32(t1, t2));
> + tci_write_reg32(t0, ror32(t1, t2 & 31));
> break;
> #endif
> #if TCG_TARGET_HAS_deposit_i32
> @@ -936,32 +936,32 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
> t0 = *tb_ptr++;
> t1 = tci_read_ri64(&tb_ptr);
> t2 = tci_read_ri64(&tb_ptr);
> - tci_write_reg64(t0, t1 << t2);
> + tci_write_reg64(t0, t1 << (t2 & 63));
> break;
> case INDEX_op_shr_i64:
> t0 = *tb_ptr++;
> t1 = tci_read_ri64(&tb_ptr);
> t2 = tci_read_ri64(&tb_ptr);
> - tci_write_reg64(t0, t1 >> t2);
> + tci_write_reg64(t0, t1 >> (t2 & 63));
> break;
> case INDEX_op_sar_i64:
> t0 = *tb_ptr++;
> t1 = tci_read_ri64(&tb_ptr);
> t2 = tci_read_ri64(&tb_ptr);
> - tci_write_reg64(t0, ((int64_t)t1 >> t2));
> + tci_write_reg64(t0, ((int64_t)t1 >> (t2 & 63)));
> break;
> #if TCG_TARGET_HAS_rot_i64
> case INDEX_op_rotl_i64:
> t0 = *tb_ptr++;
> t1 = tci_read_ri64(&tb_ptr);
> t2 = tci_read_ri64(&tb_ptr);
> - tci_write_reg64(t0, rol64(t1, t2));
> + tci_write_reg64(t0, rol64(t1, t2 & 63));
> break;
> case INDEX_op_rotr_i64:
> t0 = *tb_ptr++;
> t1 = tci_read_ri64(&tb_ptr);
> t2 = tci_read_ri64(&tb_ptr);
> - tci_write_reg64(t0, ror64(t1, t2));
> + tci_write_reg64(t0, ror64(t1, t2 & 63));
> break;
> #endif
> #if TCG_TARGET_HAS_deposit_i64
>
Regards
Stefan