[PATCH v3 10/51] tcg/optimize: Use fold_masks_zs in fold_bswap
From: Richard Henderson
Subject: [PATCH v3 10/51] tcg/optimize: Use fold_masks_zs in fold_bswap
Date: Sun, 22 Dec 2024 08:24:05 -0800
Avoid the use of the OptContext slots. Find TempOptInfo once.
Always set s_mask along the BSWAP_OS path, since the result is
being explicitly sign-extended.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d13001e53a..27b8f90453 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1462,16 +1462,15 @@ static bool fold_brcond2(OptContext *ctx, TCGOp *op)
 static bool fold_bswap(OptContext *ctx, TCGOp *op)
 {
     uint64_t z_mask, s_mask, sign;
+    TempOptInfo *t1 = arg_info(op->args[1]);
 
-    if (arg_is_const(op->args[1])) {
-        uint64_t t = arg_info(op->args[1])->val;
-
-        t = do_constant_folding(op->opc, ctx->type, t, op->args[2]);
-        return tcg_opt_gen_movi(ctx, op, op->args[0], t);
+    if (t1->is_const) {
+        return tcg_opt_gen_movi(ctx, op, op->args[0],
+                                do_constant_folding(op->opc, ctx->type,
+                                                    t1->val, op->args[2]));
     }
 
-    z_mask = arg_info(op->args[1])->z_mask;
-
+    z_mask = t1->z_mask;
     switch (op->opc) {
     case INDEX_op_bswap16_i32:
     case INDEX_op_bswap16_i64:
@@ -1499,18 +1498,17 @@ static bool fold_bswap(OptContext *ctx, TCGOp *op)
         /* If the sign bit may be 1, force all the bits above to 1. */
         if (z_mask & sign) {
             z_mask |= sign;
-            s_mask = sign << 1;
         }
+        /* The value and therefore s_mask is explicitly sign-extended. */
+        s_mask = sign;
         break;
     default:
         /* The high bits are undefined: force all bits above the sign to 1. */
         z_mask |= sign << 1;
         break;
     }
 
-    ctx->z_mask = z_mask;
-    ctx->s_mask = s_mask;
-    return fold_masks(ctx, op);
+    return fold_masks_zs(ctx, op, z_mask, s_mask);
 }
 
 static bool fold_call(OptContext *ctx, TCGOp *op)
--
2.43.0