[PULL 48/72] tcg/optimize: Use fold_masks_zs in fold_xor
From: Richard Henderson
Subject: [PULL 48/72] tcg/optimize: Use fold_masks_zs in fold_xor
Date: Tue, 24 Dec 2024 12:04:57 -0800
Avoid the use of the OptContext slots. Find TempOptInfo once.
Remove fold_masks as the function becomes unused.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 047cb5a1ee..d543266b8d 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1077,11 +1077,6 @@ static bool fold_masks_s(OptContext *ctx, TCGOp *op, uint64_t s_mask)
     return fold_masks_zs(ctx, op, -1, s_mask);
 }
 
-static bool fold_masks(OptContext *ctx, TCGOp *op)
-{
-    return fold_masks_zs(ctx, op, ctx->z_mask, ctx->s_mask);
-}
-
 /*
  * An "affected" mask bit is 0 if and only if the result is identical
  * to the first input.  Thus if the entire mask is 0, the operation
@@ -2769,6 +2764,9 @@ static bool fold_tcg_st_memcopy(OptContext *ctx, TCGOp *op)
 
 static bool fold_xor(OptContext *ctx, TCGOp *op)
 {
+    uint64_t z_mask, s_mask;
+    TempOptInfo *t1, *t2;
+
     if (fold_const2_commutative(ctx, op) ||
         fold_xx_to_i(ctx, op, 0) ||
         fold_xi_to_x(ctx, op, 0) ||
@@ -2776,11 +2774,11 @@ static bool fold_xor(OptContext *ctx, TCGOp *op)
         return true;
     }
 
-    ctx->z_mask = arg_info(op->args[1])->z_mask
-                | arg_info(op->args[2])->z_mask;
-    ctx->s_mask = arg_info(op->args[1])->s_mask
-                & arg_info(op->args[2])->s_mask;
-    return fold_masks(ctx, op);
+    t1 = arg_info(op->args[1]);
+    t2 = arg_info(op->args[2]);
+    z_mask = t1->z_mask | t2->z_mask;
+    s_mask = t1->s_mask & t2->s_mask;
+    return fold_masks_zs(ctx, op, z_mask, s_mask);
 }
 
 static bool fold_bitsel_vec(OptContext *ctx, TCGOp *op)
--
2.43.0
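
[Archive note, not part of the patch: a minimal standalone C sketch of the mask algebra used in the final hunk above. The struct and helper names here are hypothetical; z_mask is read as "bits that may be nonzero" and s_mask as "high bits known to be copies of the sign bit", matching how fold_xor combines them.]

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical stand-in for the per-temp mask pair tracked by the optimizer. */
typedef struct {
    uint64_t z_mask;   /* a bit is set if that bit of the value may be nonzero */
    uint64_t s_mask;   /* a bit is set if that bit is a copy of the sign bit */
} masks;

/*
 * XOR: a result bit may be nonzero if it may be nonzero in either input,
 * so the z_masks are OR'ed; a high bit remains a sign copy only when it
 * is a sign copy in both inputs, so the s_masks are AND'ed.  This mirrors
 * the two assignments in fold_xor above.
 */
static masks xor_masks(masks a, masks b)
{
    return (masks){ a.z_mask | b.z_mask, a.s_mask & b.s_mask };
}

int main(void)
{
    /* An 8-bit value zero-extended to 64 bits: only the low 8 bits may be
     * nonzero, and bits 63..7 all equal the (zero) sign bit. */
    masks a = { 0xff, ~(uint64_t)0x7f };
    /* A 16-bit value sign-extended to 64 bits: any bit may be nonzero,
     * but bits 63..15 are copies of bit 15. */
    masks b = { ~(uint64_t)0, ~(uint64_t)0x7fff };
    masks r = xor_masks(a, b);

    /* Prints z=0xffffffffffffffff s=0xffffffffffff8000: the XOR is still
     * known to be sign-extended from 16 bits. */
    printf("z=%#" PRIx64 " s=%#" PRIx64 "\n", r.z_mask, r.s_mask);
    return 0;
}

[With t1 and t2 looked up once, the patch performs this same combination without writing to the ctx->z_mask/ctx->s_mask slots, which is what lets fold_masks become unused and be removed.]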