[PULL 12/72] tcg/optimize: Use fold_masks_zs in fold_andc
From: Richard Henderson
Subject: [PULL 12/72] tcg/optimize: Use fold_masks_zs in fold_andc
Date: Tue, 24 Dec 2024 12:04:21 -0800
Avoid the use of the OptContext slots. Find TempOptInfo once.
Avoid double inversion of the value of the second const operand.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 4a5b52916a..2096d705bd 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1330,7 +1330,8 @@ static bool fold_and(OptContext *ctx, TCGOp *op)
static bool fold_andc(OptContext *ctx, TCGOp *op)
{
- uint64_t z1;
+ uint64_t z_mask, s_mask;
+ TempOptInfo *t1, *t2;
if (fold_const2(ctx, op) ||
fold_xx_to_i(ctx, op, 0) ||
@@ -1339,24 +1340,24 @@ static bool fold_andc(OptContext *ctx, TCGOp *op)
return true;
}
- z1 = arg_info(op->args[1])->z_mask;
+ t1 = arg_info(op->args[1]);
+ t2 = arg_info(op->args[2]);
+ z_mask = t1->z_mask;
/*
* Known-zeros does not imply known-ones. Therefore unless
* arg2 is constant, we can't infer anything from it.
*/
- if (arg_is_const(op->args[2])) {
- uint64_t z2 = ~arg_info(op->args[2])->z_mask;
- if (fold_affected_mask(ctx, op, z1 & ~z2)) {
+ if (ti_is_const(t2)) {
+ uint64_t v2 = ti_const_val(t2);
+ if (fold_affected_mask(ctx, op, z_mask & v2)) {
return true;
}
- z1 &= z2;
+ z_mask &= ~v2;
}
- ctx->z_mask = z1;
- ctx->s_mask = arg_info(op->args[1])->s_mask
- & arg_info(op->args[2])->s_mask;
- return fold_masks(ctx, op);
+ s_mask = t1->s_mask & t2->s_mask;
+ return fold_masks_zs(ctx, op, z_mask, s_mask);
}
static bool fold_brcond(OptContext *ctx, TCGOp *op)
--
2.43.0
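
For readers following the mask reasoning, here is a minimal standalone sketch
(plain C, not QEMU code; the values and names are made up for illustration) of
why a constant second operand lets fold_andc both test the affected bits and
narrow z_mask in one step, without the double inversion the old code performed:

/*
 * Illustrative sketch only: known-zero ("z_mask") reasoning for an
 * ANDC operation r = a & ~b, assuming b is a known constant v2.
 * Bits outside z_mask are known to be zero in the value.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t z1 = 0x00ff;   /* hypothetical: bits that may be set in a */
    uint64_t v2 = 0x0f0f;   /* hypothetical constant second operand    */

    /*
     * r = a & ~v2 clears exactly the bits selected by v2, so the
     * result's possibly-set bits are z1 with those bits removed.
     * This is the single step the patch uses (z_mask &= ~v2), instead
     * of inverting v2 into a z_mask and then inverting it back.
     */
    uint64_t z_result = z1 & ~v2;

    /*
     * If none of a's possibly-set bits overlap v2, the ANDC cannot
     * change the value at all; that is the affected-mask test
     * (z_mask & v2 == 0), which allows the op to be folded away.
     */
    int andc_is_noop = (z1 & v2) == 0;

    printf("z_result = %#llx, noop = %d\n",
           (unsigned long long)z_result, andc_is_noop);
    return 0;
}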