[PULL 16/72] tcg/optimize: Use fold_and and fold_masks_z in fold_deposit
From: Richard Henderson
Subject: [PULL 16/72] tcg/optimize: Use fold_and and fold_masks_z in fold_deposit
Date: Tue, 24 Dec 2024 12:04:25 -0800
Avoid the use of the OptContext slots. Find TempOptInfo once.
When we fold to an and opcode, use fold_and.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 35 +++++++++++++++++------------------
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 2f5030c899..c0f0390431 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1625,14 +1625,17 @@ static bool fold_ctpop(OptContext *ctx, TCGOp *op)
 
 static bool fold_deposit(OptContext *ctx, TCGOp *op)
 {
+    TempOptInfo *t1 = arg_info(op->args[1]);
+    TempOptInfo *t2 = arg_info(op->args[2]);
+    int ofs = op->args[3];
+    int len = op->args[4];
     TCGOpcode and_opc;
+    uint64_t z_mask;
 
-    if (arg_is_const(op->args[1]) && arg_is_const(op->args[2])) {
-        uint64_t t1 = arg_info(op->args[1])->val;
-        uint64_t t2 = arg_info(op->args[2])->val;
-
-        t1 = deposit64(t1, op->args[3], op->args[4], t2);
-        return tcg_opt_gen_movi(ctx, op, op->args[0], t1);
+    if (ti_is_const(t1) && ti_is_const(t2)) {
+        return tcg_opt_gen_movi(ctx, op, op->args[0],
+                                deposit64(ti_const_val(t1), ofs, len,
+                                          ti_const_val(t2)));
     }
 
     switch (ctx->type) {
@@ -1647,30 +1650,26 @@ static bool fold_deposit(OptContext *ctx, TCGOp *op)
     }
 
     /* Inserting a value into zero at offset 0. */
-    if (arg_is_const_val(op->args[1], 0) && op->args[3] == 0) {
-        uint64_t mask = MAKE_64BIT_MASK(0, op->args[4]);
+    if (ti_is_const_val(t1, 0) && ofs == 0) {
+        uint64_t mask = MAKE_64BIT_MASK(0, len);
 
         op->opc = and_opc;
         op->args[1] = op->args[2];
         op->args[2] = arg_new_constant(ctx, mask);
-        ctx->z_mask = mask & arg_info(op->args[1])->z_mask;
-        return false;
+        return fold_and(ctx, op);
     }
 
     /* Inserting zero into a value. */
-    if (arg_is_const_val(op->args[2], 0)) {
-        uint64_t mask = deposit64(-1, op->args[3], op->args[4], 0);
+    if (ti_is_const_val(t2, 0)) {
+        uint64_t mask = deposit64(-1, ofs, len, 0);
 
         op->opc = and_opc;
         op->args[2] = arg_new_constant(ctx, mask);
-        ctx->z_mask = mask & arg_info(op->args[1])->z_mask;
-        return false;
+        return fold_and(ctx, op);
     }
 
-    ctx->z_mask = deposit64(arg_info(op->args[1])->z_mask,
-                            op->args[3], op->args[4],
-                            arg_info(op->args[2])->z_mask);
-    return false;
+    z_mask = deposit64(t1->z_mask, ofs, len, t2->z_mask);
+    return fold_masks_z(ctx, op, z_mask);
 }
 
 static bool fold_divide(OptContext *ctx, TCGOp *op)
--
2.43.0
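
[Editor's note] For readers outside the TCG optimizer, the rewrite above rests on two bit-level identities (a deposit into zero at offset 0, and a deposit of zero, are both just an AND with a suitable mask) plus the known-zero-bit rule handed to fold_masks_z. The standalone sketch below is not part of the patch: the local deposit64() and MAKE_64BIT_MASK() only mirror the QEMU helpers of the same name for illustration, and the z_mask values are made up.

/*
 * Standalone sketch, not part of the patch above: the local deposit64()
 * and MAKE_64BIT_MASK() mirror the QEMU helpers for illustration only.
 */
#include <assert.h>
#include <stdint.h>

#define MAKE_64BIT_MASK(shift, length) \
    (((~0ULL) >> (64 - (length))) << (shift))

/* Insert the low @len bits of @val into @base at bit position @ofs. */
static uint64_t deposit64(uint64_t base, int ofs, int len, uint64_t val)
{
    uint64_t mask = MAKE_64BIT_MASK(ofs, len);
    return (base & ~mask) | ((val << ofs) & mask);
}

int main(void)
{
    uint64_t x = 0x1234567890abcdefULL;
    int ofs = 8, len = 16;

    /* Depositing into zero at offset 0 is an AND with a low mask. */
    assert(deposit64(0, 0, len, x) == (x & MAKE_64BIT_MASK(0, len)));

    /* Depositing zero into a value is an AND with the inverted field mask. */
    assert(deposit64(x, ofs, len, 0) == (x & deposit64(-1ULL, ofs, len, 0)));

    /*
     * Known-zero propagation: if a and b only have bits inside their
     * respective z_masks, the deposit result only has bits inside the
     * deposit of the z_masks -- the value passed to fold_masks_z above.
     */
    uint64_t z1 = 0x00ff, z2 = 0x000f;   /* hypothetical operand z_masks */
    uint64_t a = 0x0042, b = 0x0005;     /* values consistent with them  */
    assert((deposit64(a, ofs, len, b) & ~deposit64(z1, ofs, len, z2)) == 0);

    return 0;
}

Rewriting the op and then returning fold_and(ctx, op), rather than setting the mask by hand, lets the existing and-folding code recompute the masks and catch any further simplification, which appears to be the point of the "use fold_and" part of the commit message.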