From: Stefan Brankovic
Subject: [Qemu-devel] [PATCH v4 06/13] target/ppc: Optimize emulation of vclzh and vclzb instructions
Date: Thu, 27 Jun 2019 12:56:18 +0200
Optimize Altivec instruction vclzh (Vector Count Leading Zeros Halfword).
This instruction counts the number of leading zeros of each halfword element
in the source register and places the result in the corresponding halfword
element of the destination register.

In each iteration of the outer for loop, the count operation is performed on
one doubleword element of source register vB. In the first iteration, the
higher doubleword element of vB is placed in variable avr, and the count for
every halfword element is then performed using tcg_gen_clzi_i64. Since
tcg_gen_clzi_i64 counts leading zeros over a 64-bit length, the i-th halfword
element has to be moved to the highest 16 bits of tmp and or-ed with a mask
(in order to get all ones in the lowest 48 bits); tcg_gen_clzi_i64 is then
applied and its result moved to the appropriate halfword element of the
result. This is done in the inner for loop. After the operation is finished,
the result is saved in the appropriate doubleword element of destination
register vD. The same sequence of operations is then applied to the lower
doubleword element of vB.
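To illustrate the trick in isolation, here is a minimal C sketch of the
per-element count (not part of the patch; clz_element and clz64 are
hypothetical helper names). The same scheme covers the vclzb case described
next, with an 8-bit element width:

#include <stdint.h>

/* Hypothetical stand-in for the 64-bit count-leading-zeros operation that
 * tcg_gen_clzi_i64 emits; returns 64 for a zero input. */
static int clz64(uint64_t v)
{
    return v ? __builtin_clzll(v) : 64;
}

/*
 * Count leading zeros of the i-th element (i counted from the least
 * significant end) of width 'bits' (8 for vclzb, 16 for vclzh) inside
 * doubleword avr. The element is shifted to the top of a 64-bit temporary
 * and all lower bits are forced to one, so the 64-bit count can never run
 * past the element boundary.
 */
static int clz_element(uint64_t avr, int i, int bits)
{
    int nelem = 64 / bits;
    uint64_t mask = ~(uint64_t)0 >> bits;  /* all ones in the low (64 - bits) bits */
    uint64_t tmp = (avr << ((nelem - 1 - i) * bits)) | mask;
    return clz64(tmp);
}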
Optimize Altivec instruction vclzb (Vector Count Leading Zeros Byte).
This instruction counts the number of leading zeros of each byte element in
the source register and places the result in the corresponding byte element
of the destination register.

In each iteration of the outer for loop, the count operation is performed on
one doubleword element of source register vB. In the first iteration, the
higher doubleword element of vB is placed in variable avr, and the count for
every byte element is then performed using tcg_gen_clzi_i64. Since
tcg_gen_clzi_i64 counts leading zeros over a 64-bit length, the i-th byte
element has to be moved to the highest 8 bits of tmp and or-ed with a mask
(in order to get all ones in the lowest 56 bits); tcg_gen_clzi_i64 is then
applied and its result moved to the appropriate byte element of the result.
This is done in the inner for loop. After the operation is finished, the
result is saved in the appropriate doubleword element of destination
register vD. The same sequence of operations is then applied to the lower
doubleword element of vB.
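Continuing the sketch above, a usage example for both element widths
(arbitrary test value; assumes clz_element from the earlier sketch is in
scope):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t dw = 0x0001f00000000000ULL;

    printf("%d\n", clz_element(dw, 3, 16)); /* 15: halfword 0x0001         */
    printf("%d\n", clz_element(dw, 2, 16)); /*  0: halfword 0xf000         */
    printf("%d\n", clz_element(dw, 0, 16)); /* 16: zero halfword saturates */
    printf("%d\n", clz_element(dw, 6, 8));  /*  7: byte 0x01               */
    return 0;
}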
Signed-off-by: Stefan Brankovic <address@hidden>
---
target/ppc/helper.h | 2 -
target/ppc/int_helper.c | 9 ---
target/ppc/translate/vmx-impl.inc.c | 122 +++++++++++++++++++++++++++++++++++-
3 files changed, 120 insertions(+), 13 deletions(-)
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index 4c5c359..ac1a5bd 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -304,8 +304,6 @@ DEF_HELPER_4(vcfsx, void, env, avr, avr, i32)
DEF_HELPER_4(vctuxs, void, env, avr, avr, i32)
DEF_HELPER_4(vctsxs, void, env, avr, avr, i32)
-DEF_HELPER_2(vclzb, void, avr, avr)
-DEF_HELPER_2(vclzh, void, avr, avr)
DEF_HELPER_2(vctzb, void, avr, avr)
DEF_HELPER_2(vctzh, void, avr, avr)
DEF_HELPER_2(vctzw, void, avr, avr)
diff --git a/target/ppc/int_helper.c b/target/ppc/int_helper.c
index cd25b66..3edf334 100644
--- a/target/ppc/int_helper.c
+++ b/target/ppc/int_helper.c
@@ -1821,15 +1821,6 @@ VUPK(lsw, s64, s32, UPKLO)
} \
}
-#define clzb(v) ((v) ? clz32((uint32_t)(v) << 24) : 8)
-#define clzh(v) ((v) ? clz32((uint32_t)(v) << 16) : 16)
-
-VGENERIC_DO(clzb, u8)
-VGENERIC_DO(clzh, u16)
-
-#undef clzb
-#undef clzh
-
#define ctzb(v) ((v) ? ctz32(v) : 8)
#define ctzh(v) ((v) ? ctz32(v) : 16)
#define ctzw(v) ctz32((v))
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 39c7839..fd25b7c 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -741,6 +741,124 @@ static void trans_vgbbd(DisasContext *ctx)
}
/*
+ * vclzb VRT,VRB - Vector Count Leading Zeros Byte
+ *
+ * Counting the number of leading zero bits of each byte element in source
+ * register and placing result in appropriate byte element of destination
+ * register.
+ */
+static void trans_vclzb(DisasContext *ctx)
+{
+ int VT = rD(ctx->opcode);
+ int VB = rB(ctx->opcode);
+ TCGv_i64 avr = tcg_temp_new_i64();
+ TCGv_i64 result = tcg_temp_new_i64();
+ TCGv_i64 tmp = tcg_temp_new_i64();
+ TCGv_i64 mask = tcg_const_i64(0xffffffffffffffULL);
+ int i, j;
+
+ for (i = 0; i < 2; i++) {
+ if (i == 0) {
+ /* Get high doubleword of vB in avr. */
+ get_avr64(avr, VB, true);
+ } else {
+ /* Get low doubleword of vB in avr. */
+ get_avr64(avr, VB, false);
+ }
+ /*
+ * Perform the count for every byte element using tcg_gen_clzi_i64.
+ * Since it counts leading zeros over a 64-bit length, we have to move
+ * the i-th byte element to the highest 8 bits of tmp and or it with
+ * mask (so we get all ones in the lowest 56 bits), then perform
+ * tcg_gen_clzi_i64 and move its result to the appropriate byte element
+ * of result.
+ */
+ tcg_gen_shli_i64(tmp, avr, 56);
+ tcg_gen_or_i64(tmp, tmp, mask);
+ tcg_gen_clzi_i64(result, tmp, 64);
+ for (j = 1; j < 7; j++) {
+ tcg_gen_shli_i64(tmp, avr, (7 - j) * 8);
+ tcg_gen_or_i64(tmp, tmp, mask);
+ tcg_gen_clzi_i64(tmp, tmp, 64);
+ tcg_gen_deposit_i64(result, result, tmp, j * 8, 8);
+ }
+ tcg_gen_or_i64(tmp, avr, mask);
+ tcg_gen_clzi_i64(tmp, tmp, 64);
+ tcg_gen_deposit_i64(result, result, tmp, 56, 8);
+ if (i == 0) {
+ /* Place result in high doubleword element of vD. */
+ set_avr64(VT, result, true);
+ } else {
+ /* Place result in low doubleword element of vD. */
+ set_avr64(VT, result, false);
+ }
+ }
+
+ tcg_temp_free_i64(avr);
+ tcg_temp_free_i64(result);
+ tcg_temp_free_i64(tmp);
+ tcg_temp_free_i64(mask);
+}
+
+/*
+ * vclzh VRT,VRB - Vector Count Leading Zeros Halfword
+ *
+ * Counting the number of leading zero bits of each halfword element in source
+ * register and placing result in appropriate halfword element of destination
+ * register.
+ */
+static void trans_vclzh(DisasContext *ctx)
+{
+ int VT = rD(ctx->opcode);
+ int VB = rB(ctx->opcode);
+ TCGv_i64 avr = tcg_temp_new_i64();
+ TCGv_i64 result = tcg_temp_new_i64();
+ TCGv_i64 tmp = tcg_temp_new_i64();
+ TCGv_i64 mask = tcg_const_i64(0xffffffffffffULL);
+ int i, j;
+
+ for (i = 0; i < 2; i++) {
+ if (i == 0) {
+ /* Get high doubleword element of vB in avr. */
+ get_avr64(avr, VB, true);
+ } else {
+ /* Get low doubleword element of vB in avr. */
+ get_avr64(avr, VB, false);
+ }
+ /*
+ * Perform the count for every halfword element using tcg_gen_clzi_i64.
+ * Since it counts leading zeros over a 64-bit length, we have to move
+ * the i-th halfword element to the highest 16 bits of tmp and or it
+ * with mask (so we get all ones in the lowest 48 bits), then perform
+ * tcg_gen_clzi_i64 and move its result to the appropriate halfword
+ * element of result.
+ */
+ tcg_gen_shli_i64(tmp, avr, 48);
+ tcg_gen_or_i64(tmp, tmp, mask);
+ tcg_gen_clzi_i64(result, tmp, 64);
+ for (j = 1; j < 3; j++) {
+ tcg_gen_shli_i64(tmp, avr, (3 - j) * 16);
+ tcg_gen_or_i64(tmp, tmp, mask);
+ tcg_gen_clzi_i64(tmp, tmp, 64);
+ tcg_gen_deposit_i64(result, result, tmp, j * 16, 16);
+ }
+ tcg_gen_or_i64(tmp, avr, mask);
+ tcg_gen_clzi_i64(tmp, tmp, 64);
+ tcg_gen_deposit_i64(result, result, tmp, 48, 16);
+ if (i == 0) {
+ /* Place result in high doubleword element of vD. */
+ set_avr64(VT, result, true);
+ } else {
+ /* Place result in low doubleword element of vD. */
+ set_avr64(VT, result, false);
+ }
+ }
+
+ tcg_temp_free_i64(avr);
+ tcg_temp_free_i64(result);
+ tcg_temp_free_i64(tmp);
+ tcg_temp_free_i64(mask);
+}
+
+/*
* vclzw VRT,VRB - Vector Count Leading Zeros Word
*
* Counting the number of leading zero bits of each word element in source
@@ -1305,8 +1423,8 @@ GEN_VAFORM_PAIRED(vmsumshm, vmsumshs, 20)
GEN_VAFORM_PAIRED(vsel, vperm, 21)
GEN_VAFORM_PAIRED(vmaddfp, vnmsubfp, 23)
-GEN_VXFORM_NOA(vclzb, 1, 28)
-GEN_VXFORM_NOA(vclzh, 1, 29)
+GEN_VXFORM_TRANS(vclzb, 1, 28)
+GEN_VXFORM_TRANS(vclzh, 1, 29)
GEN_VXFORM_TRANS(vclzw, 1, 30)
GEN_VXFORM_TRANS(vclzd, 1, 31)
GEN_VXFORM_NOA_2(vnegw, 1, 24, 6)
--
2.7.4