
From: Richard Henderson
Subject: Re: [Qemu-arm] [Qemu-devel] [PATCH v2 43/67] target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group
Date: Fri, 23 Feb 2018 13:15:58 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

On 02/23/2018 09:25 AM, Peter Maydell wrote:
> On 17 February 2018 at 18:22, Richard Henderson
> <address@hidden> wrote:
>> Signed-off-by: Richard Henderson <address@hidden>
>> ---
>>  target/arm/helper-sve.h    | 14 +++++++
>>  target/arm/helper.h        | 19 ++++++++++
>>  target/arm/translate-sve.c | 41 ++++++++++++++++++++
>>  target/arm/vec_helper.c    | 94 ++++++++++++++++++++++++++++++++++++++++++++++
>>  target/arm/Makefile.objs   |  2 +-
>>  target/arm/sve.decode      | 10 +++++
>>  6 files changed, 179 insertions(+), 1 deletion(-)
>>  create mode 100644 target/arm/vec_helper.c
>>
> 
>> +/* Floating-point trigonometric starting value.
>> + * See the ARM ARM pseudocode function FPTrigSMul.
>> + */
>> +static float16 float16_ftsmul(float16 op1, uint16_t op2, float_status *stat)
>> +{
>> +    float16 result = float16_mul(op1, op1, stat);
>> +    if (!float16_is_any_nan(result)) {
>> +        result = float16_set_sign(result, op2 & 1);
>> +    }
>> +    return result;
>> +}
>> +
>> +static float32 float32_ftsmul(float32 op1, uint32_t op2, float_status *stat)
>> +{
>> +    float32 result = float32_mul(op1, op1, stat);
>> +    if (!float32_is_any_nan(result)) {
>> +        result = float32_set_sign(result, op2 & 1);
>> +    }
>> +    return result;
>> +}
>> +
>> +static float64 float64_ftsmul(float64 op1, uint64_t op2, float_status *stat)
>> +{
>> +    float64 result = float64_mul(op1, op1, stat);
>> +    if (!float64_is_any_nan(result)) {
>> +        result = float64_set_sign(result, op2 & 1);
>> +    }
>> +    return result;
>> +}
>> +
>> +#define DO_3OP(NAME, FUNC, TYPE) \
>> +void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
>> +{                                                                          \
>> +    intptr_t i, oprsz = simd_oprsz(desc);                                  \
>> +    TYPE *d = vd, *n = vn, *m = vm;                                        \
>> +    for (i = 0; i < oprsz / sizeof(TYPE); i++) {                           \
>> +        d[i] = FUNC(n[i], m[i], stat);                                     \
>> +    }                                                                      \
>> +}
>> +
>> +DO_3OP(gvec_fadd_h, float16_add, float16)
>> +DO_3OP(gvec_fadd_s, float32_add, float32)
>> +DO_3OP(gvec_fadd_d, float64_add, float64)
>> +
>> +DO_3OP(gvec_fsub_h, float16_sub, float16)
>> +DO_3OP(gvec_fsub_s, float32_sub, float32)
>> +DO_3OP(gvec_fsub_d, float64_sub, float64)
>> +
>> +DO_3OP(gvec_fmul_h, float16_mul, float16)
>> +DO_3OP(gvec_fmul_s, float32_mul, float32)
>> +DO_3OP(gvec_fmul_d, float64_mul, float64)
>> +
>> +DO_3OP(gvec_ftsmul_h, float16_ftsmul, float16)
>> +DO_3OP(gvec_ftsmul_s, float32_ftsmul, float32)
>> +DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
>> +
>> +#ifdef TARGET_AARCH64
> 
> This seems a bit odd given SVE is AArch64-only anyway...

Ah right.

The thing to notice here is that these helpers have been placed so that they
can be shared with AA32 and AA64 AdvSIMD.  One call to one of these would
replace the 2-8 calls that we currently generate for such an operation.

I thought it better to plan ahead for that cleanup rather than move them later.

Here you see where AA64 differs from AA32 (and in particular where the scalar
operation is also conditionalized).


r~
