
Re: [Qemu-devel] [RFC] Adding multithreads to LDREX/STREX.


From: Mark Burton
Subject: Re: [Qemu-devel] [RFC] Adding multithreads to LDREX/STREX.
Date: Tue, 3 Mar 2015 18:22:51 +0100

> On 3 Mar 2015, at 18:09, Paolo Bonzini <address@hidden> wrote:
> 
> 
> 
> On 03/03/2015 17:47, Mark Burton wrote:
>> +inline void arm_exclusive_lock(void)
>> +{
>> +    if (!cpu_have_exclusive_lock) {
>> +        qemu_mutex_lock(&cpu_exclusive_lock);
>> +        cpu_have_exclusive_lock = true;
>> +    }
>> +}
>> +
>> +inline void arm_exclusive_unlock(void)
>> +{
>> +    if (cpu_have_exclusive_lock) {
>> +        cpu_have_exclusive_lock = false;
>> +        qemu_mutex_unlock(&cpu_exclusive_lock);
>> +    }
>> +}
>> +
> 
> This is almost but not quite a recursive mutex.  What about changing it
> like this:
> 
> - arm_exclusive_lock just takes the lock and sets the flag; no "if"
> 
> - arm_exclusive_unlock does the opposite, again no "if"
> 
> - raise_exception checks the flag and skips "arm_exclusive_lock()" if
> already set
> 

yes - that's better - though I would rather live without the "if" altogether.
There are only a couple of places where they are ‘needed’, and I should have
checked explicitly there - call it laziness ;-)
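
Concretely, something like this (an untested sketch, using the names from
the patch):

inline void arm_exclusive_lock(void)
{
    /* Unconditional: the caller must not already hold the lock. */
    qemu_mutex_lock(&cpu_exclusive_lock);
    cpu_have_exclusive_lock = true;
}

inline void arm_exclusive_unlock(void)
{
    cpu_have_exclusive_lock = false;
    qemu_mutex_unlock(&cpu_exclusive_lock);
}

and then the few callers that may already hold the lock (raise_exception,
say) check the flag themselves:

    if (!cpu_have_exclusive_lock) {
        arm_exclusive_lock();
    }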

> The only other question I have is this:
> 
>> +    gen_helper_atomic_check(success, cpu_env, addr);
>> +    tcg_gen_brcondi_i32(TCG_COND_EQ, success, 0, done_label);
> 
> Are you setting cpu_R[rd] correctly in this case?  That is, should you
> be jumping to fail_label instead?  That could cause a failure to be
> reported as a success.


I almost jumped on a plane and came and kissed you - almost...
but actually we set R[rd] to 1 above the branch and only reset it to 0
afterwards... so I think the functionality is correct :-(((
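
To spell out the ordering (a sketch of the generated flow, assuming the
early movi sits where I described, earlier in gen_store_exclusive()):

    tcg_gen_movi_i32(cpu_R[rd], 1);   /* default: report failure */

    gen_helper_atomic_check(success, cpu_env, addr);
    /* Check failed: jump straight to done_label with rd still 1,
       so the failure is reported without touching memory. */
    tcg_gen_brcondi_i32(TCG_COND_EQ, success, 0, done_label);

    /* ... value comparison and the store itself ... */

    tcg_gen_movi_i32(cpu_R[rd], 0);   /* store succeeded */
    tcg_gen_br(done_label);

    gen_set_label(fail_label);
    tcg_gen_movi_i32(cpu_R[rd], 1);   /* comparison failed */

    gen_set_label(done_label);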

thanks though - your help is much appreciated.

Cheers

Mark.



> 
> Paolo
> 
>> +    tcg_temp_free_i32(success);
>> +
>> +    /* Store should be OK, let's check the value */
>> +    tmp = load_reg(s, rt);
>> +    TCGv_i64 val64 = tcg_temp_new_i64();
>>    switch (size) {
>>    case 0:
>>        gen_aa32_ld8u(tmp, addr, get_mem_index(s));
>> @@ -7450,21 +7447,19 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>>        abort();
>>    }
>> 
>> -    val64 = tcg_temp_new_i64();
>>    if (size == 3) {
>>        TCGv_i32 tmp2 = tcg_temp_new_i32();
>>        TCGv_i32 tmp3 = tcg_temp_new_i32();
>> +
>>        tcg_gen_addi_i32(tmp2, addr, 4);
>>        gen_aa32_ld32u(tmp3, tmp2, get_mem_index(s));
>> -        tcg_temp_free_i32(tmp2);
>>        tcg_gen_concat_i32_i64(val64, tmp, tmp3);
>> -        tcg_temp_free_i32(tmp3);
>> +        tcg_temp_free_i32(tmp2);
>>    } else {
>> -        tcg_gen_extu_i32_i64(val64, tmp);
>> +      tcg_gen_extu_i32_i64(val64, tmp);
>>    }
>>    tcg_temp_free_i32(tmp);
>> -
>> -    tcg_gen_brcond_i64(TCG_COND_NE, val64, cpu_exclusive_val, fail_label);
>> +    tcg_gen_brcond_i64(TCG_COND_NE, cpu_exclusive_val, val64, fail_label);
>>    tcg_temp_free_i64(val64);
>> 
>>    tmp = load_reg(s, rt);
>> @@ -7489,12 +7484,16 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>>        gen_aa32_st32(tmp, addr, get_mem_index(s));
>>        tcg_temp_free_i32(tmp);
>>    }
>> +
>> +    gen_helper_atomic_release(cpu_env);
>>    tcg_gen_movi_i32(cpu_R[rd], 0);
>>    tcg_gen_br(done_label);
>> +
>>    gen_set_label(fail_label);
>> +    gen_helper_atomic_release(cpu_env);
>>    tcg_gen_movi_i32(cpu_R[rd], 1);
>> +
>>    gen_set_label(done_label);
>> -    tcg_gen_movi_i64(cpu_exclusive_addr, -1);

