qemu-devel
Re: [Qemu-devel] [PATCH] mttcg: Set jmp_env to handle exit from tb_gen_code


From: Pranith Kumar
Subject: Re: [Qemu-devel] [PATCH] mttcg: Set jmp_env to handle exit from tb_gen_code
Date: Tue, 21 Feb 2017 11:17:57 -0500
User-agent: mu4e 0.9.18; emacs 25.1.1

Alex Bennée writes:

> Pranith Kumar <address@hidden> writes:
>
>> Alex Bennée writes:
>>
>>> Pranith Kumar <address@hidden> writes:
>>>
>>>> tb_gen_code() can exit execution using cpu_loop_exit() when it cannot
>>>> allocate new tb's. To handle this, we need to properly set the jmp_env
>>>> pointer ahead of calling tb_gen_code().
>>>>
>>>> CC: Alex Bennée <address@hidden>
>>>> CC: Richard Henderson <address@hidden>
>>>> Signed-off-by: Pranith Kumar <address@hidden>
>>>> ---
>>>>  cpu-exec.c | 23 +++++++++++------------
>>>>  1 file changed, 11 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>>> index 97d79612d9..4b70988b24 100644
>>>> --- a/cpu-exec.c
>>>> +++ b/cpu-exec.c
>>>> @@ -236,23 +236,22 @@ static void cpu_exec_step(CPUState *cpu)
>>>>
>>>>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>>>>      tb_lock();
>>>> -    tb = tb_gen_code(cpu, pc, cs_base, flags,
>>>> -                     1 | CF_NOCACHE | CF_IGNORE_ICOUNT);
>>>> -    tb->orig_tb = NULL;
>>>> -    tb_unlock();
>>>> -
>>>> -    cc->cpu_exec_enter(cpu);
>>>> -
>>>
>>> It occurs to me we are also diverging in our locking pattern from
>>> tb_find which takes mmap_lock first. This is a NOP for system emulation
>>> but needed for user-emulation (for which we can do cpu_exec_step but not
>>> cpu_exec_nocache).
>>
>> Right. So we have to take the mmap_lock() before calling tb_gen_code().
>> However, this lock is released in the error path before calling
>> cpu_loop_exit() if allocation of a new tb fails. The following is what I
>> have after merging with the previous EXCP_ATOMIC handling patch.
>>
>> diff --git a/cpu-exec.c b/cpu-exec.c
>> index a8e04bffbf..2bb3ba3672 100644
>> --- a/cpu-exec.c
>> +++ b/cpu-exec.c
>> @@ -228,6 +228,7 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
>>
>>  static void cpu_exec_step(CPUState *cpu)
>>  {
>> +    CPUClass *cc = CPU_GET_CLASS(cpu);
>>      CPUArchState *env = (CPUArchState *)cpu->env_ptr;
>>      TranslationBlock *tb;
>>      target_ulong cs_base, pc;
>> @@ -235,16 +236,24 @@ static void cpu_exec_step(CPUState *cpu)
>>
>>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>>      tb_lock();
>> -    tb = tb_gen_code(cpu, pc, cs_base, flags,
>> -                     1 | CF_NOCACHE | CF_IGNORE_ICOUNT);
>> -    tb->orig_tb = NULL;
>> -    tb_unlock();
>> -    /* execute the generated code */
>> -    trace_exec_tb_nocache(tb, pc);
>> -    cpu_tb_exec(cpu, tb);
>> -    tb_lock();
>> -    tb_phys_invalidate(tb, -1);
>> -    tb_free(tb);
>> +    if (sigsetjmp(cpu->jmp_env, 0) == 0) {
>> +        mmap_lock();
>
> That gets the locking order the wrong way around - I'm wary of that.
>

But we are in exclusive execution now, and we release all the locks taken
before coming out of it. I think that should be OK. If your concern is that
this is not the normal locking pattern, then I agree; otherwise it should be
fine.

-- 
Pranith
