From: Sergey Fedorov
Subject: Re: [Qemu-devel] [PATCH 2/2] cpu-exec: remove tb_lock from the hot-path
Date: Tue, 5 Jul 2016 19:04:25 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 05/07/16 19:00, Alex Bennée wrote:
> Sergey Fedorov <address@hidden> writes:
>
>> On 05/07/16 16:42, Paolo Bonzini wrote:
>>> On 05/07/2016 15:11, Alex Bennée wrote:
>>>> Paolo Bonzini <address@hidden> writes:
>>>>
>>>>> On 05/07/2016 13:14, Alex Bennée wrote:
>>>>>> /*
>>>>>>  * Patch the last TB with a jump to the current TB.
>>>>>>  *
>>>>>>  * Modification of the TB has to be protected with tb_lock which can
>>>>>>  * either be already held or taken here.
>>>>>>  */
>>>>>> static inline void maybe_patch_last_tb(CPUState *cpu,
>>>>>>                                        TranslationBlock *tb,
>>>>>>                                        TranslationBlock **last_tb,
>>>>>>                                        int tb_exit,
>>>>>>                                        bool locked)
>>>>>> {
>>>>>>     if (cpu->tb_flushed) {
>>>>>>         /* Ensure that no TB jump will be modified as the
>>>>>>          * translation buffer has been flushed.
>>>>>>          */
>>>>>>         *last_tb = NULL;
>>>>>>         cpu->tb_flushed = false;
>>>>>>     }
>>>>>> #ifndef CONFIG_USER_ONLY
>>>>>>     /* We don't take care of direct jumps when address mapping changes in
>>>>>>      * system emulation. So it's not safe to make a direct jump to a TB
>>>>>>      * spanning two pages because the mapping for the second page
>>>>>>      * can change.
>>>>>>      */
>>>>>>     if (tb->page_addr[1] != -1) {
>>>>>>         *last_tb = NULL;
>>>>>>     }
>>>>>> #endif
>>>>>>     /* See if we can patch the calling TB. */
>>>>>>     if (*last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>>>>>         if (!locked) {
>>>>>>             tb_lock();
>>>>>>         }
>>>>>>         tb_add_jump(*last_tb, tb_exit, tb);
>>>>>>         if (!locked) {
>>>>>>             tb_unlock();
>>>>>>         }
>>>>>>     }
>>>>>> }
>>>>> Why not add tb_lock_recursive() and tb_lock_reset()?
>>>> I thought we didn't like having recursive locking? I agree it would make
>>>> things a little neater though.
>>> I didn't like having recursive mutexes (because recursive mutexes
>>> encourage you to be sloppy).  Explicitly tagging some tb_lock()s as
>>> recursive is fine, though.  The explicit tag tells you to be careful.
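For illustration, a minimal sketch of the tagging idea being suggested: tb_lock_recursive() is hypothetical (it is not in the tree) and is assumed to take tb_lock only if the current thread does not already hold it, with tb_lock_reset() later dropping the lock if this thread holds it. This is not the patch's code.

/* Sketch of the recursive-tagged alternative: tb_lock_recursive() is
 * hypothetical and assumed to take tb_lock only if this thread does
 * not already hold it, so the "locked" parameter of the quoted
 * maybe_patch_last_tb() goes away.
 */
static inline void maybe_patch_last_tb(CPUState *cpu,
                                       TranslationBlock *tb,
                                       TranslationBlock **last_tb,
                                       int tb_exit)
{
    /* ... tb_flushed and two-page checks exactly as quoted above ... */
    if (*last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
        tb_lock_recursive();    /* no-op if tb_lock is already held */
        tb_add_jump(*last_tb, tb_exit, tb);
    }
}

The caller would then issue a single tb_lock_reset() after the patching step to drop the lock if this thread took it anywhere along the way.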
>> We could also introduce mmap_lock_reset() similar to tb_lock_reset().
>> Then we can remove tb_unlock() and mmap_unlock() from tb_find_slow(),
>> call tb_lock() and mmap_lock() right before tb_add_jump(), and then call
>> tb_lock_reset() and mmap_lock_reset() at the end of tb_find_fast(). This
>> would render tb_lock() pretty useless though, since it would be always
>> held under mmap_lock().
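Concretely, a rough sketch of the shape Sergey describes, assuming the lock calls tolerate already-held locks (as with the recursive tagging above) and using the proposed mmap_lock_reset(); signatures are simplified and this is not the patch's code.

/* Sketch only: tb_find_slow() would return with mmap_lock and tb_lock
 * still held after translating; the fast path (re)takes them just for
 * the patching step, assuming they tolerate being already held; and
 * both are dropped by the *_reset() helpers on the way out.
 * mmap_lock_reset() is the proposed helper and does not exist yet.
 */
static TranslationBlock *tb_find_fast(CPUState *cpu,
                                      TranslationBlock **last_tb,
                                      int tb_exit)
{
    TranslationBlock *tb;

    tb = lookup_tb_jmp_cache(cpu);       /* illustrative lock-free lookup */
    if (tb == NULL) {
        tb = tb_find_slow(cpu);          /* arguments elided; locks kept held */
    }
    if (*last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
        mmap_lock();                     /* assumed no-op if already held */
        tb_lock();
        tb_add_jump(*last_tb, tb_exit, tb);
    }
    tb_lock_reset();                     /* drop the locks this thread holds */
    mmap_lock_reset();
    return tb;
}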
> I'm about to post v2. I've put all this aggressive extension of the
> critical path into additional patches, since I'm not sure it is really
> worth it.
>
> The big win is doing the tb_jmp_cache and first tb_find_physical lookups
> out of the lock. Trying to avoid bouncing the tb_lock() when patching
> doesn't seem to make any difference in my micro benchmarks.
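For reference, a simplified sketch of that hot path: the tb_jmp_cache probe and the first tb_find_physical() lookup happen without tb_lock, and the lock is only taken when translation is actually needed. Helper usage follows the cpu-exec.c code of the time, but the details are illustrative rather than the exact patch.

/* Illustrative sketch (not the patch's exact diff): probe the per-CPU
 * tb_jmp_cache and the physical hash table without tb_lock, and only
 * take the lock when we actually have to translate.  mmap_lock
 * handling is elided for brevity.
 */
static TranslationBlock *tb_find_sketch(CPUState *cpu, target_ulong pc,
                                        target_ulong cs_base, uint32_t flags)
{
    TranslationBlock *tb;

    /* Lock-free: per-CPU direct-mapped cache of recent TBs. */
    tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
    if (tb && tb->pc == pc && tb->cs_base == cs_base && tb->flags == flags) {
        return tb;
    }

    /* Still lock-free: first lookup in the physical hash table. */
    tb = tb_find_physical(cpu, pc, cs_base, flags);
    if (!tb) {
        tb_lock();
        /* Retry under the lock in case another vCPU translated it. */
        tb = tb_find_physical(cpu, pc, cs_base, flags);
        if (!tb) {
            tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
        }
        tb_unlock();
    }
    cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;
    return tb;
}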

I still have a feeling that we almost don't need both tb_lock() and
mmap_lock(). Extending tb_lock() across tb_gen_code() and tb_add_jump()
would be required should we decide to merge the locks. :)

Thanks,
Sergey


