Re: [Qemu-devel] tcg: reworking tb_invalidated_flag


From: Richard Henderson
Subject: Re: [Qemu-devel] tcg: reworking tb_invalidated_flag
Date: Wed, 30 Mar 2016 12:08:32 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

On 03/30/2016 10:08 AM, Sergey Fedorov wrote:
> As for catching tb_flush() in cpu_exec(), three approaches have been
> proposed.
> 
> The first approach is to get rid of 'tb_invalidated_flag' and use
> 'tb_flush_count'. Capture 'tb_flush_count' inside 'tb_lock' critical
> section of cpu_exec() and compare it on each execution loop iteration
> before trying to do tb_add_jump(). This would be simple and clear but it
> would cost an extra load from the shared variable 'tb_flush_count' on
> each iteration of the execution loop.
> 
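For concreteness, a rough sketch of this first approach (the loop shape
and stub names below are illustrative, not the actual cpu_exec() code):

#include <stdatomic.h>
#include <stddef.h>

typedef struct TranslationBlock TranslationBlock;

static atomic_uint tb_flush_count;   /* incremented by tb_flush() */

/* stub: patch 'from' so it jumps directly into 'to' */
static void tb_add_jump(TranslationBlock *from, TranslationBlock *to)
{
    (void)from; (void)to;
}

static void cpu_exec_sketch(TranslationBlock *(*tb_find)(void))
{
    /* captured once, inside the tb_lock critical section */
    unsigned captured = atomic_load(&tb_flush_count);
    TranslationBlock *last_tb = NULL;

    for (;;) {
        TranslationBlock *tb = tb_find();
        if (!tb) {
            break;
        }
        /* the extra per-iteration load from the shared counter: */
        if (last_tb && atomic_load(&tb_flush_count) == captured) {
            tb_add_jump(last_tb, tb);   /* no flush since capture */
        }
        /* ... execute tb ... */
        last_tb = tb;
    }
}
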
> The second approach is to make 'tb_invalidated_flag' per-CPU. This
> would be conceptually similar to what we have, but would give us thread
> safety. With this approach, we need to be careful to correctly clear and
> set the flag.
> 
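A sketch of the second approach; 'tb_flushed' is a made-up per-CPU
field name, not the actual one:

#include <stdbool.h>

typedef struct CPUState {
    bool tb_flushed;   /* hypothetical per-CPU flag */
    /* ... */
} CPUState;

/* tb_flush() side: mark the flush pending for every CPU */
static void flush_notify_all(CPUState *cpus, int n_cpus)
{
    for (int i = 0; i < n_cpus; i++) {
        cpus[i].tb_flushed = true;
    }
}

/* cpu_exec() side, run only by the owning CPU */
static bool flush_check_and_clear(CPUState *cpu)
{
    if (cpu->tb_flushed) {
        cpu->tb_flushed = false;   /* clear before chaining again */
        return true;               /* caller must forget its last_tb */
    }
    return false;
}
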
> The third approach is to mark each individual TB as valid/invalid. This
> is what Emilio has in his MTTCG series [2]. Following this approach, we
> could have very clean code with no extra overhead on the hot path.
> However, it would require marking all TBs as invalid on tb_flush().
> Given that tb_flush() is rare, it shouldn't be a significant overhead.
> Also, there are several options for how to mark a TB valid/invalid:
> a dedicated flag could be introduced, or some invalid value of
> pc/cs_base/flags could be used.
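
A sketch of the third approach with a dedicated flag ('valid' is made
up; [2] may well encode invalidity differently, e.g. via an impossible
pc/cs_base value):

#include <stdbool.h>
#include <stddef.h>

typedef struct TranslationBlock {
    bool valid;   /* hypothetical per-TB marker */
    /* pc, cs_base, flags, ... */
} TranslationBlock;

/* hot path: deciding whether to chain costs one field read */
static bool tb_may_chain(const TranslationBlock *tb)
{
    return tb && tb->valid;
}

/* tb_flush() side: one store per live TB, i.e. O(nb_tbs) */
static void invalidate_all_tbs(TranslationBlock *tbs, size_t nb_tbs)
{
    for (size_t i = 0; i < nb_tbs; i++) {
        tbs[i].valid = false;
    }
}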

I'm not really fond of this third option.  Yes, tb_flush is rare on some
targets, but on those with software-managed TLBs flushes are much more
common.  See e.g. mips and sparc.

Even when tb_flush is rare, there can be 500k-1M TBs live when the flush does
happen.  I simply cannot imagine that individually touching 1M variables
performs as well as setting one global variable, or taking a global lock.


r~


