Re: [Qemu-devel] tcg: reworking tb_invalidated_flag


From: Alex Bennée
Subject: Re: [Qemu-devel] tcg: reworking tb_invalidated_flag
Date: Thu, 31 Mar 2016 11:48:37 +0100
User-agent: mu4e 0.9.17; emacs 25.0.92.2

Sergey Fedorov <address@hidden> writes:

> Hi,
>
> This is a follow-up to the discussion in [1]. The main focus is to move
> towards thread-safe TB invalidation and translation buffer flushing.
> In addition, we can get cleaner, more readable and reliable code.
>
> First, I'd like to summarize how 'tb_invalidated_flag' is used.
> Basically, it is used to catch two events:
>  * some TB has been invalidated by tb_phys_invalidate();
>  * the whole translation buffer has been flushed by tb_flush().

I know we are system-focused at the moment, but does linux-user ever
flush groups of TBs, say when mappings change? Or does that trigger a
whole tb_flush?

> This is important because we need to be sure that:
>  * the last executed TB can be safely patched by tb_add_jump() to
>    directly call the next one in cpu_exec();
>  * the original TB is recorded in 'tb->orig_tb' for possible later
>    invalidation along with the temporarily generated TB in
>    cpu_exec_nocache().
>
> cpu_exec_nocache() is the simple case because it is not as hot a code
> path as cpu_exec(), and it is only used in system mode, which is not
> currently run from multiple threads (though it could be with MTTCG).
> Supposing it is safe to invalidate an already invalidated TB, it just
> needs to check whether tb_flush() has been called during tb_gen_code().
> This could be done by resetting 'tb_invalidated_flag' before calling
> tb_gen_code() and checking it afterwards, with 'tb_lock' held. To make
> sure this doesn't affect other code relying on the value of the flag,
> we could simply preserve it within the 'tb_lock' critical section.
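
Just to check I follow, the nocache path would then look roughly like
this (only a sketch; I'm assuming the flag stays in tcg_ctx.tb_ctx and
that tb_lock()/tb_unlock() can be taken here):

    tb_lock();
    /* preserve the flag for any other code that still relies on it */
    int saved_flag = tcg_ctx.tb_ctx.tb_invalidated_flag;
    tcg_ctx.tb_ctx.tb_invalidated_flag = 0;

    tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
                     max_cycles | CF_NOCACHE);

    /* if tb_gen_code() had to tb_flush(), orig_tb is gone and must not
     * be remembered for later invalidation */
    tb->orig_tb = tcg_ctx.tb_ctx.tb_invalidated_flag ? NULL : orig_tb;

    tcg_ctx.tb_ctx.tb_invalidated_flag |= saved_flag;
    tb_unlock();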
>
> The cpu_exec() case is a bit more subtle. Regarding tb_phys_invalidate(),
> it shouldn't be harmful if an invalidated TB gets patched, because it is
> not going to be executed anymore. It is only a matter of efficiency, and
> it doesn't seem to happen frequently anyway.
>
> As for catching tb_flush() in cpu_exec(), three approaches have been
> proposed.
>
> The first approach is to get rid of 'tb_invalidated_flag' and use
> 'tb_flush_count' instead. Capture 'tb_flush_count' inside the 'tb_lock'
> critical section of cpu_exec() and compare it on each iteration of the
> execution loop before trying to do tb_add_jump(). This would be simple
> and clear, but it would cost an extra load of the shared variable
> 'tb_flush_count' every time we go around the execution loop.
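
So roughly this (again only a sketch, field and function names
approximate):

    /* inside the tb_lock critical section where the next TB is found */
    tb_lock();
    tb = tb_find_fast(cpu);
    flush_count = tcg_ctx.tb_ctx.tb_flush_count;
    tb_unlock();

    /* on every loop iteration: the extra load of a shared variable */
    if (next_tb != 0 && flush_count == tcg_ctx.tb_ctx.tb_flush_count) {
        tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
                    next_tb & TB_EXIT_MASK, tb);
    }

That keeps the hot-path change to a single compare, but as you say it is
still a load from shared state on every iteration.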
>
> The second approach is to make 'tb_invalidated_flag' per-CPU. This
> would be conceptually similar to what we have, but would give us thread
> safety. With this approach, we need to be careful to correctly clear and
> set the flag.
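
Presumably something like this (sketch; the field name is just a
placeholder):

    /* new field in CPUState, e.g. bool tb_flushed */

    /* tb_flush() sets it for every vCPU ... */
    CPU_FOREACH(cpu) {
        cpu->tb_flushed = true;
    }

    /* ... and cpu_exec() checks and clears it (under tb_lock) before
     * patching the previously executed TB */
    if (!cpu->tb_flushed && next_tb != 0) {
        tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
                    next_tb & TB_EXIT_MASK, tb);
    }
    cpu->tb_flushed = false;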
>
> The third approach is to mark each individual TB as valid/invalid. This
> is what Emilio has in his MTTCG series [2]. Following this approach, we
> could have very clean code with no extra overhead on the hot path.
> However, it would require marking all TBs as invalid on tb_flush().
> Given that tb_flush() is rare, this shouldn't be a significant overhead.

I'm with Richard on this: it sounds like something that should be
quantified. That said, I'm sure there are mitigations, at the cost of
some complexity, that could help.

Are there times when it might help to eliminate whole groups of TBs at
once? Or is a page at a time about the limit?

> Also, there are several options for how to mark a TB valid/invalid:
> a dedicated flag could be introduced, or some invalid value of
> pc/cs_base/flags could be used.
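
For what it's worth, with a dedicated flag (again, names are just
placeholders) the whole thing would boil down to something like:

    /* new field in TranslationBlock, e.g. bool valid, set in
     * tb_gen_code() */

    /* tb_phys_invalidate() clears it for the one TB ... */
    tb->valid = false;

    /* ... and tb_flush() clears it for all of them -- the O(n) walk
     * that wants quantifying */
    for (i = 0; i < tcg_ctx.tb_ctx.nb_tbs; i++) {
        tcg_ctx.tb_ctx.tbs[i].valid = false;
    }

    /* hot path: no shared state, only the TB we already hold */
    if (prev_tb && prev_tb->valid) {
        tb_add_jump(prev_tb, exit_idx, tb);
    }

The poisoned pc/cs_base/flags variant would look the same on the hot
path, just without the extra field.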
>
> So the question is, what would be the most appropriate solution?
>
> [1] http://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg06180.html
> [2] http://lists.nongnu.org/archive/html/qemu-devel/2015-08/msg02582.html


--
Alex Bennée


