From: Richard Henderson
Subject: Re: [Qemu-devel] [RFC v2] translate-all: protect code_gen_buffer with RCU
Date: Sun, 24 Apr 2016 11:12:23 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.1

On 04/23/2016 08:27 PM, Emilio G. Cota wrote:
> [ Applies on top of bennee/mttcg/enable-mttcg-for-armv7-v1 after
> reverting "translate-all: introduces tb_flush_safe". A trivial
> conflict must be solved after applying. ]
>
> This is a first attempt at making tb_flush not have to stop all CPUs.
> There are issues as pointed out below, but this could be a good start.
>
> Context:
>    https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg04658.html
>    https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg06942.html

I will again say that I don't believe that wasting all of this memory is as good as using locks -- tb_flush doesn't happen *that* often.

> +static void map_static_code_gen_buffer(void *buf, size_t size)
> +{
> +    map_exec(buf, size);
> +    map_none(buf + size, qemu_real_host_page_size);
> +    qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
> +}

Nit: I know it's only startup, but there's no reason to make multiple map_exec or madvise calls. You can cover the entire buffer in one go, and then call map_none on the guard pages.
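
Something like this, say (untested, and I'm guessing at how the guard pages sit within your split):

static void map_static_code_gen_buffer(void)
{
    void *buf = static_code_gen_buffer;
    size_t size = sizeof(static_code_gen_buffer);
    size_t page = qemu_real_host_page_size;

    /* One map_exec and one madvise covering the whole buffer... */
    map_exec(buf, size);
    qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);

    /* ...then map_none just the guard page at the end of each half. */
    map_none(buf + size / 2 - page, page);
    map_none(buf + size - page, page);
}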

> +#ifdef USE_STATIC_CODE_GEN_BUFFER
> ...
> +#elif defined(_WIN32)
> ...
> +#else /* UNIX, dynamically-allocated code buffer */
> ...
> +#endif /* USE_STATIC_CODE_GEN_BUFFER */

I'm not keen on your dynamic allocation implementations. Why not split the one dynamic buffer the same way as the static buffer? We are talking about >= 256MB here, after all.
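
I.e. something along these lines (names invented, just to sketch the idea):

static void *code_gen_half[2];
static int code_gen_cur;

static void split_code_gen_buffer(void *buf, size_t size,
                                  size_t prologue_size)
{
    /* Keep the prologue out of both halves, so that it survives
       every flush and tcg_prologue_init only runs at startup.  */
    buf += prologue_size;
    size -= prologue_size;

    code_gen_half[0] = buf;
    code_gen_half[1] = buf + size / 2;
    code_gen_cur = 0;
}

Both the static and the dynamic paths then feed their single allocation through the same split.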

> +    tcg_prologue_init(&tcg_ctx);

We have some global variables in the tcg backends that are initialized by tcg_prologue_init. I don't think we should be calling it again without locks being involved.

Of course, you don't have to call it again if you split one buffer. Then you also get to share the same rcu implementation.
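
The flush path then reduces to flipping halves under rcu -- sketch only, I haven't tried it:

static void code_gen_buffer_flip(void)
{
    code_gen_cur ^= 1;
    tcg_ctx.code_gen_buffer = code_gen_half[code_gen_cur];
    tcg_ctx.code_gen_ptr = tcg_ctx.code_gen_buffer;

    /* Drain all readers that may still be executing out of the half
       we just retired; by the next flip it is safe to reuse.  */
    synchronize_rcu();
}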


r~


