[Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread


From: Paolo Bonzini
Subject: [Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread
Date: Fri, 25 Feb 2011 20:33:56 +0100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Fedora/3.1.7-0.35.b3pre.fc14 Lightning/1.0b3pre Mnenhy/0.8.3 Thunderbird/3.1.7

On 02/23/2011 12:39 PM, Jan Kiszka wrote:
> You should try to trace the event flow in qemu, either via strace, via
> the built-in tracer (which likely requires a bit more tracepoints), or
> via a system-level tracer (ftrace / kernelshark).

The apparent problem is that 25% of the cycles are spent locking and
unlocking the mutex.  But in fact, the real problem is that 90% of the time
is spent doing something other than executing code.

QEMU exits _a lot_ due to the vm_clock timers.  The deadlines are rarely more
than a few ms ahead, and at 1 MIPS that leaves room for executing only a few
thousand instructions between context switches.  The iothread overhead
is what makes the situation so bad, because with it executing those
instructions takes a lot more time.
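
To make that arithmetic explicit, here is a back-of-the-envelope sketch (my
own illustration, not QEMU code; the 2 ms deadline is just an assumed figure):

/* Rough arithmetic only: how many guest instructions fit into one
 * execution window between vm_clock deadlines at ~1 MIPS. */
#include <stdio.h>

int main(void)
{
    const long long guest_ips   = 1000000;   /* ~1 MIPS with -icount */
    const long long deadline_ns = 2000000;   /* assumed ~2 ms vm_clock deadline */
    long long insns_per_window  = guest_ips * deadline_ns / 1000000000LL;

    /* ~2000 instructions, then QEMU leaves the CPU loop to service timers */
    printf("instructions per window: ~%lld\n", insns_per_window);
    return 0;
}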

We do so many (almost) useless passes through cpu_exec_all that even
micro-optimization helps, for example this:

--- a/cpus.c
+++ b/cpus.c
@@ -767,10 +767,6 @@ static void qemu_wait_io_event_common(CPUState *env)
 {
     CPUState *env;
 
-    while (all_cpu_threads_idle()) {
-        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
-    }
-
     qemu_mutex_unlock(&qemu_global_mutex);
 
     /*
@@ -1110,7 +1111,15 @@ bool cpu_exec_all(void)
         }
     }
     exit_request = 0;
+
+#ifdef CONFIG_IOTHREAD
+    while (all_cpu_threads_idle()) {
+       qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
+    }
+    return true;
+#else
     return !all_cpu_threads_idle();
+#endif
 }
 
 void set_numa_modes(void)

is enough to cut all_cpu_threads_idle from 9% to 4.5% (not unexpected: the
number of calls is halved).  But it shouldn't be that high anyway, so
I'm not proposing the patch formally.
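
For what it's worth, the loop being moved is just a timed wait on a condition
variable.  A minimal sketch of the pattern in plain pthreads (my own
illustration; QEMU wraps this in qemu_cond_timedwait(), and all_vcpus_idle()
below is a hypothetical stand-in for all_cpu_threads_idle()):

/* While every vCPU is idle, sleep up to 1000 ms on the halt condition
 * instead of spinning through the execution loop. */
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  halt_cond    = PTHREAD_COND_INITIALIZER;

/* hypothetical stand-in for all_cpu_threads_idle() */
static bool all_vcpus_idle(void)
{
    return false;   /* the real predicate checks every vCPU's halted state */
}

/* caller must hold global_mutex, as the TCG thread holds qemu_global_mutex */
static void wait_while_idle(void)
{
    struct timespec ts;

    while (all_vcpus_idle()) {
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += 1;                          /* ~1000 ms timeout */
        pthread_cond_timedwait(&halt_cond, &global_mutex, &ts);
    }
}

With the loop at the end of cpu_exec_all, the idle predicate is evaluated once
per pass instead of twice, which is where the halving of the call count comes
from.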

Additionally, the fact that execution is in lockstep 99.99% of the time means
you cannot really overlap any part of the I/O and VCPU threads.

I did find a couple of inaccuracies in my patches, though; fixing them
already cut 50% of the time.

> Did my patches contribute a bit to overhead reduction? They specifically
> target the costly vcpu/iothread switches in TCG mode (caused by TCG's
> excessive lock-holding times).

Yes, they cut 15%.

Paolo


