qemu-devel
Re: [Qemu-devel] [RFC PATCH V6 06/18] tcg: remove tcg_halt_cond global variable.


From: Frederic Konrad
Subject: Re: [Qemu-devel] [RFC PATCH V6 06/18] tcg: remove tcg_halt_cond global variable.
Date: Tue, 07 Jul 2015 15:17:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 07/07/2015 14:27, Alex Bennée wrote:
Frederic Konrad <address@hidden> writes:

On 26/06/2015 17:02, Paolo Bonzini wrote:
On 26/06/2015 16:47, address@hidden wrote:
From: KONRAD Frederic <address@hidden>

This removes the tcg_halt_cond global variable.
We need one QemuCond per virtual CPU for multithreaded TCG.

Signed-off-by: KONRAD Frederic <address@hidden>
<snip>
@@ -1068,7 +1065,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                   qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
               }
           }
-        qemu_tcg_wait_io_event();
+        qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
Does this work (for non-multithreaded TCG) if tcg_thread_fn is waiting
on the "wrong" condition variable?  For example if all CPUs are idle and
the second CPU wakes up, qemu_tcg_wait_io_event won't be kicked out of
the wait.

I think you need to have a CPUThread struct like this:

     struct CPUThread {
         QemuThread thread;
         QemuCond halt_cond;
     };

and in CPUState have a CPUThread * field instead of the thread and
halt_cond fields.

Then single-threaded TCG can point all CPUStates to the same instance of
the struct, while multi-threaded TCG can point each CPUState to a
different struct.

Paolo
Hmm, probably not; we didn't pay attention to keeping the non-MTTCG case
working (which is probably not good).
<snip>

You may want to consider pushing a branch up to a GitHub mirror and
enabling Travis CI on the repo. That way you'll at least know how broken
the rest of the tree is.

I appreciate we are still at the RFC stage here, but it will probably pay
off in the long run to try to avoid breaking the rest of the tree ;-)

Good point :)

Fred


