Re: [Qemu-devel] [RFC 00/10] MultiThread TCG.
From: Emilio G. Cota
Subject: Re: [Qemu-devel] [RFC 00/10] MultiThread TCG.
Date: Tue, 28 Apr 2015 13:49:14 -0400
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Apr 28, 2015 at 11:06:37 +0200, Paolo Bonzini wrote:
> On 27/04/2015 19:06, Emilio G. Cota wrote:
> > Note that I'm running with -smp 1. My guess is that the iothread
> > is starved, since patch 472f4003 ("Drop global lock during TCG code
> > execution") removes from the iothread the ability to kick CPU threads.
>
> In theory that shouldn't be necessary anymore. The CPU thread should
> only hold the global lock for very small periods of time, similar to KVM.
You're right.
I added printouts around qemu_global_mutex_lock/unlock,
and also around the cond_wait calls that take the
BQL. The vCPU goes quiet after a while:
[...]
softmmu_template.h:io_writel:387 UNLO tid 17633
qemu/cputlb.c:tlb_protect_code:196 LOCK tid 17633
cputlb.c:tlb_protect_code:199 UNLO tid 17633
cputlb.c:tlb_protect_code:196 LOCK tid 17633
cputlb.c:tlb_protect_code:199 UNLO tid 17633
cputlb.c:tlb_protect_code:196 LOCK tid 17633
cputlb.c:tlb_protect_code:199 UNLO tid 17633
softmmu_template.h:io_readl:160 LOCK tid 17633
softmmu_template.h:io_readl:165 UNLO tid 17633
main-loop.c:os_host_main_loop_wait:242 LOCK tid 17630
main-loop.c:os_host_main_loop_wait:234 UNLO tid 17630
... And at this point that last pair of LOCK/UNLO repeats indefinitely.
> Can you post a backtrace?
$ sudo gdb --pid=8919
(gdb) info threads
Id Target Id Frame
3 Thread 0x7ffff596b700 (LWP 16204) "qemu-system-arm" syscall () at
../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
2 Thread 0x7ffff0f69700 (LWP 16206) "qemu-system-arm" 0x00007ffff33179fe
in ?? ()
* 1 Thread 0x7ffff7fe4a80 (LWP 16203) "qemu-system-arm" 0x00007ffff5e9b1ef
in __GI_ppoll (fds=0x5555569b8f70, nfds=4,
timeout=<optimized out>, sigmask=0x0) at
../sysdeps/unix/sysv/linux/ppoll.c:56
(gdb) bt
#0 0x00007ffff5e9b1ef in __GI_ppoll (fds=0x5555569b8f70, nfds=4,
timeout=<optimized out>, sigmask=0x0)
at ../sysdeps/unix/sysv/linux/ppoll.c:56
#1 0x00005555559a9e26 in qemu_poll_ns (fds=0x5555569b8f70, nfds=4,
timeout=9689027) at qemu-timer.c:326
#2 0x00005555559a8abb in os_host_main_loop_wait (timeout=9689027) at
main-loop.c:239
#3 0x00005555559a8bef in main_loop_wait (nonblocking=0) at main-loop.c:494
#4 0x000055555578c8c5 in main_loop () at vl.c:1803
#5 0x0000555555794634 in main (argc=16, argv=0x7fffffffe828,
envp=0x7fffffffe8b0) at vl.c:4371
(gdb) thread 3
[Switching to thread 3 (Thread 0x7ffff596b700 (LWP 16204))]
#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
38 ../sysdeps/unix/sysv/linux/x86_64/syscall.S: No such file or directory.
(gdb) bt
#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x0000555555a3a061 in futex_wait (ev=0x555556392724 <rcu_call_ready_event>,
val=4294967295) at util/qemu-thread-posix.c:305
#2 0x0000555555a3a20b in qemu_event_wait (ev=0x555556392724
<rcu_call_ready_event>) at util/qemu-thread-posix.c:401
#3 0x0000555555a5011d in call_rcu_thread (opaque=0x0) at util/rcu.c:231
#4 0x00007ffff617b182 in start_thread (arg=0x7ffff596b700) at
pthread_create.c:312
#5 0x00007ffff5ea847d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) thread 2
[Switching to thread 2 (Thread 0x7ffff0f69700 (LWP 16206))]
#0 0x00007ffff33179fe in ?? ()
(gdb) bt
#0 0x00007ffff33179fe in ?? ()
#1 0x0000555555f0c200 in ?? ()
#2 0x00007fffd40029a0 in ?? ()
#3 0x00007fffd40029c0 in ?? ()
#4 0x13b33d1714a74c00 in ?? ()
#5 0x00007ffff0f685c0 in ?? ()
#6 0x00005555555f9da7 in tcg_out_reloc (s=<error reading variable: Cannot
access memory at address 0xffff8ab1>,
code_ptr=<error reading variable: Cannot access memory at address
0xffff8aa9>,
type=<error reading variable: Cannot access memory at address 0xffff8aa5>,
label_index=<error reading variable: Cannot access memory at address
0xffff8aa1>,
addend=<error reading variable: Cannot access memory at address
0xffff8a99>) at /local/home/cota/src/qemu/tcg/tcg.c:224
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb) q
So it seems that the vCPU thread never comes out of the execution loop
from which that last io_readl was performed.
Emilio