From: Stefan Priebe
Subject: Re: [Qemu-devel] kernel 4.4.2: kvm_irq_delivery_to_api / rwsem_down_read_failed
Date: Mon, 22 Feb 2016 20:35:41 +0100
User-agent: Mozilla/5.0 (Windows NT 10.0; rv:38.0) Gecko/20100101 Thunderbird/38.5.0
On 22.02.2016 at 18:36, Paolo Bonzini wrote:
> On 20/02/2016 11:44, Stefan Priebe wrote:
>> Hi,
>>
>> while testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual
>> machines, I got the traces below and a load of 500 on the system.
>> I was only able to recover via sysrq-trigger.
>
> It seems like something is happening at the VM level. A task took the
> mm semaphore and hung everyone else. Difficult to debug without a core
> (and without knowing who held the semaphore). Sorry.
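What Paolo describes (one writer on a reader-writer semaphore stalling every later reader) is easy to reproduce in userspace with POSIX rwlocks. A minimal sketch, illustrative only, since the kernel's rw_semaphore differs from pthread rwlocks in detail:

/* Userspace illustration of one writer starving all readers.
 * Build with: gcc rwsem_demo.c -o rwsem_demo -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t sem = PTHREAD_RWLOCK_INITIALIZER;

static void *reader(void *arg)
{
    long id = (long)arg;

    /* Blocks here while the writer holds the lock: the userspace
     * analogue of a task sleeping in rwsem_down_read_failed(). */
    pthread_rwlock_rdlock(&sem);
    printf("reader %ld finally got the lock\n", id);
    pthread_rwlock_unlock(&sem);
    return NULL;
}

int main(void)
{
    pthread_t readers[3];

    pthread_rwlock_wrlock(&sem);          /* the stuck "writer" */
    for (long i = 0; i < 3; i++)
        pthread_create(&readers[i], NULL, reader, (void *)i);

    sleep(2);                             /* all readers are blocked now */
    pthread_rwlock_unlock(&sem);          /* release -> everyone recovers */

    for (int i = 0; i < 3; i++)
        pthread_join(readers[i], NULL);
    return 0;
}

Until the writer unlocks, every reader sleeps in uninterruptible wait; in the kernel those sleepers count toward the load average and eventually trip the hung-task warnings quoted below.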
OK, thank you anyway. Is there anything I can do if this happens again?

Stefan
> Paolo

>> All traces:
>>
>> INFO: task pvedaemon worke:7470 blocked for more than 120 seconds.
>>       Not tainted 4.4.2+1-ph #1
>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> pvedaemon worke D ffff88239c367ca0     0  7470   7468 0x00080000
>>  ffff88239c367ca0 ffff8840a6232500 ffff8823ed83a500 ffff88239c367c90
>>  ffff88239c368000 ffff8845f5f070e8 ffff8845f5f07100 0000000000000000
>>  00007ffc73b48e58 ffff88239c367cc0 ffffffffb66a4d89 ffff88239c367cf0
>> Call Trace:
>>  [<ffffffffb66a4d89>] schedule+0x39/0x80
>>  [<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
>>  [<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
>>  [<ffffffffb66a6af7>] ? down_read+0x17/0x20
>>  [<ffffffffb617b99e>] __access_remote_vm+0x3e/0x1c0
>>  [<ffffffffb63cb594>] ? call_rwsem_down_read_failed+0x14/0x30
>>  [<ffffffffb6181d5f>] access_remote_vm+0x1f/0x30
>>  [<ffffffffb623212e>] proc_pid_cmdline_read+0x16e/0x4f0
>>  [<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
>>  [<ffffffffb61c8348>] __vfs_read+0x18/0x40
>>  [<ffffffffb61c94fe>] vfs_read+0x8e/0x140
>>  [<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
>>  [<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
>>
>> INFO: task pvestatd:7633 blocked for more than 120 seconds.
>>       Not tainted 4.4.2+1-ph #1
>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> pvestatd        D ffff88239f16fd40     0  7633      1 0x00080000
>>  ffff88239f16fd40 ffff8824e76a8000 ffff8823e5fc2500 ffff8823e5fc2500
>>  ffff88239f170000 ffff8845f5f070e8 ffff8845f5f07100 ffff8845f5f07080
>>  000000000341bf10 ffff88239f16fd60 ffffffffb66a4d89 024000d000000058
>> Call Trace:
>>  [<ffffffffb66a4d89>] schedule+0x39/0x80
>>  [<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
>>  [<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
>>  [<ffffffffb66a6af7>] ? down_read+0x17/0x20
>>  [<ffffffffb623206c>] proc_pid_cmdline_read+0xac/0x4f0
>>  [<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
>>  [<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
>>  [<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
>>  [<ffffffffb61c8348>] __vfs_read+0x18/0x40
>>  [<ffffffffb61c94fe>] vfs_read+0x8e/0x140
>>  [<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
>>  [<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
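Both traces are stuck on the same lock: in 4.4, reading /proc/<pid>/cmdline ends up in access_remote_vm(), and __access_remote_vm() (mm/memory.c) takes the target's mm->mmap_sem for reading. A heavily condensed sketch of that path, with error handling, reference counting, and the actual copy loop elided; this is not the verbatim source:

/* Condensed from fs/proc/base.c and mm/memory.c (Linux 4.4), sketch only. */
static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
                                     size_t count, loff_t *pos)
{
        struct mm_struct *mm = get_task_mm(task);   /* target task's mm */

        /* The read ends up in __access_remote_vm(), which does: */
        down_read(&mm->mmap_sem);   /* sleeps in rwsem_down_read_failed()
                                     * whenever a writer holds mmap_sem */
        /* ... copy the command line out of the target's memory ... */
        up_read(&mm->mmap_sem);

        return count;
}

So any task that takes a guest process's mmap_sem for writing and never releases it will pile up every pvedaemon/pvestatd cmdline reader behind it, which would explain the load of 500.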