[Qemu-block] [Questions] NBD issue or CoMutex->holder issue?


From: Changlong Xie
Subject: [Qemu-block] [Questions] NBD issue or CoMutex->holder issue?
Date: Tue, 11 Oct 2016 18:35:22 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

Hi all

Recently, while testing COLO, I encountered some problems. Here is the backtrace:

qemu-system-x86_64: util/qemu-coroutine-lock.c:148: qemu_co_mutex_unlock: Assertion `mutex->holder == self' failed.
[New Thread 0x7fff0e7fc700 (LWP 20936)]

Program received signal SIGABRT, Aborted.
0x00007ffff5cac5d7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glib2-2.40.0-4.el7.x86_64 glibc-2.17-78.el7.x86_64 gmp-6.0.0-11.el7.x86_64 gnutls-3.3.8-12.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.12.2-14.el7.x86_64 libcom_err-1.42.9-7.el7.x86_64 libffi-3.0.13-11.el7.x86_64 libgcc-4.8.3-9.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 libstdc++-4.8.3-9.el7.x86_64 libtasn1-3.8-2.el7.x86_64 nettle-2.7.1-4.el7.x86_64 openssl-libs-1.0.1e-42.el7.x86_64 p11-kit-0.20.7-3.el7.x86_64 pcre-8.32-14.el7.x86_64 pixman-0.32.4-3.el7.x86_64 trousers-0.3.11.2-3.el7.x86_64 xz-libs-5.1.2-9alpha.el7.x86_64 zlib-1.2.7-13.el7.x86_64
(gdb) bt
#0  0x00007ffff5cac5d7 in raise () from /lib64/libc.so.6
#1  0x00007ffff5cadcc8 in abort () from /lib64/libc.so.6
#2  0x00007ffff5ca5546 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007ffff5ca55f2 in __assert_fail () from /lib64/libc.so.6
#4  0x0000555555bdb096 in qemu_co_mutex_unlock (mutex=0x555556899490) at util/qemu-coroutine-lock.c:148
#5  0x0000555555b5663f in nbd_coroutine_end (s=0x555556899450, request=0x7fffea5ff940) at block/nbd-client.c:227
#6  0x0000555555b56931 in nbd_client_co_pwritev (bs=0x555556890800, offset=4820434944, bytes=4096, qiov=0x55555688db80, flags=0) at block/nbd-client.c:288
#7  0x0000555555b4f8c7 in bdrv_driver_pwritev (bs=0x555556890800, offset=4820434944, bytes=4096, qiov=0x55555688db80, flags=0) at block/io.c:843
#8  0x0000555555b50e23 in bdrv_aligned_pwritev (bs=0x555556890800, req=0x7fffea5ffb70, offset=4820434944, bytes=4096, align=1, qiov=0x55555688db80, flags=0)
    at block/io.c:1322
#9  0x0000555555b51ac4 in bdrv_co_pwritev (child=0x55555698e020, offset=4820434944, bytes=4096, qiov=0x55555688db80, flags=0) at block/io.c:1571
#10 0x0000555555b51b8b in bdrv_co_do_writev (child=0x55555698e020, sector_num=9414912, nb_sectors=8, qiov=0x55555688db80, flags=0) at block/io.c:1595
#11 0x0000555555b51bda in bdrv_co_writev (child=0x55555698e020, sector_num=9414912, nb_sectors=8, qiov=0x55555688db80) at block/io.c:1604
#12 0x0000555555b60c61 in replication_co_writev (bs=0x555556d76b30, sector_num=9414912, remaining_sectors=8, qiov=0x55555688db80) at block/replication.c:250
#13 0x0000555555b4fa3a in bdrv_driver_pwritev (bs=0x555556d76b30, offset=4820434944, bytes=4096, qiov=0x55555688db80, flags=0) at block/io.c:862
#14 0x0000555555b50e23 in bdrv_aligned_pwritev (bs=0x555556d76b30, req=0x7fffea5ffef0, offset=4820434944, bytes=4096, align=512, qiov=0x55555688db80, flags=0)
    at block/io.c:1322
#15 0x0000555555b51ac4 in bdrv_co_pwritev (child=0x555556a5e160, offset=4820434944, bytes=4096, qiov=0x55555688db80, flags=0) at block/io.c:1571
#16 0x0000555555b52ae5 in bdrv_co_do_rw (opaque=0x5555575e7860) at block/io.c:2137
#17 0x0000555555bdb6bb in coroutine_trampoline (i0=1470076720, i1=21845) at util/coroutine-ucontext.c:79
#18 0x00007ffff5cbe0f0 in ?? () from /lib64/libc.so.6
#19 0x00007fffffffd320 in ?? ()

And I found that you can easily reproduce it with NBD enabled.

In the NBD client, if the number of requests is larger than MAX_NBD_REQUESTS (16), the remaining requests are queued on free_sema->queue. When the NBD client receives a reply, it unlocks free_sema and then pops the head of free_sema->queue, so free_sema->holder is set to the relevant coroutine.
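For reference, the request accounting in block/nbd-client.c looks roughly like the sketch below. This is a condensed sketch from my reading of the code, not a verbatim copy; the struct, field and parameter names are approximations of the real ones.

/* Condensed sketch of the "poor man's semaphore" in block/nbd-client.c;
 * names and details approximated, see the real file for the exact code. */
static void nbd_coroutine_start(NBDClientSession *s, NBDRequest *request)
{
    /* Once MAX_NBD_REQUESTS requests are in flight, further coroutines
     * block here and are parked on free_sema->queue. */
    if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
        qemu_co_mutex_lock(&s->free_sema);  /* sets free_sema->holder */
    }
    s->in_flight++;
    /* ... pick a free handle and remember the requesting coroutine ... */
}

static void nbd_coroutine_end(NBDClientSession *s, NBDRequest *request)
{
    /* ... clear the per-request coroutine slot ... */
    if (s->in_flight-- == MAX_NBD_REQUESTS) {
        /* May run in a coroutine that never locked free_sema itself. */
        qemu_co_mutex_unlock(&s->free_sema);
    }
}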

For example, suppose there are N NBD write requests (N=26 and MAX_NBD_REQUESTS=16), so nbd_client_co_pwritev is invoked 26 times.

time      request No   Actions
1         1           in_flight=1, Coroutine=C1
2         2           in_flight=2, Coroutine=C2
...       ...
15        15          in_flight=15, Coroutine=C15
16        16          in_flight=16, Coroutine=C16, free_sema->holder=C16, mutex->locked=true
17        17          in_flight=16, Coroutine=C17, queue C17 into free_sema->queue
18        18          in_flight=16, Coroutine=C18, queue C18 into free_sema->queue
...       ...
26        N           in_flight=16, Coroutine=C26, queue C26 into free_sema->queue

Once the NBD client receives the reply for request No. 16, we re-enter coroutine C16. That is fine, because C16 equals free_sema->holder.
time     request No   Actions
27       16           in_flight=15, Coroutine=C16, free_sema->holder=C16, mutex->locked=false

Then nbd_coroutine_end invokes qemu_co_mutex_unlock, which pops the head of free_sema->queue and enters C17. Moreover, free_sema->holder is now C17.
time     request No   Actions
28       17           in_flight=16, Coroutine=C17, free_sema->holder=C17, mutex->locked=true

In the above scenario, we have only received the reply for request No. 16. So as time goes on, the NBD client will most likely receive replies for requests 1 to 15 rather than for request 17, which owns C17. In that case, we hit the "`mutex->holder == self' failed" assertion via nbd_coroutine_end.
For example, if the NBD client receives the reply for request No. 15:
time     request No      Actions
29       15 (most cases)  in_flight=15, Coroutine=C15, free_sema->holder=C17, mutex->locked=false

qemu-system-x86_64: util/qemu-coroutine-lock.c:148: qemu_co_mutex_unlock: Assertion `mutex->holder == self' failed.
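To make the failure mechanism concrete, qemu_co_mutex_lock/unlock with holder tracking behave roughly as in the following simplified sketch (tracing and some assertions omitted; not a verbatim copy of util/qemu-coroutine-lock.c):

/* Simplified sketch of CoMutex with holder tracking. */
void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
{
    Coroutine *self = qemu_coroutine_self();

    while (mutex->locked) {
        qemu_co_queue_wait(&mutex->queue);  /* C17..C26 park here */
    }

    mutex->locked = true;
    mutex->holder = self;                   /* e.g. becomes C17 at time 28 */
}

void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
{
    Coroutine *self = qemu_coroutine_self();

    assert(mutex->locked);
    assert(mutex->holder == self);          /* fires when C15 unlocks while
                                             * the holder is already C17 */
    mutex->locked = false;
    mutex->holder = NULL;
    qemu_co_queue_next(&mutex->queue);      /* wake the next queued waiter */
}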

This was introduced by Kevin's patch:
commit 0e438cdc932a785de72166af4641aafa103a6670
Author: Kevin Wolf <address@hidden>
Date:   Thu Aug 11 17:45:06 2016 +0200

    coroutine: Let CoMutex remember who holds it

    In cases of deadlocks, knowing who holds a given CoMutex is really
    helpful for debugging. Keeping the information around doesn't cost much
    and allows us to add another assertion to keep the code correct, so
    let's just add it.

    Signed-off-by: Kevin Wolf <address@hidden>
    Reviewed-by: Paolo Bonzini <address@hidden>
    Reviewed-by: Stefan Hajnoczi <address@hidden>

Any ideas? Is this an NBD bug, or should we revert commit 0e438cdc?

Thanks
        -Xie




