From: Josh Durgin
Subject: Re: [Qemu-devel] [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Bug 1207686]
Date: Sat, 10 Aug 2013 00:30:23 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130623 Thunderbird/17.0.7

On 08/09/2013 08:03 AM, Stefan Hajnoczi wrote:
On Fri, Aug 09, 2013 at 03:05:22PM +0100, Andrei Mikhailovsky wrote:
I can confirm that I am having similar issues with Ubuntu VM guests running fio
with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally I see hung tasks,
occasionally the guest VM stops responding without leaving anything in the logs,
and sometimes I see a kernel panic on the console. I typically set the fio
runtime to 60 minutes, and the guest tends to stop responding after about 10-30
minutes.

I am on Ubuntu 12.04 with the 3.5 backport kernel, using Ceph 0.61.7 with QEMU
1.5.0 and libvirt 1.0.2.
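
(Editorial note: each of those fio jobs is roughly equivalent to the minimal
sketch below: 4k O_DIRECT writes at queue depth 16 via Linux AIO. The target
device path, region size, and iteration count are placeholders, not values from
the report.)

/* Rough single-job equivalent of the fio run above: 4k O_DIRECT writes at
 * queue depth 16 via Linux AIO.  Target path and run length are placeholders.
 * Build with: gcc -O2 -o hammer hammer.c -laio */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QD 16                              /* iodepth=16 */
#define BS 4096                            /* bs=4k */
#define DEV_SIZE (1024LL * 1024 * 1024)    /* placeholder: 1 GiB of offsets */

int main(void)
{
    int fd = open("/dev/vdb", O_WRONLY | O_DIRECT);   /* placeholder target */
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(QD, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    struct iocb iocbs[QD], *iops[QD];
    void *bufs[QD];
    for (int i = 0; i < QD; i++) {
        if (posix_memalign(&bufs[i], 4096, BS)) return 1;
        memset(bufs[i], 0xab, BS);
    }

    long long off = 0;
    for (long iter = 0; iter < 100000; iter++) {       /* fio ran for ~60 min */
        for (int i = 0; i < QD; i++) {
            io_prep_pwrite(&iocbs[i], fd, bufs[i], BS, off);
            off = (off + BS) % DEV_SIZE;    /* stay within the placeholder region */
            iops[i] = &iocbs[i];
        }
        if (io_submit(ctx, QD, iops) != QD) {
            fprintf(stderr, "io_submit failed\n");
            break;
        }

        struct io_event ev[QD];
        /* when the device wedges, this is where the guest thread hangs */
        if (io_getevents(ctx, QD, QD, ev, NULL) != QD)
            break;
    }

    io_destroy(ctx);
    close(fd);
    return 0;
}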

Oliver's logs show one aio_flush() that never completes, which means it's an
issue with aio_flush() in librados when rbd caching isn't used.

Mike's log is from a qemu build without aio_flush(), with caching turned on,
and it shows all flushes completing quickly, so that's a separate bug.
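
(Editorial note: as a rough standalone illustration of that flush path, not the
actual QEMU rbd driver, a probe against librbd could look like the sketch below.
The pool and image names are placeholders and the 30-second timeout is
arbitrary; the point is that a lost completion shows up as a timeout rather than
an indefinite wait.)

/* Minimal sketch: issue one rbd_aio_flush() and report if it never completes.
 * Pool/image names are placeholders; link with -lrados -lrbd. */
#include <rados/librados.h>
#include <rbd/librbd.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;

    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||     /* default ceph.conf */
        rados_connect(cluster) < 0)
        return 1;
    if (rados_ioctx_create(cluster, "rbd", &ioctx) < 0)        /* placeholder pool */
        return 1;
    if (rbd_open(ioctx, "test-image", &image, NULL) < 0)       /* placeholder image */
        return 1;

    rbd_completion_t comp;
    if (rbd_aio_create_completion(NULL, NULL, &comp) < 0)
        return 1;
    if (rbd_aio_flush(image, comp) < 0)        /* the flush path discussed above */
        return 1;

    /* Poll instead of rbd_aio_wait_for_complete() so a lost completion shows
     * up as a timeout rather than an indefinite hang. */
    for (int waited = 0; !rbd_aio_is_complete(comp); waited++) {
        if (waited >= 30) {
            fprintf(stderr, "aio_flush still pending after 30s\n");
            return 1;
        }
        sleep(1);
    }
    printf("aio_flush returned %ld\n", (long)rbd_aio_get_return_value(comp));

    rbd_aio_release(comp);
    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}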

Josh,
In addition to the Ceph logs, you can also use QEMU tracing with the
following events enabled:
virtio_blk_handle_write
virtio_blk_handle_read
virtio_blk_rw_complete

See docs/tracing.txt for details on usage.

Inspecting the trace output will let you observe I/O request submission and
completion from the virtio-blk device's perspective.  You'll be able to see
whether some requests are never completed.
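
(Editorial note: a minimal sketch of that kind of inspection. It assumes the
trace has been pretty-printed, e.g. with scripts/simpletrace.py, into one line
per event containing the event name and a req=<pointer> field; the exact field
layout is an assumption, so the parsing may need adjusting. It pairs
submissions with completions and reports any request that never completed.)

/* Minimal sketch: scan pretty-printed trace output on stdin and report
 * virtio-blk requests that were submitted but never completed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_INFLIGHT 1024

int main(void)
{
    unsigned long long inflight[MAX_INFLIGHT];
    int n = 0, peak = 0;
    char line[512];

    while (fgets(line, sizeof(line), stdin)) {
        char *p = strstr(line, "req");         /* locate the request pointer */
        if (!p)
            continue;
        p += 3;
        while (*p == '=' || *p == ' ')
            p++;
        unsigned long long req = strtoull(p, NULL, 16);

        if (strstr(line, "virtio_blk_handle_read") ||
            strstr(line, "virtio_blk_handle_write")) {
            if (n < MAX_INFLIGHT)
                inflight[n++] = req;           /* request submitted */
            if (n > peak)
                peak = n;
        } else if (strstr(line, "virtio_blk_rw_complete")) {
            for (int i = 0; i < n; i++) {
                if (inflight[i] == req) {
                    inflight[i] = inflight[--n];   /* request completed */
                    break;
                }
            }
        }
    }

    printf("peak in-flight requests: %d\n", peak);
    for (int i = 0; i < n; i++)
        printf("never completed: req 0x%llx\n", inflight[i]);
    return 0;
}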

Thanks for the info. That may be the best way to check what's happening
when caching is enabled. Mike, could you recompile qemu with tracing
enabled and get a trace of the hang you were seeing, in addition to
the Ceph logs?

This bug seems like a corner case or race condition since most requests
seem to complete just fine.  The problem is that eventually the
virtio-blk device becomes unusable when it runs out of descriptors (it
has 128).  And before that limit is reached the guest may become
unusable due to the hung I/O requests.

In Oliver's case it seems only one request from an important kernel thread
hung, but it's good to be aware of the descriptor limit.
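
(Editorial note: a back-of-the-envelope illustration of that limit. How many
stuck requests exhaust a 128-entry queue depends on how many descriptors each
request consumes, which in turn depends on the guest's segment layout and on
whether indirect descriptors are negotiated; the per-request counts below are
assumptions for the simple single-segment 4k case.)

/* Hypothetical sanity check: how many stuck requests can a 128-entry virtio
 * queue absorb before the guest can no longer submit I/O? */
#include <stdio.h>

int main(void)
{
    const int vring_entries = 128;         /* virtio-blk queue size in question */
    const int descs_direct = 3;            /* header + one data segment + status */
    const int descs_indirect = 1;          /* one ring entry per request */

    printf("direct descriptors:   ~%d stuck requests wedge the queue\n",
           vring_entries / descs_direct);
    printf("indirect descriptors: ~%d stuck requests wedge the queue\n",
           vring_entries / descs_indirect);
    return 0;
}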

Josh


