Re: [Qemu-devel] Question on aio_poll


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Question on aio_poll
Date: Tue, 23 Jul 2013 14:18:25 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Sat, Jul 20, 2013 at 02:14:43PM +0100, Alex Bligh wrote:
> As part of adding timer operations to aio_poll and a clock to AioContext, I
> am trying to figure out a couple of points on how aio_poll works. This
> primarily revolves around the flag busy.
> 
> Firstly, at the end of aio_poll, it has
>     assert(progress || busy);
> 
> This assert in fact checks nothing, because before the poll (20 lines or
> so higher), it has
>     if (!busy) {
>         return progress;
>     }
> 
> and as 'busy' is not altered, busy must be true at the assert.
> 
> Is this in fact meant to be checking that we have made progress if aio_poll
> is called blocking? I.e. assert(progress || !blocking)?
> 
> Secondly, the tests I am writing for the timer fail because g_poll is
> not called, because busy is false. This is because I haven't got an
> fd set up. But in the instance where aio_poll is called with blocking=true
> and there are no fds to wait on, surely aio_poll should either hang
> indefinitely or (perhaps better) assert, rather than return immediately
> which is a recipe for an unexpected busywait? If I have timers, it should
> be running those timers rather than returning.

FWIW this goes away in my series that gets rid of .io_flush():

http://lists.nongnu.org/archive/html/qemu-devel/2013-07/msg00092.html

Unfortunately there is an issue with the series which I haven't had time
to look into yet.  I don't remember the details but I think make check
is failing.

The current qemu.git/master code is doing the "correct" thing though.
Callers of aio_poll() are using it to complete any pending I/O requests
and process BHs.  If there is no work left, we do not want to block
indefinitely.  Instead we want to return.

> Thirdly, I don't quite understand how/why busy is being set. It seems
> to be set if the flush callback returns non-zero. That would imply (I think)
> the fd handler has something to write. But what if it is just interested
> in any data to read that is available (and never writes)? If this is the
> only fd aio_poll has, it would appear it never polls.

The point of .io_flush() is to select file descriptors that are awaiting
I/O (either direction).  For example, consider an iSCSI TCP socket with
no I/O requests pending.  In that case .io_flush() returns 0 and we will
not block in aio_poll().  But if there is an iSCSI request pending, then
.io_flush() will return 1 and we'll wait for the iSCSI response to be
received.

The effect of .io_flush() is that aio_poll() will return false if there
is no I/O pending.

It turned out that this behavior could be implemented at the block layer
instead of using the .io_flush() interface at the AioContext layer.  The
patch series I linked to above modifies the code so AioContext can
eliminate the .io_flush() concept.

> When adapting this for timers, I think it should be returning true only
> if an AIO dispatch did something, a BH was executed, or a timer ran.
> Specifically if the poll simply times out, it should not be returning
> true unless a timer ran. Correct?

Yes, the return value is about making progress.  If there is still work
pending then it must return true.  If there is no work pending it must
return false.

Stefan


