From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 05/26] aio: return "AIO in progress" state from qemu_aio_wait
Date: Thu, 19 Apr 2012 16:50:44 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 12.04.2012 14:00, Paolo Bonzini wrote:
> The definition of when qemu_aio_flush should loop is much simpler
> than it looks.  It just has to call qemu_aio_wait until it makes
> no progress and all flush callbacks return false.  qemu_aio_wait
> is the logical place to tell the caller about this.
> 
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
>  aio.c      |   44 ++++++++++++++++++--------------------------
>  qemu-aio.h |    6 ++++--
>  2 files changed, 22 insertions(+), 28 deletions(-)
> 
> diff --git a/aio.c b/aio.c
> index f19b3c6..5fcc0c6 100644
> --- a/aio.c
> +++ b/aio.c
> @@ -99,41 +99,26 @@ int qemu_aio_set_fd_handler(int fd,
>  
>  void qemu_aio_flush(void)
>  {
> -    AioHandler *node;
> -    int ret;
> -
> -    do {
> -        ret = 0;
> -
> -     /*
> -      * If there are pending emulated aio start them now so flush
> -      * will be able to return 1.
> -      */
> -        qemu_aio_wait();
> -
> -        QLIST_FOREACH(node, &aio_handlers, node) {
> -            if (node->io_flush) {
> -                ret |= node->io_flush(node->opaque);
> -            }
> -        }
> -    } while (qemu_bh_poll() || ret > 0);
> +    while (qemu_aio_wait());
>  }
>  
> -void qemu_aio_wait(void)
> +bool qemu_aio_wait(void)
>  {
>      int ret;
>  
>      /*
>       * If there are callbacks left that have been queued, we need to call then.
> -     * Return afterwards to avoid waiting needlessly in select().
> +     * Do not call select in this case, because it is possible that the caller
> +     * does not need a complete flush (as is the case for qemu_aio_wait loops).
>       */
>      if (qemu_bh_poll()) {
> -        return;
> +        return true;
>      }
>  
>      do {
>          AioHandler *node;
>          fd_set rdfds, wrfds;
> +        bool busy;
>          int max_fd = -1;
>  
>          walking_handlers = 1;
> @@ -142,14 +127,18 @@ void qemu_aio_wait(void)
>          FD_ZERO(&wrfds);
>  
>          /* fill fd sets */
> +        busy = false;
>          QLIST_FOREACH(node, &aio_handlers, node) {
>              /* If there aren't pending AIO operations, don't invoke callbacks.
>               * Otherwise, if there are no AIO requests, qemu_aio_wait() would
>               * wait indefinitely.
>               */
> -            if (node->io_flush && node->io_flush(node->opaque) == 0)
> -                continue;
> -
> +            if (node->io_flush) {
> +                if (node->io_flush(node->opaque) == 0) {
> +                    continue;
> +                }
> +                busy = true;
> +            }
>              if (!node->deleted && node->io_read) {
>                  FD_SET(node->fd, &rdfds);
>                  max_fd = MAX(max_fd, node->fd + 1);
> @@ -163,8 +152,9 @@ void qemu_aio_wait(void)
>          walking_handlers = 0;
>  
>          /* No AIO operations?  Get us out of here */
> -        if (max_fd == -1)
> -            break;
> +        if (!busy) {
> +            return false;
> +        }

This can change the behaviour for aio_handlers that don't have a flush
callback. Previously we would run into the select, now we don't.

Hm, okay, such handlers don't exist. Maybe we should assert() it
somewhere, but I guess then it's not a problem with this patch.
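
A minimal standalone sketch of the invariant such an assert() would make explicit: any handler that registers io_read or io_write also provides io_flush, so the new "return false when nothing is busy" exit can never skip a handler that would previously have reached the select(). The struct layout, helper name, and call site below are illustrative assumptions only, not QEMU's actual aio.c code.

/*
 * Illustrative sketch only, not part of the patch: make the "every
 * handler has an io_flush callback" assumption explicit.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

typedef void IOHandlerSketch(void *opaque);
typedef int AioFlushHandlerSketch(void *opaque);

typedef struct {
    IOHandlerSketch *io_read;
    IOHandlerSketch *io_write;
    AioFlushHandlerSketch *io_flush;
    void *opaque;
} AioHandlerSketch;

/* Hypothetical registration-time check; in real code it could live in
 * qemu_aio_set_fd_handler() or wherever handlers are added. */
static void aio_check_handler(const AioHandlerSketch *node)
{
    if (node->io_read || node->io_write) {
        assert(node->io_flush != NULL);
    }
}

static void dummy_read(void *opaque) { (void)opaque; }
static int dummy_flush(void *opaque) { (void)opaque; return 0; }

int main(void)
{
    AioHandlerSketch ok = { dummy_read, NULL, dummy_flush, NULL };
    aio_check_handler(&ok);  /* holds: read handler also provides io_flush */
    printf("io_flush invariant holds for the example handler\n");
    return 0;
}
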

Kevin


