From: Stefan Hajnoczi
Subject: Re: [Qemu-block] [PATCH v4 8/8] linux-aio: share one LinuxAioState within an AioContext
Date: Wed, 11 May 2016 14:18:06 +0100
User-agent: Mutt/1.6.0 (2016-04-01)

On Tue, May 10, 2016 at 11:40:33AM +0200, Kevin Wolf wrote:
> On 10.05.2016 at 11:30, Stefan Hajnoczi wrote:
> > On Mon, May 09, 2016 at 06:31:44PM +0200, Paolo Bonzini wrote:
> > > On 19/04/2016 11:09, Stefan Hajnoczi wrote:
> > > >> > This has better performance because it executes fewer system calls
> > > >> > and does not use a bottom half per disk.
> > > > Each aio_context_t is initialized for 128 in-flight requests in
> > > > laio_init().
> > > > 
> > > > Will it be possible to hit the limit now that all drives share the same
> > > > aio_context_t?
> > > 
> > > It was also possible before, because the virtqueue can be bigger than
> > > 128 items; that's why there is logic to submit I/O requests after an
> > > io_getevents() call.  As usual when the answer seems trivial, am I
> > > misunderstanding your question?
> > 
> > I'm concerned about a performance regression rather than correctness.
> > 
> > But looking at linux-aio.c there *is* a correctness problem:
> > 
> >   static void ioq_submit(struct qemu_laio_state *s)
> >   {
> >       int ret, len;
> >       struct qemu_laiocb *aiocb;
> >       struct iocb *iocbs[MAX_QUEUED_IO];
> >       QSIMPLEQ_HEAD(, qemu_laiocb) completed;
> > 
> >       do {
> >           len = 0;
> >           QSIMPLEQ_FOREACH(aiocb, &s->io_q.pending, next) {
> >               iocbs[len++] = &aiocb->iocb;
> >               if (len == MAX_QUEUED_IO) {
> >                   break;
> >               }
> >           }
> > 
> >           ret = io_submit(s->ctx, len, iocbs);
> >           if (ret == -EAGAIN) {
> >               break;
> >           }
> >           if (ret < 0) {
> >               abort();
> >           }
> > 
> >           s->io_q.n -= ret;
> >           aiocb = container_of(iocbs[ret - 1], struct qemu_laiocb, iocb);
> >           QSIMPLEQ_SPLIT_AFTER(&s->io_q.pending, aiocb, next, &completed);
> >       } while (ret == len && !QSIMPLEQ_EMPTY(&s->io_q.pending));
> >       s->io_q.blocked = (s->io_q.n > 0);
> >   }
> > 
> > io_submit() may have submitted some of the requests when -EAGAIN is
> > returned.  QEMU gets no indication of which requests were submitted.
> 
> My understanding (which is based on the manpage rather than code) is
> that -EAGAIN is only returned if no request could be submitted. In other
> cases, the number of submitted requests is returned (similar to how
> short reads work).

I misread the code:

  /*
   * AKPM: should this return a partial result if some of the IOs were
   * successfully submitted?
   */
  for (i=0; i<nr; i++) {
          struct iocb __user *user_iocb;
          struct iocb tmp;

          if (unlikely(__get_user(user_iocb, iocbpp + i))) {
                  ret = -EFAULT;
                  break;
          }

          if (unlikely(copy_from_user(&tmp, user_iocb, sizeof(tmp)))) {
                  ret = -EFAULT;
                  break;
          }

          ret = io_submit_one(ctx, user_iocb, &tmp, compat);
          if (ret)
                  break;
  }
  blk_finish_plug(&plug);

  percpu_ref_put(&ctx->users);
  return i ? i : ret;

You are right that it returns the number of submitted requests (rather
than an errno) when a failure occurs partway through.
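
To make that concrete, here is a minimal sketch (not QEMU code) of a
submission loop written against these semantics: a negative errno means
nothing from this batch was accepted, and a short positive count means
the tail simply has to be resubmitted.  It links against libaio; the
function name and error handling are illustrative only:

  #include <errno.h>
  #include <libaio.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static void submit_all(io_context_t ctx, struct iocb **iocbs, int nr)
  {
      int done = 0;

      while (done < nr) {
          int ret = io_submit(ctx, nr - done, iocbs + done);

          if (ret == -EAGAIN) {
              /* Nothing from this batch was accepted: reap at least
               * one completion to free a slot, then retry. */
              struct io_event ev;
              io_getevents(ctx, 1, 1, &ev, NULL);
              continue;
          }
          if (ret < 0) {
              /* Hard failure on the first iocb of the batch. */
              fprintf(stderr, "io_submit: %s\n", strerror(-ret));
              abort();
          }
          /* ret iocbs were accepted; a short count carries no errno
           * for the remainder, so just resubmit the tail. */
          done += ret;
      }
  }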

So the "bug" I found does not exist.

Stefan
