From: Jamie Lokier
Subject: Re: [kvm-devel] [Qemu-devel] Re: [PATCH 1/3] Refactor AIO interface to allow other AIO implementations
Date: Tue, 22 Apr 2008 16:23:22 +0100
User-agent: Mutt/1.5.13 (2006-08-11)

Avi Kivity wrote:
> Anthony Liguori wrote:
> >>If I submit sequential O_DIRECT reads with aio_read(), will they enter
> >>the device read queue in the same order, and reach the disk in that
> >>order (allowing for reordering when worthwhile by the elevator)?
> >>  
> >There's no guarantee that any sort of order will be preserved by AIO 
> >requests.  The same is true with writes.  This is what fdsync is for, 
> >to guarantee ordering.
> 
> I believe he'd like a hint to get good scheduling, not a guarantee.
> With a thread pool, if the threads are scheduled out of order, so are
> your requests.

> If the elevator doesn't plug the queue, the first few requests may
> not be optimally sorted.

That's right.  Then they tend to settle into a good order.  But any
delay in scheduling one of the threads, or a signal received by one of
them, can make it lose order briefly, making the streaming stutter as
the disk performs a few local seeks until it settles into a good order
again.

You can mitigate the disruption in various ways.

  1. If all threads share an "offset" variable, and each reads and
     increments it atomically just prior to calling pread(), that
     helps, especially at the start.  (If threaded I/O is used for
     QEMU disk emulation, I would suggest doing that, in the more
     general form of popping a request from QEMU's internal shared
     queue at the last moment.)  A sketch follows the list.

  2. Using more threads helps keep the order sustained, at the cost
     of more wasted I/O when a request is cancelled (a changed mind),
     and more memory.
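
Here is a minimal sketch of point 1 in C, assuming a plain thread
pool; the names (reader_thread, shared_fd, next_offset, BLOCK_SIZE)
are made up for illustration, and the atomic increment uses GCC's
__sync_fetch_and_add:

    #include <pthread.h>
    #include <unistd.h>
    #include <sys/types.h>

    #define BLOCK_SIZE (64 * 1024)      /* hypothetical request size */

    static int shared_fd;               /* file being streamed */
    static off_t next_offset;           /* the shared "offset" variable */

    static void *reader_thread(void *arg)
    {
        /* For O_DIRECT this buffer would need posix_memalign() */
        char buf[BLOCK_SIZE];

        for (;;) {
            /* Claim the next offset atomically at the last moment,
             * just before pread(), so requests enter the kernel in
             * nearly sequential order even when thread scheduling
             * is uneven. */
            off_t off = __sync_fetch_and_add(&next_offset, BLOCK_SIZE);

            ssize_t n = pread(shared_fd, buf, BLOCK_SIZE, off);
            if (n <= 0)
                break;  /* EOF or error: this worker stops */
            /* ... hand the buffer to a consumer ... */
        }
        return NULL;
    }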

However, AIO, in principle (if not in implementations...), could be
better at keeping the suggested I/O order than threads, without
special tricks.
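
For comparison, a sketch of the same streaming reads using
Linux-native AIO via libaio; submit_sequential_reads, NR_REQS and
BLOCK_SIZE are made-up names, error paths leak for brevity, and
whether the batch actually reaches the elevator in submission order
is up to the implementation, as noted above:

    #include <libaio.h>
    #include <stdlib.h>

    #define NR_REQS    4
    #define BLOCK_SIZE (64 * 1024)

    /* Submit NR_REQS sequential O_DIRECT reads in a single batch;
     * they are handed to the kernel in submission order, with no
     * dependence on thread scheduling. */
    static int submit_sequential_reads(int fd, off_t start)
    {
        io_context_t ctx = 0;
        struct iocb cbs[NR_REQS], *cbp[NR_REQS];
        struct io_event events[NR_REQS];
        void *bufs[NR_REQS];
        int i;

        if (io_setup(NR_REQS, &ctx) < 0)
            return -1;

        for (i = 0; i < NR_REQS; i++) {
            /* O_DIRECT requires sector-aligned buffers */
            if (posix_memalign(&bufs[i], 512, BLOCK_SIZE))
                return -1;
            io_prep_pread(&cbs[i], fd, bufs[i], BLOCK_SIZE,
                          start + (off_t)i * BLOCK_SIZE);
            cbp[i] = &cbs[i];
        }

        /* One io_submit() call queues the whole ordered batch */
        if (io_submit(ctx, NR_REQS, cbp) != NR_REQS)
            return -1;

        /* Completions may still arrive in any order */
        if (io_getevents(ctx, NR_REQS, NR_REQS, events, NULL) != NR_REQS)
            return -1;

        for (i = 0; i < NR_REQS; i++)
            free(bufs[i]);
        io_destroy(ctx);
        return 0;
    }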

-- Jamie



