
Re: [Qemu-devel] [PATCH] posix-aio-compat: Fix idle_threads counter


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] posix-aio-compat: Fix idle_threads counter
Date: Tue, 03 May 2011 18:53:45 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101027 Fedora/3.0.10-1.fc12 Thunderbird/3.0.10

On 03.05.2011 17:56, Stefan Hajnoczi wrote:
> On Tue, May 3, 2011 at 2:26 PM, Kevin Wolf <address@hidden> wrote:
>> A thread should only be counted as idle when it really is waiting for new
>> requests. Without this patch, sometimes too few threads are started as busy
>> threads are counted as idle.
>>
>> Not sure if it makes a difference in practice outside some artificial
>> qemu-io/qemu-img tests, but I think the change makes sense in any case.
>>
>> Signed-off-by: Kevin Wolf <address@hidden>
>> ---
>>  posix-aio-compat.c |    6 ++----
>>  1 files changed, 2 insertions(+), 4 deletions(-)
> 
> I think the critical change here is that idle_threads is not being
> incremented by spawn_thread().  This means that we will keep spawning
> threads as new requests come in, until the first thread goes idle.
> 
> Previously you could imagine a scenario where we spawn a thread but
> don't schedule it yet.  Then we immediately submit more I/O and since
> idle_threads was incremented we don't spawn additional threads to
> handle the requests.
> 
> Are these the cases you were thinking about?

Yes, this is the case that I noticed.

However, I'm not sure if this is really the critical change. In this
case, it would take a bit longer until you get your full 64 threads, but
you should get there eventually and then it shouldn't impact performance
any more.

However, what I saw in my test case (qemu-img always keeping 16
sequential read requests in flight) was that I only got 16 threads.
This sounds logical, but in fact you always seem to need one more thread
than that for good performance (I don't fully understand this yet). And
with this patch, you actually get 17 threads. The difference was like 8s
vs. 22s for the same requests, and using more than 17 threads doesn't
improve it any further.

This is what makes me a bit nervous about it: I don't really understand
what's going on here. The observation suggests that the race doesn't
only affect thread creation, but also operations afterwards.

> I like your patch because it reduces the number of places where we
> mess with idle_threads.  I'd move the increment/decrement out of the
> loop since the mutex is held here anyway so no one will test
> idle_threads between loop iterations, but it's up to you if you want
> to send a v2 or not.

Probably doesn't really make a difference.

Kevin


