From: Alex Bligh
Subject: Re: [Qemu-devel] [PATCH] main-loop: Don't lock starve io-threads when main_loop_tlg has pending events
Date: Tue, 8 Oct 2013 21:32:18 +0100

Hans,

On 8 Oct 2013, at 21:16, Hans de Goede wrote:

> No, it is calling main_loop_wait with nonblocking set to 0, so
> normally the lock would get released. But
> timerlistgroup_deadline_ns(&main_loop_tlg) is returning 0,
> causing timeout_ns to be 0, and this causes the lock to not get
> released.

OK, so my question is *why* is timerlistgroup_deadline_ns
returning 0 every time? That should only ever happen if there is an
expired timer every time timerlistgroup_deadline_ns is called.
Given that the spin warning only prints (or is only meant to print)
if this happens 1,000 times consecutively, this implies to me
that something is setting a timer wrongly, as timers should
not constantly expire.
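
To make that mechanism concrete, here is a simplified model of what I
understand the loop to be doing (illustrative only, not the real QEMU
source; the names and the 1,000 threshold are just taken from the
warning we are discussing):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_MAIN_LOOP_SPIN 1000   /* consecutive zero timeouts before warning */

static int spin_counter;

/* One iteration of the (simplified) main loop.  deadline_ns stands for
 * what timerlistgroup_deadline_ns() would return: 0 means some timer
 * has already expired. */
static void main_loop_iteration(int64_t deadline_ns, bool nonblocking)
{
    int64_t timeout_ns = nonblocking ? 0 : deadline_ns;

    if (timeout_ns == 0) {
        /* Busy poll: the iothread lock is never dropped, so other
         * threads waiting for it make no progress. */
        if (++spin_counter == MAX_MAIN_LOOP_SPIN) {
            fprintf(stderr, "main-loop: WARNING: I/O thread spun for "
                            "%d iterations\n", MAX_MAIN_LOOP_SPIN);
        }
    } else {
        /* Blocking poll: drop the lock, wait, reacquire (elided). */
        spin_counter = 0;
    }
}

int main(void)
{
    /* A deadline of 0 on every pass is enough to trigger the warning. */
    for (int i = 0; i < 2000; i++) {
        main_loop_iteration(0, false);
    }
    return 0;
}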

I would agree that this changeset may have introduced the symptom
(because previously we didn't generate a zero timeout this way; we
broke out of the select loop instead), but I'd love to know WHY there
is an expired timer each time and what that timer is. Perhaps
something is using a timeout value in milliseconds, not realising it
is nanoseconds (or similar).
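
Purely as an illustration of that kind of mistake (made-up numbers,
not code from the tree): an expiry computed in milliseconds but
interpreted as nanoseconds is effectively already expired on every
main-loop pass:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t now_ns   = 1000000000000LL;    /* some current time, in ns */
    int64_t delay_ms = 100;                /* what the caller intended */

    int64_t wrong_expire = now_ns + delay_ms;               /* only 100 ns away! */
    int64_t right_expire = now_ns + delay_ms * 1000000LL;   /* ms -> ns */

    /* The deadline handed back to the main loop is expire - now (clamped
     * at zero); 100 ns has certainly elapsed by the next iteration, so
     * the "wrong" timer keeps producing a zero deadline. */
    printf("wrong deadline: %lld ns\n", (long long)(wrong_expire - now_ns));
    printf("right deadline: %lld ns\n", (long long)(right_expire - now_ns));
    return 0;
}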

My worry is that setting a 1 ns timeout may just be hiding a bug here.

We could relatively easily store __FILE__ and __LINE__ when
timers are created to make it easier to track this sort of thing
(perhaps protected with an ifdef).
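
Roughly something like this (just a sketch with made-up names, not the
real timer API; a proper patch would go through the timer code itself):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical debug wrapper: remember where each timer was created so
 * that a constantly-expiring timer can be traced back to its owner. */
typedef struct DebugTimer {
    int64_t expire_ns;
    const char *file;   /* __FILE__ of the creation site */
    int line;           /* __LINE__ of the creation site */
} DebugTimer;

static DebugTimer *debug_timer_new_at(int64_t expire_ns,
                                      const char *file, int line)
{
    DebugTimer *t = malloc(sizeof(*t));
    t->expire_ns = expire_ns;
    t->file = file;
    t->line = line;
    return t;
}

/* Callers never spell out __FILE__/__LINE__ themselves; in a real tree
 * this whole thing would sit behind an #ifdef. */
#define debug_timer_new(expire_ns) \
    debug_timer_new_at((expire_ns), __FILE__, __LINE__)

int main(void)
{
    DebugTimer *t = debug_timer_new(0);   /* deliberately already expired */
    fprintf(stderr, "offending timer was created at %s:%d\n",
            t->file, t->line);
    free(t);
    return 0;
}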

> I'm quite sure this is what is happening because once my
> bisect pointed to the "aio / timers: Convert mainloop to use timeout"
> commit as a culprit, I read that commit very carefully multiple
> times and that seemed like the only problem it could cause,
> so I added a debug printf to test for that case, and it triggered.
> 
> What I believe is happening in my troublesome scenario is that one
> thread is calling main_loop_wait(0) repeatedly, waiting for another
> thread to do some work (*), but that other thread is not getting a
> chance to do that work because the iothread never gets unlocked.

That's fine, but that doesn't explain why timerlistgroup_deadline_ns
thinks that a timer has always expired.

> *) likely the spice-server thread which does a lot of work for
> the qxl device
> 
> 
>> 
>> The comment at line 208 suggests that "the I/O thread is very busy
>> or we are incorrectly busy waiting in the I/O thread". Do we know
>> which is happening? Perhaps rather than give up the io_thread
>> mutex on every call (which is in practice what a 1 nanosecond
>> timeout does) we should give it up if we have not released
>> it for X nanoseconds (maybe X=250us), or once every Y calls. I think
>> someone other than me should consider the effect of dropping and
>> reacquiring a mutex so frequently under heavy I/O load, but I'm not
>> sure it's a great idea.
> 
> We're only waiting so short if there are timers which want to run
> immediately, normally we would wait a lot longer.
> 
>> So on reflection you might be more right with 1 nanosecond than
>> 250us as a timeout, but I wonder whether a strategy
>> of just dropping the lock occasionally (and still using a zero
>> timeout) might be better.
> 
> Paolo probably has some better insights on this, but he seems to
> have called it a day for today, and I'm going to do the same :)
> 
> So let's continue this tomorrow.

... the trouble with reading email from top to bottom :-)

> 
> Regards,
> 
> Hans
> 
> 
> 

-- 
Alex Bligh






