Re: [Qemu-devel] [PATCH] timer: fix qemu_poll_ns early timeout on windows


From: Stanislav Vorobiov
Subject: Re: [Qemu-devel] [PATCH] timer: fix qemu_poll_ns early timeout on windows
Date: Fri, 18 Apr 2014 13:26:36 +0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20100101 Thunderbird/24.4.0

Hi,

Please see below

On 04/18/2014 12:29 PM, Alex Bligh wrote:
> 
> On 18 Apr 2014, at 03:11, Sangho Park wrote:
> 
>>>
>>> g_poll has a problem on windows when using timeouts < 10ms, in 
>>> glib/gpoll.c:
>>>
>>> /* If not, and we have a significant timeout, poll again with
>>> * timeout then. Note that this will return indication for only
>>> * one event, or only for messages. We ignore timeouts less than
>>> * ten milliseconds as they are mostly pointless on Windows, the
>>> * MsgWaitForMultipleObjectsEx() call will timeout right away
>>> * anyway.
>>> */
>>> if (retval == 0 && (timeout == INFINITE || timeout >= 10))
>>>  retval = poll_rest (poll_msgs, handles, nhandles, fds, nfds, 
>>> timeout);
>>>
>>> so whenever g_poll is called with a timeout < 10ms it does a quick poll 
>>> instead of waiting. This causes significant performance degradation of 
>>> qemu, so we should use WaitForMultipleObjectsEx directly
> 
> As the author of the original changes, I don't know enough about Windows
> to know whether this change is sensible, but will happily admit my
> Windows coding was done completely blind (as I said at the time)
> and required more third party testing, so it would be unsurprising
> if there were issues.
> 
> On the general issue, the problem you will face if g_poll waits LESS
> than the amount of time is that you'll get a busy wait for a while.
> For instance, if a timeout is scheduled 9ms ahead, and g_poll (or
> replacement) waits 0ms, then the system will continuously call g_poll,
> check whether a timer has expired, call g_poll, check whether a timer
> has expired and so on. During this period, no actual work will get
> done.
Yes, this is exactly what we've experienced: the CPU was hogged and the
overall performance of the guest dropped.
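
A rough, stand-alone illustration of that spin (my own sketch, not code from
QEMU or the patch; it only assumes glib and the sub-10ms cutoff quoted above):

    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pretend a QEMU timer is due 9 ms from now (times in microseconds). */
        gint64 deadline = g_get_monotonic_time() + 9 * 1000;
        long iterations = 0;

        while (g_get_monotonic_time() < deadline) {
            /* Ask for a 9 ms wait; on Windows glib ignores timeouts < 10 ms,
             * so this returns almost immediately and we spin. */
            g_poll(NULL, 0, 9);
            iterations++;
        }

        /* Where g_poll honours the timeout this prints ~1; where it ignores
         * sub-10ms timeouts it prints thousands, which is the hogging we saw. */
        printf("g_poll calls during a 9 ms wait: %ld\n", iterations);
        return 0;
    }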

> 
> Therefore, if the underlying poll has a lower resolution than 1 ns (and
> does not, at a minimum, wait for the value specified), then it should
> round up.
> 
> On Windows (or, more accurately, in the absence of ppoll), qemu uses g_poll
> and converts the timeout using qemu_timeout_ns_to_ms. That function returns
> 0 for a non-blocking timeout of 0, but otherwise rounds UP to the nearest
> millisecond. Code pasted below:
> 
> int qemu_timeout_ns_to_ms(int64_t ns)
> {
>     int64_t ms;
>     if (ns < 0) {
>         return -1;
>     }
> 
>     if (!ns) {
>         return 0;
>     }
> 
>     /* Always round up, because it's better to wait too long than to wait too
>      * little and effectively busy-wait
>      */
>     ms = (ns + SCALE_MS - 1) / SCALE_MS;
> 
>     /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
>     if (ms > (int64_t) INT32_MAX) {
>         ms = INT32_MAX;
>     }
> 
>     return (int) ms;
> }
> 
> 
> I suspect the easiest fix (certainly for debug purposes) would be to make all
> polls for non-zero time at least 10ms long, i.e. after the assignment of 'ms'
> do something like
> 
> #ifdef BROKEN_POLL
>     if (ms < 10) {
>         ms = 10;
>     }
> #endif
> 
> If you're actually saying g_poll has 10 millisecond resolution, or rounds
> down, then the appropriate course of action would be to change the rounding
> line to
> 
>     ms = ((ns + 10*SCALE_MS - 1) / (10*SCALE_MS)) * 10;
> 
> where g_poll is broken like this.
> 
> As I say, I have no Windows box to test this on, but I hope that's helpful.
Yes, it's possible to work around it like this, but if we look at this:

     if (ms < 10) {
         ms = 10;
     }

the question arises: where did 10 come from? It looks like a magic number, and
in fact it is: it was taken from glib's gpoll.c. But what if tomorrow glib
changes and uses, say, 15 in gpoll.c? Then we'll observe the same CPU hogging
in qemu again.
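
That's why we'd rather wait on the native handles directly. Roughly, the core
of such a wait could look like the sketch below (an illustration with made-up
names and simplified error handling, not the actual patch code):

    #include <windows.h>

    /* Illustration only: wait up to timeout_ms on a set of native handles.
     * Unlike g_poll on Windows, WaitForMultipleObjectsEx honours timeouts
     * below 10 ms, so short QEMU timer deadlines still block properly.
     * 'handles' is assumed to have been gathered from the GPollFD array
     * (at most MAXIMUM_WAIT_OBJECTS of them). */
    static int poll_handles_ms(HANDLE *handles, DWORD nhandles, DWORD timeout_ms)
    {
        DWORD ret = WaitForMultipleObjectsEx(nhandles, handles,
                                             FALSE,      /* wake on any one handle */
                                             timeout_ms, /* honoured even below 10 ms */
                                             TRUE);      /* alertable, lets APCs run */

        if (ret == WAIT_TIMEOUT || ret == WAIT_IO_COMPLETION) {
            return 0;                     /* nothing ready (or only an APC ran) */
        }
        if (ret < WAIT_OBJECT_0 + nhandles) {
            return 1;                     /* at least one handle was signalled */
        }
        return -1;                        /* WAIT_FAILED or an abandoned wait */
    }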

> 



