From: Greg Chicares
Subject: Re: [lmi] Sporadic error in 'timer_test'
Date: Wed, 18 May 2016 01:21:00 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.6.0

On 2015-11-11 17:29, Vadim Zeitlin wrote:
> On Wed, 11 Nov 2015 16:08:51 +0000 Greg Chicares <address@hidden> wrote:
[...with diagnostic code added to TimerTest::TestResolution()...]
> 
> GC> Now I see, e.g.:
> GC> 
> GC> 1000 CLOCKS_PER_SEC
> GC> 0.015 clock_resolution
> GC> 1.00005 observed
> GC> 1 interval
> GC> 4.94609e-05 relative_error
[...]
> GC> As a last resort, I looked at the code, but it uses the C RTL, which
> GC> I don't remember well. Now I'm staring at this and wondering how it
> GC> can work:
> 
>  Yes, this is my main question as well, although for different reasons.
> The fundamental problem I see is that clock() doesn't measure wall time at
> all but rather CPU time which can be completely different, so I don't think
> it makes sense to compare them at all.

I'm looking at this again. I still have
  1000 CLOCKS_PER_SEC
on msw, and I've confirmed that 1000 ticks have elapsed, but the
timer says that only 0.0410957 seconds have elapsed instead of the
expected one second, so the relative error is 0.958904. All I can
guess is that this is actually measuring how long it takes all my
virtual CPUs to do a second of work. Increasing the tolerance isn't
the answer when values that are supposed to be equal differ by more
than an order of magnitude. This test is simply invalid and has no
value, so I'll just remove it.
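For what it's worth, here's a minimal standalone sketch (not lmi code,
and not the test itself) of the effect Vadim describes: clock()
measures CPU time, so across a one-second sleep it barely advances,
while a wall-clock timer reports a full second.

  // Compare clock() (CPU time) against a wall clock over a one-second sleep.
  #include <chrono>
  #include <ctime>
  #include <iostream>
  #include <thread>

  int main()
  {
      std::clock_t const c0 = std::clock();
      auto const t0 = std::chrono::steady_clock::now();

      // Sleeping consumes essentially no CPU time, so clock() barely
      // advances even though a full wall-clock second elapses.
      std::this_thread::sleep_for(std::chrono::seconds(1));

      std::clock_t const c1 = std::clock();
      auto const t1 = std::chrono::steady_clock::now();

      double const cpu_seconds  = double(c1 - c0) / CLOCKS_PER_SEC;
      double const wall_seconds =
          std::chrono::duration<double>(t1 - t0).count();

      std::cout << cpu_seconds  << " seconds of CPU time\n";
      std::cout << wall_seconds << " seconds of wall time\n";
  }

The converse (many busy threads racking up CPU time faster than wall
time) would explain a clock() reading far ahead of the wall clock, which
is consistent with the virtual-CPU guess above.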



