freetype-devel

Re: -warmup


From: Werner LEMBERG
Subject: Re: -warmup
Date: Mon, 07 Aug 2023 12:57:19 +0000 (UTC)

>> What exactly does 'Baseline (ms)' mean?  Is the shown number the
>> time for one loop?  For all loops together?  Please clarify and
>> mention this on the HTML page.
>
> Clarified that the times are in milliseconds and represent the
> cumulative time for all iterations.

Thanks.  However, the sentence is hard to understand.  Perhaps
change it to something like

```
Cumulative time for all iterations.  Smaller values are better.
```

BTW, in the 'N' column I see entries like '68160 | 65880'.  What
does this mean?  Please add an explanatory comment to the HTML page.

Another thing: Please mention on the HTML page the completion time for
each test, and the total execution time of all tests together.

>> Looking at the 'Load_Advances (Unscaled)' row, I think that a
>> 100% difference between 0.001 and 0.002 doesn't make any sense.
>> How do you compute the percentage?  Is this based on the
>> cumulative time of all loops?  If so, and you really get such
>> small numbers, there must be some fine-tuning for high-speed
>> tests (for example, increasing N for this particular test by a
>> factor of 10, say) to get meaningful timing values.
>
> It was the cumulative time in milliseconds; I converted it to
> microseconds (still cumulative, as before), and the results seem
> better now.

We are getting nearer, again :-)

What worries me, though, is that we still have such enormous
differences.  For `Get_Char_Index` I think it's a lack of precision.
Please try to fix this: if the ratio

   cumulative_time / N

is smaller than a given threshold, N must be increased a lot.  For
example, for `Roboto_subset.ttf`, N should be set to, say, 10*N.
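
Something along these lines could work (just a sketch on my side;
`run_test`, the threshold value, and the factor 10 are placeholders,
not actual ftbench code):

```
/* Placeholder for the benchmark body: execute the operation under
   test N times and return the cumulative time in microseconds.  */
extern double  run_test( int  N );

/* Assumed minimum cumulative time for a trustworthy measurement.  */
#define MIN_CUMULATIVE_US  1000.0

static int
calibrate_N( int  N )
{
  double  t = run_test( N );

  /* If the cumulative time is too small, the timer resolution
     dominates the measurement; repeat with 10*N until the total
     time is long enough.  */
  while ( t < MIN_CUMULATIVE_US )
  {
    N *= 10;
    t  = run_test( N );
  }

  return N;
}
```

(I check the cumulative time directly here because it is the easier
condition to implement; the intent is the same as checking
cumulative_time / N against a threshold.)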

For the other large differences I think we need some statistical
analysis to get better results; simply accumulating the timings is
not good enough.  In particular, outliers should be removed (at
least this is my hypothesis).  Maybe you can search the internet for
some simple code to handle them.

One idea to identify outliers could be to split the measurement
into, say, 100 smaller intervals.  You can then discard the
too-large values and compute the mean of the remaining data.  My
reasoning is that other CPU activity happens in parallel, but only
for short amounts of time.
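
In code, the filtering could look like this (only a sketch; the
interval timings array, the 1.5x-median cutoff, and all names are my
invention):

```
#include <stdlib.h>

static int
cmp_double( const void*  a,
            const void*  b )
{
  double  d = *(const double*)a - *(const double*)b;

  return ( d > 0 ) - ( d < 0 );
}

/* `times` holds the timings of the smaller intervals; discard
   everything above an (assumed) cutoff of 1.5 times the median
   and return the mean of the remaining values.  */
static double
trimmed_mean( double*  times,
              size_t   n )
{
  double  sum    = 0.0;
  double  cutoff;
  size_t  i, kept = 0;

  if ( n == 0 )
    return 0.0;

  qsort( times, n, sizeof ( double ), cmp_double );
  cutoff = 1.5 * times[n / 2];

  for ( i = 0; i < n; i++ )
    if ( times[i] <= cutoff )
    {
      sum  += times[i];
      kept += 1;
    }

  return sum / kept;
}
```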

Have you actually done a statistical analysis of, say, 'Load_Advances
(Normal)' for `Arial_subset.ttf`?  For example, printing the timings
of all data points as histograms for runs A and B?  *Are* there
outliers?  Maybe there is another statistical measure that gives
more meaningful results.
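
Even a crude ASCII histogram would help for eyeballing this (again
just a sketch; bin count and range handling are arbitrary):

```
#include <stdio.h>

/* Print an ASCII histogram of the `n` timings in `times`, covering
   the range [lo, hi] with `bins` buckets (at most 64).  */
static void
print_histogram( const double*  times,
                 size_t         n,
                 double         lo,
                 double         hi,
                 int            bins )
{
  int     counts[64] = { 0 };
  size_t  i;
  int     b, j;

  for ( i = 0; i < n; i++ )
  {
    b = (int)( ( times[i] - lo ) / ( hi - lo ) * bins );
    if ( b < 0 )
      b = 0;
    if ( b >= bins )
      b = bins - 1;
    counts[b]++;
  }

  for ( b = 0; b < bins; b++ )
  {
    printf( "%10.3f | ", lo + ( hi - lo ) * b / bins );
    for ( j = 0; j < counts[b]; j++ )
      putchar( '*' );
    putchar( '\n' );
  }
}
```

Comparing such histograms for runs A and B side by side should show
immediately whether outliers (or perhaps a bimodal distribution)
cause the large differences.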


    Werner
