John Cowan wrote:
> Brandon J. Van Every scripsit:
>> Frighteningly, having examined the testing logic, it is correct. It's
>> actually reporting on 2 different runs of the 128K sample size: once
>> when testing the default, and once when looping through the increasing
>> sample sizes.
>
> That's exactly what I assumed in the first place, and in fact until
> reading this very posting I didn't realize that you thought I meant
> there was a bug in your code.
I thought it possible, since I did do a quick last-minute edit, without
a lot of checking, to add the report of the default timing improvement.
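
To make the two reports concrete, the logic amounts to something like
this C sketch (not the actual script; run_benchmark and the size range
are made-up stand-ins):

    #include <stdio.h>

    #define DEFAULT_SIZE (128L * 1024)

    /* Hypothetical stand-in for whatever the harness actually times. */
    static double run_benchmark(long sample_size)
    {
        (void)sample_size;  /* the real code runs the workload here */
        return 0.0;         /* elapsed seconds */
    }

    int main(void)
    {
        /* Run 1: time the default sample size by itself. */
        printf("default %ldK: %f s\n", DEFAULT_SIZE / 1024,
               run_benchmark(DEFAULT_SIZE));

        /* Run 2: sweep increasing sample sizes; 128K comes up again
           here, so it is legitimately reported a second time. */
        for (long size = 32L * 1024; size <= 1024L * 1024; size *= 2)
            printf("%ldK: %f s\n", size / 1024, run_benchmark(size));

        return 0;
    }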
> No, it's the whole idea of sampling that is
> "stupid" and [ironically] "brilliant". Your code merely rubbed my nose
> in that.
>
> And for that, Felix *is* responsible.
"J'accuse!" It is of course open source, which implies open
responsibility. What level of responsibility are you personally
willing to take about this problem? You willing to figure out what's
good or bad about nsample? I'm willing to implement a better
stack-size.cmake script, and I'm doing that tonight. But I'm not
willing to dig into nsample or revamp it. I have OpenGL fish to fry.
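
To give an idea of what such a probe could do, here is a minimal C
sketch of the sort of program a stack-size script could compile and run
(CMake's try_run can drive it). It assumes a POSIX system and simply
asks the OS for the soft stack limit, which is not necessarily how the
real stack-size.cmake will work:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Ask the OS for the current (soft) stack limit. */
        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("stack limit: unlimited\n");
        else
            printf("stack limit: %lu bytes\n",
                   (unsigned long)rl.rlim_cur);
        return 0;
    }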
Also, I looked at nsample briefly, and it looked like an "everything and
the kitchen sink" composite benchmark. It's not a measure of a single
primitive operation or anything straightforward like that. I know how
to make benchmarks correct in general; I have professional experience
a la OpenGL Viewperf and GLPerf (dating myself with the latter). But I
don't have the impetus to make nsample correct specifically. One can
spend an awful lot of time benchmarking and perfecting benchmarks
rather than writing code that does something important.
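
By contrast, a single-primitive micro-benchmark is about this simple (a
sketch; the integer add is just an illustrative primitive, and
CLOCK_MONOTONIC assumes POSIX):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long iterations = 100000000L;
        volatile long acc = 0;  /* volatile keeps the loop from being
                                   optimized away */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iterations; i++)
            acc += i;           /* the single primitive under test */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f ns per add (acc=%ld)\n",
               secs / iterations * 1e9, acc);
        return 0;
    }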
Cheers,
Brandon Van Every