Re: [Quilt-dev] [patch 3/8] tac is not portable


From: Jean Delvare
Subject: Re: [Quilt-dev] [patch 3/8] tac is not portable
Date: Thu, 15 Sep 2005 20:53:46 +0200

Hi Gary,

Granted, we don't care about this specific benchmark anymore, but the
discussion still seems interesting to me. Benchmarking code changes is
something we should do on a regular basis, so sharing good techniques
sounds like a good idea to me.

> Okay, my bad.  Let's make it 500 lines then.  (That will actually
> make the difference between the two even smaller btw, as the startup
> time of perl will be less significant as the processing time
> increases).

True, which suggests that we should not have been testing on the
largest reasonable value but on a relatively small one, say 5 lines.
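
Something like this should do, I guess (untested; the file name
"small" and its 5-line content are only for illustration):

$ seq 1 5 > small
$ time for i in `seq 1 100` ; do tac < small > /dev/null ; done
$ time for i in `seq 1 100` ; do perl -e 'print reverse <>' < small > /dev/null ; done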

> > Please compare:
> > 
> >   time head -n 500 <largefile> | perl -e 'print reverse <>'
> > 
> > with:
> > 
> >   time head -n 500 <largefile> | gtac
> > 
> > This will be a more valid test.
> 
> Of course this just times 'head', and will be identical for both
> test cases ;-)

How can you be so sure that head will take much more time than tac/perl?
I see no evidence. My tests show that tac takes more time:

$ time for i in `seq 1 100` ; do head -n 500 /boot/System.map-2.6.14-rc1 > /dev/null ; done

real    0m0.268s
user    0m0.116s
sys     0m0.152s

$ time for i in `seq 1 100` ; do head -n 500 /boot/System.map-2.6.14-rc1 | tac > /dev/null ; done

real    0m0.629s
user    0m0.264s
sys     0m0.364s

That leaves roughly 0.361s (0.629s - 0.268s) for tac itself.

That being said, I fully agree that bringing head into the picture is
not needed and should be avoided. Anyway, it's also possible to time
the common part of the commands we are comparing, and subtract that
result from each candidate's time.
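
Roughly like this (untested, with <largefile> as a placeholder as
before):

$ time for i in `seq 1 100` ; do head -n 500 <largefile> > /dev/null ; done
$ time for i in `seq 1 100` ; do head -n 500 <largefile> | tac > /dev/null ; done
$ time for i in `seq 1 100` ; do head -n 500 <largefile> | perl -e 'print reverse <>' > /dev/null ; done

Subtracting the first time from each of the other two leaves the cost
of tac and of perl respectively.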

> That's why I introduced the 'sh -c': both the sh and cat startup
> times should be constant, so the timings will then be proportional
> to the variable part of the pipeline (perl vs tac).

That doesn't make sense to me. It's exactly the same as with "head"
above: you end up measuring the sum of a constant part and a variable
part, so there is no way the total times are proportional to the
variable part only.
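
To illustrate with made-up numbers: if the constant part costs 20 ms
and the two candidates cost 5 ms and 10 ms, the measured totals are
25 ms and 30 ms, a ratio of 1.2 where the real ratio is 2. Only the
difference between the totals (here 5 ms) is meaningful.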

> Actually, a better test would've been:
> 
> $ wc -l input
>        500 input
> $ time perl -e 'print reverse <>' < input
> ...
> real    0m0.015s
> user    0m0.004s
> sys     0m0.009s
> $ time gtac < input
> ...
> real    0m0.021s
> user    0m0.005s
> sys     0m0.014s

Agreed. I'd add that redirecting the output to /dev/null would be even
better, so that you don't accidentally end up timing your term scrolling
speed ;)

> Interestingly... perl is faster! :-D  Try it yourself. (Don't forget
> to run each one several times until you get three or four similar
> results).

My results are different:

$ time for i in `seq 1 100` ; do tac < 500 > /dev/null ; done

real    0m0.285s
user    0m0.148s
sys     0m0.136s

$ time for i in `seq 1 100` ; do perl -e 'print reverse <>' < 500 > /dev/null ; done

real    0m0.691s
user    0m0.396s
sys     0m0.292s

Which only proves one thing: benchmarking on a single system has very
little value.
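
On a single system, repeating each measurement a few times, as you
suggest, at least shows how stable the numbers are. Untested, but
assuming bash's time keyword and the same 500-line file, something
like:

$ for run in 1 2 3 ; do time for i in `seq 1 100` ; do tac < 500 > /dev/null ; done ; done
$ for run in 1 2 3 ; do time for i in `seq 1 100` ; do perl -e 'print reverse <>' < 500 > /dev/null ; done ; done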

Thanks,
-- 
Jean Delvare



