
RE: [ft-devel] source control, defect tracking and unit tests


From: Graham Asher
Subject: RE: [ft-devel] source control, defect tracking and unit tests
Date: Sat, 5 Feb 2005 11:02:39 -0000

David, and fellow FreeTypers,

I have little time so I'll be very brief. David's points about tests are
interesting and well made but I think incorrect. I believe a test suite is
essential, and that the distinction between unit tests and regression tests
is not useful.

My credentials in this matter include much writing and running and updating
of unit tests and other types of tests, to a great extent for other people's
code, for Symbian Ltd, Research In Motion, and other organisations with very
large code bases - much larger than that of FreeType. (This is not meant as
one-upmanship directed at David, who I know works at a high level for a
large organisation, or anybody else; just an attempt to try to convince
people that I, too, know what I am talking about, and have actually been
paid for my opinions.)

Regression tests are simply unit tests that were written in response to a
reported regression. Unit tests may be written either before a unit is
written or after; this is of historic interest only. Like David, I am a
pragmatist (although my pragmatism is informed by idealism) and I believe
that the thing is to build up a test suite, whether by writing tests before
modules, which is now largely impossible, or afterwards.

To me a unit test program is a program that can be run automatically, has
its own well-defined input, and produces objective results. It must
terminate with an error if any test fails.
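
To make this concrete, here is a minimal sketch of such a driver. The two
test functions are trivial placeholders standing in for real FreeType
tests:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Placeholder tests; each returns 0 on success, non-zero on failure. */
    static int
    test_placeholder_arithmetic( void )
    {
      return 2 + 2 == 4 ? 0 : 1;
    }

    static int
    test_placeholder_strlen( void )
    {
      return strlen( "FreeType" ) == 8 ? 0 : 1;
    }

    typedef int  (*TestFunc)( void );

    static const struct
    {
      const char*  name;
      TestFunc     func;

    } tests[] =
    {
      { "placeholder_arithmetic", test_placeholder_arithmetic },
      { "placeholder_strlen",     test_placeholder_strlen     },
    };

    int
    main( void )
    {
      size_t  i;
      int     failures = 0;

      for ( i = 0; i < sizeof ( tests ) / sizeof ( tests[0] ); i++ )
      {
        int  result = tests[i].func();

        printf( "%-25s %s\n", tests[i].name, result == 0 ? "PASS" : "FAIL" );
        if ( result != 0 )
          failures++;
      }

      /* exit with an error status if any test failed */
      return failures == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

An automated script or makefile rule then only has to check the exit
status.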

<<<<<<
Most of us balk at the idea of writing unit tests for code that is more
than a few days old
>>>>>>

Most people baulk at writing tests at all. But some less than others. A test
for old code can start off by accepting current behaviour and in effect
asserting that current behaviour is correct. Create sample input, run the
test, check manually that the output is correct, then use the output as the
'expected output data' for future runs of the test.
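
Here is a sketch of that pattern, in which a hypothetical produce_output()
stands in for the code under test and 'expected_output.bin' is a file
recorded from a manually verified first run:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for the code under test: fills `buf` with */
    /* its output and returns the number of bytes written.             */
    static size_t
    produce_output( unsigned char*  buf, size_t  max )
    {
      const char*  sample = "sample output, version 1";
      size_t       len    = strlen( sample );

      if ( len > max )
        len = max;
      memcpy( buf, sample, len );
      return len;
    }

    int
    main( void )
    {
      unsigned char  actual[4096];
      unsigned char  expected[4096];
      size_t         actual_len, expected_len;
      const char*    golden = "expected_output.bin";
      FILE*          f;

      actual_len = produce_output( actual, sizeof ( actual ) );

      f = fopen( golden, "rb" );
      if ( !f )
      {
        /* First run: record the current behaviour.  The recorded file */
        /* must be checked by hand before it is trusted as 'expected'. */
        f = fopen( golden, "wb" );
        if ( !f || fwrite( actual, 1, actual_len, f ) != actual_len )
        {
          fprintf( stderr, "cannot record %s\n", golden );
          return EXIT_FAILURE;
        }
        fclose( f );
        printf( "recorded %s; verify it manually\n", golden );
        return EXIT_SUCCESS;
      }

      expected_len = fread( expected, 1, sizeof ( expected ), f );
      fclose( f );

      if ( expected_len != actual_len                  ||
           memcmp( expected, actual, actual_len ) != 0 )
      {
        fprintf( stderr, "output differs from %s\n", golden );
        return EXIT_FAILURE;
      }

      return EXIT_SUCCESS;
    }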

<<<<<<
just like documentation, it's generally better to put the unit tests in
the same source file(s) as the unit being tested
>>>>>>

I have used that method, but I have always found it easier to make the tests
separate programs. Sometimes I need to add extra 'testing-only' public
functions to expose parts of the mechanism that would otherwise be
inaccessible.
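
For example (the names and the build-time macro below are invented purely
for illustration), a test build can expose an internal helper like this:

    /* rasterizer.c (all names here are illustrative only) */

    typedef struct  CellPool_
    {
      int  num_cells;
      int  max_cells;

    } CellPool;

    /* Internal helper, normally private to this file. */
    static int
    cell_pool_usage( const CellPool*  pool )
    {
      return pool->max_cells == 0
               ? 0
               : 100 * pool->num_cells / pool->max_cells;
    }

    #ifdef BUILD_TEST_HOOKS  /* hypothetical macro, defined only in test builds */

      /* 'Testing-only' public wrapper so a separate test program can */
      /* inspect internal state that is otherwise inaccessible.       */
      int
      test__cell_pool_usage( const CellPool*  pool )
      {
        return cell_pool_usage( pool );
      }

    #endif /* BUILD_TEST_HOOKS */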

<<<<<<
Otherwise, synchronisation problems always arise much too early
>>>>>>

I have never had these problems. Before checking in each batch of changes,
run all unit tests using the automated test script. Some tests will fail.
Find out why. Either the tests need to be updated (if the design has
changed), or, more usually, there are bugs in the new code. Fix them and
re-run the test script. Repeat as needed.

<<<<<<
the tests must not depend on other units! Otherwise, cascading bugs could
prevent your tests from really detecting failures. This is a _major_ point
for unit tests
>>>>>>

This point baffles me. I don't agree with it at all, but I know David is an
experienced expert, so I really don't see why we disagree. I start from the
premise that testing is essential, and that some tests are better than none.
If there is a base library, and then a higher-level library that depends on
it, it is perfectly legitimate, and a practice I have followed for many
years, to write unit tests for both. Even the lowest-level test depends on
the correctness of the compiler and the processor. Obviously you have to run
all the tests when you do your testing; start with the low-level ones.

<<<<<<
The problem is: how do we generate the input data for each test, and how
do we compare the results with what we expect? Remember, we'd better write
the unit tests in the same source file as the functions we want to test
>>>>>>

We should use public-domain fonts, of which there are many, for the
high-level rasterizer testing. Comparisons can be done easily by rasterizing
into a bitmap and comparing it with an expected bitmap stored in a file as
part of the test suite, or hard-coded into the source.
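
Here is a sketch of that kind of check with the public FreeType API; the
font path, glyph, pixel size and expected-bitmap file are placeholders,
and the expected file is assumed to have been recorded from an earlier,
manually verified run:

    #include <ft2build.h>
    #include FT_FREETYPE_H

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main( void )
    {
      FT_Library      library;
      FT_Face         face;
      FT_Bitmap*      bmp;
      FILE*           f;
      unsigned char*  expected;
      size_t          size, read_len;

      /* render one glyph from a public-domain font */
      if ( FT_Init_FreeType( &library )                           ||
           FT_New_Face( library, "fonts/FreeSans.ttf", 0, &face ) ||
           FT_Set_Pixel_Sizes( face, 0, 32 )                      ||
           FT_Load_Char( face, 'A', FT_LOAD_RENDER )              )
      {
        fprintf( stderr, "setup failed\n" );
        return EXIT_FAILURE;
      }

      bmp  = &face->glyph->bitmap;
      size = (size_t)bmp->rows *
             (size_t)( bmp->pitch < 0 ? -bmp->pitch : bmp->pitch );

      /* compare against the bitmap recorded from a verified run */
      f = fopen( "expected/A_32.gray", "rb" );
      if ( !f )
      {
        fprintf( stderr, "missing expected bitmap\n" );
        return EXIT_FAILURE;
      }

      expected = (unsigned char*)malloc( size );
      read_len = fread( expected, 1, size, f );
      fclose( f );

      if ( read_len != size                           ||
           memcmp( expected, bmp->buffer, size ) != 0 )
      {
        fprintf( stderr, "glyph bitmap differs from expected output\n" );
        return EXIT_FAILURE;
      }

      free( expected );
      FT_Done_Face( face );
      FT_Done_FreeType( library );
      return EXIT_SUCCESS;
    }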

<<<<<<
You could craft a new font file for each new test, but this is serious work,
and you must be extremely cautious that the binary file you generated isn't
buggy itself in ways that wouldn't be easily checked by your code and that
could cause false positives. Fun for the whole family :-)
>>>>>>

It's hard work but saves work in the long term and often the short term. New
font files *may* be necessary but we can easily start off with public-domain
ones.

<<<<<<
And don't use Knuth/TeX as an example, he designed the fonts from scratch
himself, for god's sake :-)
>>>>>>

It is a good example of good practice. TeX and METAFONT produce
precisely-defined outputs from precisely-defined inputs. FreeType can do the
same.

Lastly, a REAL EXAMPLE. In my cartography project, CartoType, I use an
adapted version of the FreeType mono and grey-scale rasterizers to draw
arbitrary graphics. I test these using a unit test that declares a bitmap as
part of the code, then draws shapes into a new bitmap using the rasterizer,
then checks the old against the new. This has been very valuable: it
detected a small change in the behaviour of FreeType when I retro-fitted a
FreeType bug fix into my code (I reported this change), which gives me
confidence that it would detect defects.
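
For anyone who wants to try something similar with the stock FreeType
smooth rasterizer (my CartoType code is adapted and not reproduced here),
the sketch below builds a small outline by hand, renders it with
FT_Outline_Get_Bitmap, and compares the result with a bitmap declared in
the code. The shape is a square that completely covers a 4x4 grey-scale
bitmap, so every expected coverage value is 255:

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include FT_OUTLINE_H

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main( void )
    {
      FT_Library  library;
      FT_Outline  outline;
      FT_Bitmap   bitmap;

      /* a square of 4x4 pixels; coordinates are 26.6 fixed point, */
      /* i.e. 64 units per pixel                                   */
      FT_Vector  points[4]   = { { 0, 0 }, { 256, 0 },
                                 { 256, 256 }, { 0, 256 } };
      char       tags[4]     = { FT_CURVE_TAG_ON, FT_CURVE_TAG_ON,
                                 FT_CURVE_TAG_ON, FT_CURVE_TAG_ON };
      short      contours[1] = { 3 };

      unsigned char  actual[4 * 4];
      unsigned char  expected[4 * 4];

      /* the expected bitmap, declared as part of the code: the square */
      /* covers every pixel completely, so each coverage value is 255  */
      memset( expected, 0xFF, sizeof ( expected ) );
      memset( actual, 0, sizeof ( actual ) );

      if ( FT_Init_FreeType( &library ) )
        return EXIT_FAILURE;

      outline.n_contours = 1;
      outline.n_points   = 4;
      outline.points     = points;
      outline.tags       = tags;
      outline.contours   = contours;
      outline.flags      = FT_OUTLINE_NONE;

      memset( &bitmap, 0, sizeof ( bitmap ) );
      bitmap.rows       = 4;
      bitmap.width      = 4;
      bitmap.pitch      = 4;
      bitmap.pixel_mode = FT_PIXEL_MODE_GRAY;
      bitmap.num_grays  = 256;
      bitmap.buffer     = actual;

      if ( FT_Outline_Get_Bitmap( library, &outline, &bitmap )  ||
           memcmp( expected, actual, sizeof ( expected ) ) != 0 )
      {
        fprintf( stderr, "rendered bitmap differs from expected bitmap\n" );
        return EXIT_FAILURE;
      }

      FT_Done_FreeType( library );
      return EXIT_SUCCESS;
    }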

Best regards,

Graham Asher





