Re: How to test if the current line contains only white-space?


From: Emanuel Berg
Subject: Re: How to test if the current line contains only white-space?
Date: Tue, 24 Nov 2015 04:01:29 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.4 (gnu/linux)

Rolf Ade <rolf@pointsman.de> writes:

>> That doesn't sound like a good feeling and to track
>> such influences is not a straightforward task.
>> But this concern never bothered me and I can't
>> recall ever being punished for not being concerned,
>> either, so if you are lucky (?) you are not in any
>> trouble, you are just worried because potentially
>> there could be trouble.
>
> So you cultivate a style of: invent something
> useful and release it, as far as it works for you.
> If it's useful, your users will put up with the
> noise from the influences not taken into account,
> anyway? (Not necessarily a bad style, btw.)

The best thing is to write software that is used.
Even if sometimes you are the sole user, it is still
used. With an editor, bugs will be found through use,
and until then there is no real harm done, as this
isn't something used in a space shuttle or anything
like that.

If you, as a matter of principle, want to be serious
about finding bugs, I don't have a snappy answer for
how to root out the issues you describe.

But here is an article I wrote some years ago on
testing [1] - you might find it useful, though there
is no mention of "influences". If you write such
a paragraph, be sure to mail it to me and I'll include
it :)

Testing

Testing is often put forward as a way to find bugs at
an early stage. It requires little effort and may pay
off hugely: not having to retract shipped copies,
publish patches, and so on.

This holds in particular when testing is compared to
Formal Verification, the more scientific (rather than
engineering) approach. With verification, a vast
analytic effort produces a result that can be hard to
grasp. By contrast, testing reveals bugs that can be
fixed immediately upon detection. Also, testing
exercises the real thing. Verification requires
a model, which may itself be wrong; success then only
proves the model correct, not the application itself.

The key aspect of testing is to actually do it.
Already at that point, there is a huge advantage over
not testing at all. Beyond that, it is uncertain
whether more refined methods produce better results.

Consider the volume factor: if a simple test method
can be employed massively, it is probably preferable
to a more refined method that only covers patches of
the test field.

If a plethora of test methods is employed, each test
should have an explicit purpose and/or a distinct
scope. In practice, make a list, and have all tests
invoked automatically and sequentially by a script or
shell function, as in the sketch below. And don't
forget the README file!
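
To make this concrete, here is a minimal sketch of
such a sequential runner, using ERT, the test library
that ships with Emacs. The file and test names are
made up for illustration:

  ;; my-pkg-tests.el --- a hypothetical test list
  (require 'ert)

  (ert-deftest my-pkg-smoke ()
    "Explicit purpose: the code loads and runs at all."
    (should (= (+ 1 1) 2)))

  (ert-deftest my-pkg-mapcar ()
    "Distinct scope: exercise one specific computation."
    (should (equal (mapcar #'1+ '(1 2 3)) '(2 3 4))))

  ;; invoke every test, sequentially, from the shell:
  ;;   emacs -Q --batch -l my-pkg-tests.el \
  ;;         -f ert-run-tests-batch-and-exit

The batch form exits with a non-zero status if any
test fails, so the same line drops straight into
a Makefile or CI script.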

For example, one test could enforce that every line
of source code is executed at least once. Most likely
this will require several invocations. Here, again,
modular code pays off: each function should be called
at least once and its return value fetched and
examined; each procedure should be invoked and brought
to conclusion; and each interface should be covered in
full, including optional parameters.
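
Sticking with the question that started this thread,
here is what that can look like. `blank-line-p' is
a hypothetical helper written for this example, not
a built-in:

  (require 'ert)

  (defun blank-line-p ()
    "Non-nil if the current line is empty or whitespace-only."
    (save-excursion
      (beginning-of-line)
      (looking-at-p "[ \t]*$")))

  (ert-deftest blank-line-p-return-value ()
    "Call the function; fetch and examine the return value."
    (with-temp-buffer
      (insert " \t ")
      (should (blank-line-p))       ; whitespace-only line
      (insert "x")
      (should-not (blank-line-p)))) ; no longer blank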

Beneath that, it gets more fine-grained, as the
control logic - iteration and branching - must be
taken into account. To help humans visualize the
execution, a directed graph, with cycles for the
loops, could illustrate the execution logic and flow.

The test that every line of code executes sensibly is
meant to catch bugs that are not syntactic - code that
compiles but, once executed, either brings execution
to a halt or, worse, produces a bogus result further
down the road.

Another way to test focuses on input data. This
method stems from the notion of a piece of software
as a black box that maps inputs to outputs. Automatic,
brute-force testing with random input data should not
be shunned. The input data must be valid, but need not
make sense: with volume, in time, the inputs that do
make sense will be tested as well.
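
As a sketch of such brute-force testing, reusing the
hypothetical `blank-line-p' from above, one can feed
it random lines and compare its verdict against an
independent, string-based oracle:

  (require 'ert)

  (defun my-random-line ()
    "Return a random mix of spaces, tabs, and letters."
    (mapconcat (lambda (_)
                 (let ((chars " \t\tabc"))
                   (string (aref chars (random (length chars))))))
               (number-sequence 1 (random 10))
               ""))

  (ert-deftest blank-line-p-random ()
    "Brute force: 1000 random lines against an oracle."
    (dotimes (_ 1000)
      (let ((line (my-random-line)))
        (with-temp-buffer
          (insert line)
          (should (eq (and (blank-line-p) t)
                      (and (string-match-p
                            "\\`[ \t]*\\'" line)
                           t)))))))

The oracle is deliberately simpler than the code under
test; when the two disagree, one of them is wrong, and
either way a bug has been found.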

If a more refined approach is desired, testing can be
based on input cases that are qualitatively distinct.
Set these up manually if need be. For example, with
the data of a student database, such cases could be
the empty set (= zero students), a single student, all
students, only female students, and so on. Cases that
strike you as unrealistic or even impossible should
not be avoided as long as they are valid - on the
contrary, those border cases can reveal shortcomings
that sensible inputs cannot. Indeed, the purpose of
testing is to break the examined application, thus
revealing the bug that made the break possible.
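
In that spirit, here is a manual list of qualitatively
distinct cases for the hypothetical `blank-line-p'
above, with the empty buffer as the border case:

  (require 'ert)

  (ert-deftest blank-line-p-cases ()
    "One qualitatively distinct input per entry."
    (dolist (case '((""      . t)   ; border case: empty buffer
                    (" \t "  . t)   ; whitespace only
                    ("x"     . nil) ; plain text
                    ("  x  " . nil) ; text amid whitespace
                    ("\n"    . t))) ; empty line, then newline
      (with-temp-buffer
        (insert (car case))
        (goto-char (point-min))
        (should (eq (and (blank-line-p) t)
                    (cdr case))))))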

(Article from 2013. Minor revision 2015.)

[1] http://user.it.uu.se/~embe8573/testing.txt

-- 
underground experts united
http://user.it.uu.se/~embe8573



