On 07/31/2016 10:44 AM, Doug Stewart wrote:
On Sun, Jul 31, 2016 at 11:31 AM, John W. Eaton <address@hidden> wrote:
On 07/31/2016 10:43 AM, Carnë Draug wrote:
So here's a counter proposal. Let's change all xtests into failing tests
(xtests on statistical tests can set the rand seed), and add tests that
we know are failing from the bug tracker.
To avoid a flood of bug reports about the failing tests that we will
waste a lot of time marking as duplicate and closing, I think we
will also need to tag these tests with a bug report number and make
it VERY obvious that they are known failures. The test summary
should be modified to display the failures as known, and there
should be some indication of what it means to see a known failure.
Maybe we can handle this by modifying the xtest feature so that it
can accept a bug report number or URL.
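As a rough sketch of what that could look like in the BIST syntax, with a hypothetical `<N>` tag naming the bug report (this syntax is invented here for illustration, not an existing xtest feature, and the bug number is a placeholder):

```
%!xtest <12345>   # hypothetical: <12345> would point at bug #12345
%! [br, ar] = residue (r, p, k, e);
%! assert (br, b, 1e-8);
%! assert (ar, a, 1e-8);
```

The test summary could then count these separately as known failures instead of lumping them in with new regressions.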
Showing known/unknown failures would be helpful to me because I look
at the summary to see whether a change I just made introduced any
NEW problems. If there are suddenly many failing tests, I don't
think I'll be able to remember what the "expected" number of failing
tests is, especially if new failing tests are constantly being added
to the test suite. Then it will be more difficult to know whether a
change has screwed something up, or if new failing tests have been
added.
jwe
For this error, the test is right at the limit of precision that we use.
We can tweak residue to work for this example, but then it will fail for
the more common examples.
So -- should we even be testing examples that are at, or past, the limit
of precision?
processing /home/doug/octavec/octave/scripts/polynomial/residue.m
***** xtest
z1 = 7.0372976777e6;
p1 = -3.1415926536e9;
p2 = -4.9964813512e8;
r1 = -(1 + z1/p1)/(1 - p1/p2)/p2/p1;
r2 = -(1 + z1/p2)/(1 - p2/p1)/p2/p1;
r3 = (1 + (p2 + p1)/p2/p1*z1)/p2/p1;
r4 = z1/p2/p1;
r = [r1; r2; r3; r4];
p = [p1; p2; 0; 0];
k = [];
e = [1; 1; 1; 2];
b = [1, z1];
a = [1, -(p1 + p2), p1*p2, 0, 0];
[br, ar] = residue (r, p, k, e);
assert (br, b, 1e-8);
assert (ar, a, 1e-8);
!!!!! known failure
The other aspect of this xtest is that it suggests an arcane numerical
bug when the real problem might be something more obvious. The failure
in this case doesn't seem to be precision related, at least not
directly; it is a dimension mismatch:
octave:23> assert (br, b, 1e-8);
error: ASSERT errors for: assert (br,b,1e-8)
Location | Observed | Expected | Reason
. O(1x1) E(1x2) Dimensions don't match
octave:23> br
br = 7.0373e+06
octave:24> b
b =
1.0000e+00 7.0373e+06
octave:25> assert (ar, a, 1e-8);
error: ASSERT errors for: assert (ar,a,1e-8)
Location | Observed | Expected | Reason
. O(1x4) E(1x5) Dimensions don't match
octave:28> ar
ar =
3.6412e+09 1.5697e+18 0.0000e+00 0.0000e+00
octave:29> a
a =
1.0000e+00 3.6412e+09 1.5697e+18 0.0000e+00 0.0000e+00
Is there an assumed value of 1 for the leading (highest-order)
coefficient of the polynomial (a common assumption) somewhere along the
line that is being lost? Or is that 1 being dropped because of some tiny
loss of precision somewhere? I'd point out that the poles in this
example are far into the left half plane and not particularly close to
one another, so my intuition is that numerical issues shouldn't be a
problem.
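One way a leading 1 could vanish is if, somewhere in the reconstruction, coefficients are trimmed against a tolerance scaled by the largest coefficient. Here is a minimal sketch of that hypothesis; the tolerance value and trimming rule are assumptions for illustration, not code taken from residue.m:

```octave
## Illustrative only: a relative trimming rule, not Octave's actual
## residue internals.
b = [1, 7.0372976777e6];          # expected numerator; leading coeff is 1
tol = 1e-6 * max (abs (b));       # relative tolerance, ~7.04 here
first = find (abs (b) > tol, 1);  # leading coefficients below tol ...
b_trimmed = b(first:end);         # ... get stripped as "negligible"
disp (b_trimmed)                  # leaves only 7.0373e+06, matching br
```

Because z1 is ~7e6 (and p1*p2 is ~1.6e18 in the denominator), a leading coefficient of exactly 1 is many orders of magnitude below the largest coefficient, so any magnitude-relative trim like this would drop it, which is consistent with br and ar each coming back one element short.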