octave-maintainers

Re: xtest vs test


From: Daniel J Sebald
Subject: Re: xtest vs test
Date: Sun, 31 Jul 2016 14:09:42 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.2

On 07/31/2016 12:22 PM, Daniel J Sebald wrote:
On 07/31/2016 10:44 AM, Doug Stewart wrote:


On Sun, Jul 31, 2016 at 11:31 AM, John W. Eaton <address@hidden> wrote:

    On 07/31/2016 10:43 AM, Carnë Draug wrote:

        So here's a counter proposal.  Let's change all xtest into
        failing tests
        (xtests on statistical tests can set rand seed), and add tests
        that we know
        are failing from the bug tracker.
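
        As a concrete sketch of the rand-seed idea (illustrative only, not
        an actual test from the suite), a statistical xtest converted into
        a plain test can pin the RNG state so its check is deterministic:

        %!test
        %! rand ("state", 42);             ## fix the seed for reproducibility
        %! x = rand (1e5, 1);
        %! assert (mean (x), 0.5, 5e-3);   ## statistical check, now deterministic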


    To avoid a flood of bug reports about the failing tests that we will
    waste a lot of time marking as duplicate and closing, I think we
    will also need to tag these tests with a bug report number and make
    it VERY obvious that they are known failures.  The test summary
    should be modified to display the failures as known, and there
    should be some indication of what it means to see a known failure.
    Maybe we can handle this by modifying the xtest feature so that it
    can accept a bug report number or URL.
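
    One hypothetical shape for such a tagged test (the angle-bracket bug
    number and the summary wording below are illustrative, not existing
    syntax):

    %!xtest <12345>            ## hypothetical: tie this known failure to bug #12345
    %! assert (sin (pi), 0);   ## fails: sin (pi) is ~1.2e-16, not exactly 0

    The summary could then report something like "1 known failure (bug
    #12345)" instead of lumping it in with unexpected failures.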

    Showing known/unknown failures would be helpful to me because I look
    at the summary to see whether a change I just made introduced any
    NEW problems.  If there are suddenly many failing tests, I don't
    think I'll be able to remember what the "expected" number of failing
    tests is, especially if new failing tests are constantly being added
    to the test suite.  Then it will be more difficult to know whether a
    change has screwed something up, or if new failing tests have been
    added.

    jwe




For this error, the test is very close to the limit of precision that we
use.
We can tweak residue to work for this example, but then it will fail for
the more common examples.
So -- should we even test examples that are at or past the limit of
precision?
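
If the concern is purely precision, one middle ground would be a relative
rather than absolute tolerance -- in Octave's assert, a negative tol is
interpreted as a relative tolerance.  A sketch against the test below:

  assert (br, b, -1e-8);   # pass when |br - b| <= 1e-8 * |b|, elementwise
  assert (ar, a, -1e-8);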


processing /home/doug/octavec/octave/scripts/polynomial/residue.m
***** xtest
  z1 =  7.0372976777e6;
  p1 = -3.1415926536e9;
  p2 = -4.9964813512e8;
  r1 = -(1 + z1/p1)/(1 - p1/p2)/p2/p1;
  r2 = -(1 + z1/p2)/(1 - p2/p1)/p2/p1;
  r3 = (1 + (p2 + p1)/p2/p1*z1)/p2/p1;
  r4 = z1/p2/p1;
  r = [r1; r2; r3; r4];
  p = [p1; p2; 0; 0];
  k = [];
  e = [1; 1; 1; 2];
  b = [1, z1];
  a = [1, -(p1 + p2), p1*p2, 0, 0];
  [br, ar] = residue (r, p, k, e);
  assert (br, b, 1e-8);
  assert (ar, a, 1e-8);
!!!!! known failure

The other aspect of this xtest() is that it suggests an arcane bug, when
what is actually incorrect might be something more obvious.

The failure in this case doesn't seem to be precision related, at least
not directly.  The failure is for dimension mismatch:

octave:23> assert (br, b, 1e-8);
error: ASSERT errors for:  assert (br,b,1e-8)

   Location  |  Observed  |  Expected  |  Reason
      .          O(1x1)       E(1x2)      Dimensions don't match
octave:23> br
br =    7.0373e+06
octave:24> b
b =

    1.0000e+00   7.0373e+06

octave:25> assert (ar, a, 1e-8);
error: ASSERT errors for:  assert (ar,a,1e-8)

   Location  |  Observed  |  Expected  |  Reason
      .          O(1x4)       E(1x5)      Dimensions don't match
octave:28> ar
ar =

    3.6412e+09   1.5697e+18   0.0000e+00   0.0000e+00

octave:29> a
a =

    1.0000e+00   3.6412e+09   1.5697e+18   0.0000e+00   0.0000e+00


Is there an assumed value of 1 for the polynomial's leading coefficient (a
common monic-polynomial assumption) somewhere along the line that is being
lost?  Or is that 1 being dropped because of some tiny loss of precision
somewhere?
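
A quick diagnostic sketch of that hypothesis (assuming the variables from
the xtest above are in scope; the negative tol makes assert use a relative
tolerance, to sidestep the huge coefficient magnitudes):

  assert ([1, br], b, -1e-8);   # prepend the presumed lost leading 1
  assert ([1, ar], a, -1e-8);

If those pass, residue is returning the expected polynomials with just the
leading 1 dropped.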

I'd point out that the poles in this example are far into the left half
plane and not very near one another, so my intuition is that numerical
issues shouldn't be a problem.

Actually, I see that there are two poles at zero placed into the input array, but that's not the issue.

The problem is that there is something inherently bad about this example. The subroutine rresidue() inside residue.m appears to have some kind of limitation with poles this far from the real(s) = 0 line (the imaginary axis). Given the values of p1 and p2 in this example, the first calling form of residue gives a singular-matrix warning (and I've printed out A and B from line 268):

octave:153> [rrr ppp kkk eee] = residue(b,a);
A =

   0.0000e+00   0.0000e+00   0.0000e+00   0.0000e+00
   3.6412e+09   0.0000e+00   3.1416e+09   4.9965e+08
   1.5697e+18   3.6412e+09   0.0000e+00   5.1200e+02
   0.0000e+00   1.5697e+18   0.0000e+00  -1.6085e+12

B =

   0.0000e+00
   0.0000e+00
   1.0000e+00
   7.0373e+06

warning: matrix singular to machine precision
warning: called from
    residue at line 268 column 5

Something is causing that rresidue() subroutine to prepad a zero in each column--something that doesn't happen if I choose p1 and p2 to be more reasonable values.
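
For reference, Octave's prepad pads at the front (with zeros by default) up to the requested length, which is exactly the shape showing up in the A matrix above:

  prepad ([3.1416e9; 4.9965e8], 3)   # => [0; 3.1416e9; 4.9965e8]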

So, I wonder if the routine is being given data for which some operation (deconvolution?) is failing. I've tried changing the tolerance, but that seems to have no effect. Should such a test even be run?

Dan


