octave-maintainers

Re: xtest vs test


From: Daniel J Sebald
Subject: Re: xtest vs test
Date: Sun, 31 Jul 2016 14:36:11 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.2

On 07/31/2016 02:09 PM, Daniel J Sebald wrote:
On 07/31/2016 12:22 PM, Daniel J Sebald wrote:
On 07/31/2016 10:44 AM, Doug Stewart wrote:


On Sun, Jul 31, 2016 at 11:31 AM, John W. Eaton <address@hidden
<mailto:address@hidden>> wrote:

    On 07/31/2016 10:43 AM, Carnë Draug wrote:

        So here's a counter proposal.  Let's change all xtest into
        failing tests
        (xtests on statistical tests can set rand seed), and add tests
        that we know
        are failing from the bug tracker.


    To avoid a flood of bug reports about the failing tests that we will
    waste a lot of time marking as duplicate and closing, I think we
    will also need to tag these tests with a bug report number and make
    it VERY obvious that they are known failures.  The test summary
    should be modified to display the failures as known, and there
    should be some indication of what it means to see a known failure.
    Maybe we can handle this by modifying the xtest feature so that it
    can accept a bug report number or URL.
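
    For illustration only -- this is not existing syntax, and the bug
    number and test body below are placeholders -- something along these
    lines:

        %!xtest <12345>   # hypothetical tag carrying a bug-tracker number
        %! ## known-failing test body, copied from the bug report, goes here
        %! assert (known_failing_call (), expected_value)  # placeholder names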

    Showing known/unknown failures would be helpful to me because I look
    at the summary to see whether a change I just made introduced any
    NEW problems.  If there are suddenly many failing tests, I don't
    think I'll be able to remember what the "expected" number of failing
    tests is, especially if new failing tests are constantly being added
    to the test suite.  Then it will be more difficult to know whether a
    change has screwed something up, or if new failing tests have been
    added.

    jwe




For this error the test is very close to the limit of precision that we
use.
We can tweak residue to work for this example, but then it will fail for
the more common examples.
So -- should we even test examples that are close to, or past, the limit
of precision?


processing /home/doug/octavec/octave/scripts/polynomial/residue.m
***** xtest
  z1 =  7.0372976777e6;
  p1 = -3.1415926536e9;
  p2 = -4.9964813512e8;
  r1 = -(1 + z1/p1)/(1 - p1/p2)/p2/p1;
  r2 = -(1 + z1/p2)/(1 - p2/p1)/p2/p1;
  r3 = (1 + (p2 + p1)/p2/p1*z1)/p2/p1;
  r4 = z1/p2/p1;
  r = [r1; r2; r3; r4];
  p = [p1; p2; 0; 0];
  k = [];
  e = [1; 1; 1; 2];
  b = [1, z1];
  a = [1, -(p1 + p2), p1*p2, 0, 0];
  [br, ar] = residue (r, p, k, e);
  assert (br, b, 1e-8);
  assert (ar, a, 1e-8);
!!!!! known failure

The other aspect of this xtest() is that it suggests an arcane bug, when
the actual problem might be something more obvious.

The failure in this case doesn't seem to be precision related, at least
not directly.  It is a dimension mismatch:

octave:23> assert (br, b, 1e-8);
error: ASSERT errors for:  assert (br,b,1e-8)

   Location  |  Observed  |  Expected  |  Reason
      .          O(1x1)       E(1x2)      Dimensions don't match
octave:23> br
br =    7.0373e+06
octave:24> b
b =

    1.0000e+00   7.0373e+06

octave:25> assert (ar, a, 1e-8);
error: ASSERT errors for:  assert (ar,a,1e-8)

   Location  |  Observed  |  Expected  |  Reason
      .          O(1x4)       E(1x5)      Dimensions don't match
octave:28> ar
ar =

    3.6412e+09   1.5697e+18   0.0000e+00   0.0000e+00

octave:29> a
a =

    1.0000e+00   3.6412e+09   1.5697e+18   0.0000e+00   0.0000e+00


Is there an assumed leading coefficient of 1 for the polynomial (the
common monic convention) somewhere along the line that is getting lost?
Or is that 1 being lost because of some tiny loss of precision somewhere?

I'd point out that the poles in this example are far into the left half
plane and not really near one another, so my intuition is that numerical
issues shouldn't be a problem.
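
Just to put a rough number on "not really near one another" (my own quick
check, using the values from the test):

  p1 = -3.1415926536e9;
  p2 = -4.9964813512e8;
  abs (p1 - p2) / max (abs ([p1, p2]))   # => about 0.84, i.e. well separated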

Actually, I see that there are two poles at zero placed into the input
array, but that's not the issue.

The problem is that there is something inherently bad about this example.
There is a subroutine rresidue() inside residue.m that appears to have
some kind of limitation with poles that far from the real(s) = 0 line.
Given the values of p1 and p2 in this example, the first (forward) form of
residue gives a matrix warning (I've printed out A and B from line 268):

octave:153> [rrr ppp kkk eee] = residue(b,a);
A =

    0.0000e+00   0.0000e+00   0.0000e+00   0.0000e+00
    3.6412e+09   0.0000e+00   3.1416e+09   4.9965e+08
    1.5697e+18   3.6412e+09   0.0000e+00   5.1200e+02
    0.0000e+00   1.5697e+18   0.0000e+00  -1.6085e+12

B =

    0.0000e+00
    0.0000e+00
    1.0000e+00
    7.0373e+06

warning: matrix singular to machine precision
warning: called from
     residue at line 268 column 5

Something is causing that rresidue() subroutine to prepad a zero in each
column--something that doesn't happen if I choose p1 and p2 to be more
reasonable values.
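
For comparison, here is the same construction with moderate pole values
(my own arbitrary choice, not something from the test suite); by the
reasoning above, the round trip should behave here:

  z1 = 7;  p1 = -3;  p2 = -5;
  r1 = -(1 + z1/p1)/(1 - p1/p2)/p2/p1;
  r2 = -(1 + z1/p2)/(1 - p2/p1)/p2/p1;
  r3 = (1 + (p2 + p1)/p2/p1*z1)/p2/p1;
  r4 = z1/p2/p1;
  r = [r1; r2; r3; r4];
  p = [p1; p2; 0; 0];
  k = [];
  e = [1; 1; 1; 2];
  [br, ar] = residue (r, p, k, e)
  ## expect br ~ [1, z1] and ar ~ [1, -(p1+p2), p1*p2, 0, 0]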

So I'm wondering if the routine is being given data for which some
operation (deconvolution?) is failing.  I've tried changing the tolerance,
but that seems to have no effect.  Should such a test even be run?

I can see where the loss of the 1 at the front of ar and br is coming from. In the rresidue sub-routine is the following test:

  ## Check for leading zeros and trim the polynomial coefficients.
  if (isa (r, "single") || isa (p, "single") || isa (k, "single"))
    small = max ([max(abs(pden)), max(abs(pnum)), 1]) * eps ("single");
  else
    small = max ([max(abs(pden)), max(abs(pnum)), 1]) * eps;
  endif

which is followed by polyreduce, a routine meant to strip leading "zeros". The point of this code must be to get rid of some type of pathological case, I guess. However, for this example, those distant poles lead to a significant value for the variable 'small':

octave:175>  [br, ar] = residue (r, p, k, e);
small =  348.54
pnum =

  -9.6296e-35  -4.2894e-25   1.0000e+00   7.0373e+06

pden =

   1.0000e+00   3.6412e+09   1.5697e+18   0.0000e+00   0.0000e+00

Consequently, that 1.000 at the front of the denominator is getting stripped when it shouldn't be.  (The two leading pnum values, though, should be stripped.)  So that formula for 'small' doesn't do exactly what was intended.
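
A back-of-envelope check of where that 348.54 comes from, using the values
printed above -- 'small' is dominated by the largest coefficient of pden,
which is p1*p2:

  p1 = -3.1415926536e9;
  p2 = -4.9964813512e8;
  max ([abs(p1*p2), 7.0373e6, 1]) * eps   # => about 348.5
  ## ...which dwarfs the leading 1 of pden, so it gets zeroed and stripped.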

Now, if I force small to something that makes pden remain as is, THEN I get the numerical problem:

error: ASSERT errors for:  assert (br,b,1e-8)

   Location  |  Observed   |  Expected   |  Reason
     (2)      7037297.6777  7037297.6777   Abs err 9.7789e-08 exceeds tol 1e-08

OK, so in summary, two issues here:

1) The algorithm for computing 'small' in rresidue() doesn't seem as nuanced as it should be (one possible direction is sketched below).

2) The 1e-8 tolerance for this type of operation may be too strict, given that there are singular matrices occurring along the way.  Do we want to do a deeper dive on this?
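
For what it's worth, one possible direction on 1) -- just a sketch of the
idea, not a tested patch, and I'm guessing at how 'small' is applied inside
rresidue(): use a separate threshold for each polynomial, and protect the
leading coefficient of pden when the denominator degree is already known
from the number of poles.  Using the pnum/pden values printed above:

  pnum = [-9.6296e-35, -4.2894e-25, 1.0000e+00, 7.0373e+06];
  pden = [1.0000e+00, 3.6412e+09, 1.5697e+18, 0, 0];
  npoles = 4;                            # numel (p) in this example
  small_num = max ([max(abs(pnum)), 1]) * eps;
  small_den = max ([max(abs(pden)), 1]) * eps;
  pnum(abs (pnum) < small_num) = 0;
  pnum = polyreduce (pnum)               # -> [1, 7.0373e6]; noise stripped
  if (numel (pden) == npoles + 1)
    ## degree is known, so never zero/strip the leading coefficient
    pden(1 + find (abs (pden(2:end)) < small_den)) = 0;
  else
    pden(abs (pden) < small_den) = 0;
    pden = polyreduce (pden);
  endif
  pden                                   # -> the leading 1 survives

Whether something like this belongs in rresidue() itself, or whether the
trimming step needs a deeper rethink, I don't know.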

Dan


