octave-maintainers

Re: Patch to residue.m


From: Daniel J Sebald
Subject: Re: Patch to residue.m
Date: Sun, 30 Dec 2007 22:04:40 -0600
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.3) Gecko/20041020

Ben Abbott wrote:

> Please consider this patch to residue.m which improves its accuracy
> when pole multiplicity exists.
>
> To capture the improvement, the test scripts were also modified to be
> more stringent.

Ben,

This looks like a fairly modest change.  Still, a few questions.  Why the
change?  What motivated it?  Just improving your code, or did you find an
example that was failing?  (If so, give us the example.)

Observations:

1) Extra space in "group of pole  multiplicity,"

2)   p_group = cumsum (e == 1);
     for ng = 1:max(p_group)

Since (e == 1) is strictly non-negative, rather than max(p_group) it could be 
p_group(end).  That is more efficient, but of course the polynomial order is 
quite small so it doesn't really matter.
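
For example, with a made-up multiplicity vector e of the sort mpoles
returns:

  e = [1 2 3 1 1 2];            # counters restart at 1 for each new pole
  p_group = cumsum (e == 1)     # => [1 1 1 2 3 3], non-decreasing
  p_group(end) == max (p_group) # => 1 (true)

Since the cumulative sum never decreases, the last element is always the
maximum.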

3)  Could you explain the reasoning of the various tolerance parameters in the 
code?  For example, I see this code for determining if a pole is zero or not:

 small = max (abs (p));
 small = max ([small, 1]) * 1e-12 * (1 + numel (p))^2;
 p(abs (p) < small) = 0;

I don't understand what the (1 + numel (p))^2 does.  The more poles there
are, the larger the tolerance becomes?  Also, might you want to define 1e-12 
with respect to the machine precision, say

 small = max ([small, 1]) * eps*1e4 * (1 + numel (p))^2;

(On my machine,

octave:14> eps
ans =  2.2204e-16

so that eps*1e4 is about 1e-12.)
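
Just to see how that factor scales with the number of poles (a quick
check of my own, not from your patch):

  for n = [2 5 10 20]
    printf ("numel (p) = %2d  ->  small >= %g\n", n, 1e-12 * (1 + n)^2);
  endfor

which gives 9e-12, 3.6e-11, 1.21e-10 and 4.41e-10, i.e., the threshold
grows quadratically with the number of poles.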


Why is it important to cast poles to 0?  Is there something later in the code 
that depends on that?

Now, here is the part I'm not so comfortable with.  There is the code above 
that says a pole is zero if it is within ~1e-12 of zero.  But shortly 
thereafter the code does this:

 [e, indx] = mpoles (p, toler, 1);

where poles within "toler" (set to 0.001 in the code) are considered
equal.  The requirement for a pole to be declared zero is far stricter
than the requirement for two poles to be declared equal.  Doesn't that
seem like a contradiction?
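
To put numbers on it, with a contrived pole set of my own:

  p = [1.0; 1.0005; 0];     # two nearby order-one poles plus a zero
  small = max ([max(abs (p)), 1]) * 1e-12 * (1 + numel (p))^2
                            # => ~1.6e-11, threshold for snapping to zero
  toler = 0.001;            # threshold at which mpoles groups poles

The two thresholds differ by roughly eight orders of magnitude, so (if I
read mpoles right) 1.0 and 1.0005 get merged into a double pole while
only a value within ~1.6e-11 of zero is ever set to 0.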

I'm not sure I'm comfortable with averaging the poles either:

   p(m) = mean (p(m));

From a practical standpoint it may be a good idea, but from a numerical 
standpoint it seems a little unorthodox.  I'd say let the numerical effects 
remain and let the end user decide what to do.  Why is this needed?  I
suppose you are dealing with finding higher-order residues; could you
compute the mean value for use in the rest of the routine, but leave the
values non-averaged when returned from the routine?  What is the user
expecting?
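
Something along these lines, reusing your p_group loop (just a sketch;
p_in is a name I made up):

  p_in = p;                  # keep the caller's poles untouched
  for ng = 1:p_group(end)
    m = find (p_group == ng);
    p(m) = mean (p(m));      # averaged values used only internally
  endfor
  ## ... compute the residues from the averaged p ...
  ## ... and return p_in so the poles come back non-averaged ...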

I guess my main concerns are: 1) is there a standard way of dealing with
"nearly zero" in Octave?  And 2) should "toler" be replaced by "small"?

Dan

