octave-maintainers

Re: About diagonal matrices


From: Jaroslav Hajek
Subject: Re: About diagonal matrices
Date: Thu, 26 Feb 2009 09:37:31 +0100

On Thu, Feb 26, 2009 at 8:36 AM, John W. Eaton <address@hidden> wrote:
> On 26-Feb-2009, Jaroslav Hajek wrote:
>
> | You mean that the first one is correct? Because it gives sparse(NaN(3,1)) !!!
> | If yes, then I 100% agree with you, but it's the same situation as
> | with the third one.
> | At least, I don't see any difference.
> |
> | Please, David, before you start modifying anything, I kindly ask you
> | to read the whole conversation and at least try to explain to me what
> | *exactly* you would like to change and why the current behaviour is
> | wrong.
>
> Right, after the recent discussion, I now agree that filling with NaNs
> is probably not a good thing.  As far as I can tell, this only affects
> operations that can generate NaNs (correct me if that is not right)
> and not generating them is probably more useful behavior.  OTOH, we
> should fill with other values (for example, when doing something like
> "speye (3) + 1").
>

It's just a matter of definition. There's no "mathematical" reasoning
behind either choice, because it depends on which part of the math you
apply first.
Concerning sparse * matrix multiplication, just about every numerical
package I know ignores the 0*NaN products as an inherent consequence
of the underlying algorithm: Matlab, Scilab, R, Maple, and even
libraries like BLAS (with triangular and banded matrices) or the good
old SPARSKIT. Checks of other packages are welcome; I'd be quite
surprised if you showed me numerical software that does otherwise.
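To illustrate why this falls out of the algorithm (a minimal pure-Python
sketch of the standard CSR matrix-vector product, not Octave's actual
implementation):

```python
import math

def csr_matvec(data, indices, indptr, v):
    """Multiply a CSR sparse matrix by a dense vector.
    Only stored nonzeros are visited, so a NaN in v that lines up
    with a structural zero contributes nothing -- no 0 * NaN product
    is ever formed."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * v[indices[k]]
    return y

# A = [1 0; 0 2] in CSR form; v = [1; NaN]
y = csr_matvec([1.0, 2.0], [0, 1], [0, 1, 2], [1.0, math.nan])
print(y)  # [1.0, nan]: row 0 never touches the NaN, while a dense
          # multiply would compute 1*1 + 0*NaN = NaN for that row.
```

Propagating the NaN into row 0 would require extra logic (and a fill-in
pass) that no inner loop over stored nonzeros naturally performs.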

The scalar * sparse operation is more subtle. There, doing the
additional check for NaN and Inf is trivial and hence does not
adversely affect performance; that's probably why some packages
decided to "fix" it. Scilab and Maple do what Octave does, and what
earlier versions of Matlab did. R, on the contrary, fills up the
matrix, but it also converts it to a full one.
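The Octave/Scilab/Maple behaviour amounts to scaling only the stored
entries, so the NaN * 0 products for structural zeros are never
evaluated. A pure-Python sketch (again hypothetical, not Octave's
code):

```python
import math

def scale_csr_data(data, s):
    """Scale only the stored entries of a sparse matrix.
    Structural zeros are never touched, so s * 0 (e.g. NaN * 0)
    is never evaluated -- the Octave/Scilab/Maple behaviour."""
    return [s * x for x in data]

# speye(3) in CSR form: three stored ones on the diagonal
data, indices, indptr = [1.0, 1.0, 1.0], [0, 1, 2], [0, 1, 2, 3]
scaled = scale_csr_data(data, math.nan)

# Densify to inspect the result: off-diagonal entries stay 0.0,
# whereas the R-style "fill up" behaviour would turn them all
# into NaN (and make the matrix full).
dense = [[0.0] * 3 for _ in range(3)]
for i in range(3):
    for k in range(indptr[i], indptr[i + 1]):
        dense[i][indices[k]] = scaled[k]
print(dense[0])  # [nan, 0.0, 0.0]
```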

Basically, if David would like to "fix" only the latter case, then I
could eventually agree, but I would like an explanation of why it
should be inconsistent with the rest.

I agree the current behaviour can be confusing in certain cases, but
the converse is also true. This stuff is just confusing by nature. I
think we should simply document it somewhere and leave it in its
current state unless there's a general demand to do otherwise. I don't
think Octave should pretend that it always does the One Right Thing.
Let the users know what is going on behind the scenes.
Most people will simply not care - these are corner cases. Unless
things slow down noticeably - then I think a typical user will want to
know why sparse * vector multiplication is suddenly slower, or why
scalar * sparse can eat up all memory and hang his computation, and
why he should be happy about that. And most likely, he would also ask
for a way to avoid it, other than just "insert proper checks in your
code".

Note that insisting that Octave's operations behave as defined
sequences of floating-point operations, rather than as reasonable
approximations to their mathematical meaning, vastly complicates most
optimizations.
Even in Fortran, for example, a compiler is allowed to replace X*Y +
X*Z with X*(Y+Z) - which violates NaN consistency - but not vice
versa, because explicit parentheses *must* be honored; that is what
gives the user control over it.
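A quick numeric illustration of how that reassociation changes the
floating-point result (plain Python floats stand in for the Fortran
expressions here):

```python
import math

# X*Y and X*Z both overflow, but Y+Z cancels exactly, so the two
# "mathematically equal" forms disagree in floating point.
x, y, z = 2.0, 1e308, -1e308
left = x * y + x * z   # inf + (-inf) -> nan (2e308 overflows to inf)
right = x * (y + z)    # 2.0 * 0.0   -> 0.0
print(left, right)     # nan 0.0
```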

NaNs and Infs shouldn't, IMHO, be regarded as a new kind of arithmetic
that redefines the whole of math, but as a tool for dealing with
floating-point exceptions. Their whole point is that, rather than
throwing runtime errors, an invalid result may simply never be used,
or may even turn back into a valid one (as in 1/Inf), allowing the
computation to keep running in such cases.
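Both halves of that point can be seen in two lines of arithmetic:

```python
import math

# An invalid operation yields NaN instead of aborting the program...
nan = math.inf - math.inf
print(nan)             # nan

# ...and an "exceptional" intermediate can become valid again:
print(1.0 / math.inf)  # 0.0
```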

regards

-- 
RNDr. Jaroslav Hajek
computing expert
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
url: www.highegg.matfyz.cz


