octave-maintainers

Re: About diagonal matrices


From: dbateman
Subject: Re: About diagonal matrices
Date: Sun, 1 Mar 2009 10:33:12 -0800 (PST)



Jaroslav Hajek-2 wrote:
> 
> It's just a matter of definition. There's no "mathematical" reasoning
> because it depends on what part of the math you use first.
> Concerning the sparse * matrix multiplication, just about every
> numerical software I know ignores the 0*NaN products as an inherent
> consequence of the underlying algorithm - Matlab, Scilab, R, Maple,
> even libraries like BLAS (with triangular and banded matrices) or the
> good old SPARSKIT. I welcome checks for others.
> I'll be quite surprised if you show me a numerical software that does
> otherwise.
> 
> The scalar * sparse operation is more subtle. Doing the additional
> check for NaN and Inf is trivial and hence does not adversely affect
> performance. That's probably why some of them decided to "fix" it.
> Scilab and Maple do what Octave does, and what earlier versions of
> Matlab did. R, on the contrary, fills up the matrix; but it also
> converts it to a full one.
> 
> Basically, if David would only like to "fix" the latter case, then I
> could eventually agree, but I would like an explanation of why it
> should be inconsistent with the rest.
> 
> I agree the current behaviour can be confusing in certain cases, but
> the converse is also true. This stuff is just confusing by nature. I
> think we should just document it somewhere and leave it in its current
> state unless there's a general demand to do otherwise. I don't think
> Octave should pretend that it always does the One Right Thing. Let the
> users know what is going on behind the scenes.
> Most people will just not care - these are corner cases. Unless things
> slow down noticeably - then I think a typical user will want to know
> why sparse * vector multiplication is suddenly slower, or why scalar *
> sparse can eat up all memory and hang his computation, and why he
> should be happy about that. And most likely, he would also ask for a
> way to avoid it, other than just "insert proper checks in your code".
> 
> Note that insisting that Octave's operations should behave as defined
> sequences of floating-point operations, rather than as reasonable
> approximations to their mathematical meaning, vastly complicates most
> optimizations.
> For example, even in Fortran a compiler is allowed to replace X*Y +
> X*Z by X*(Y+Z) - which violates NaN consistency - but not vice versa,
> because explicit parentheses *must* be honored; that's to give the
> user control over it.
> 
> The NaNs and Infs shouldn't, IMHO, be regarded as a new kind of
> arithmetic that redefines the whole math, but as a tool to deal with
> floating-point exceptions. Their whole point, rather than throwing
> runtime errors, is that an invalid result may actually not be used, or
> may even validate itself (as in 1/Inf), allowing your computation to
> continue in such cases.
> 
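For concreteness, here is a minimal sketch of why the 0*NaN products vanish in sparse * vector multiplication. This is Python with invented helper names, not the actual code of Octave, Matlab, or any BLAS; the point is only that the inner loop visits stored entries, so a NaN in the dense operand that lines up with an implicit zero is never multiplied at all.

```python
import math

def sparse_dot(row, dense):
    """Dot product of a sparse row, stored as {index: value}, with a dense list."""
    total = 0.0
    for i, v in row.items():   # only stored (nonzero) entries are visited
        total += v * dense[i]
    return total

row = {0: 1.0}                 # sparse representation of [1, 0, 0]
dense = [2.0, math.nan, math.nan]

print(sparse_dot(row, dense))  # 2.0 - the implicit 0*NaN products are never formed

# A dense loop over all three entries would propagate the NaN instead:
print(sum(v * d for v, d in zip([1.0, 0.0, 0.0], dense)))  # nan
```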
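The Fortran point can also be made concrete. A hedged illustration (my numbers, not an example from the standard): with X = 0 and Y + Z large enough to overflow, the factored form manufactures a NaN that the form as written never produces.

```python
import math

x, y, z = 0.0, 1e308, 1e308

unfactored = x * y + x * z   # 0*1e308 + 0*1e308 = 0.0
factored = x * (y + z)       # y + z overflows to Inf, and 0*Inf = NaN

print(unfactored)            # 0.0
print(math.isnan(factored))  # True - the rewrite changed the result
```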

Well, I'm finally somewhere I can write an e-mail from easily, though I
haven't had the time to reread the thread. The issue I considered in the
past was operations like "speye(n) .^ 0" or "speye(n) ./ 0", where the
0 .^ 0 and 0 ./ 0 terms should create nonzero or NaN values in the
resulting matrix. I hadn't considered the "speye(n) OP NaN" case, but I
didn't and still don't see why it should be treated differently when the
NaN is pre-existing rather than created by the binary operation;
otherwise the NaN values won't propagate and in fact will very likely
disappear. You seem to think, and have convinced John, that disappearing
NaNs are a good thing, so I'll try to reread the thread and respond
again later on.
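The two semantics under discussion for a scalar operation on a sparse matrix can be sketched as follows (Python, with names of my own invention; neither function is Octave's implementation). Operating on stored entries only keeps the result sparse but lets a NaN scalar "disappear" against the implicit zeros; the full elementwise definition propagates it everywhere and densifies the matrix.

```python
import math

def scalar_mul_stored(stored, scalar):
    """Multiply only the stored entries, as sparse codes typically do."""
    return {ij: scalar * v for ij, v in stored.items()}

def scalar_mul_full(stored, scalar, n):
    """Multiply every entry, following the elementwise definition."""
    return {(i, j): scalar * stored.get((i, j), 0.0)
            for i in range(n) for j in range(n)}

speye2 = {(0, 0): 1.0, (1, 1): 1.0}    # speye(2) as {(i, j): value}

sparse_result = scalar_mul_stored(speye2, math.nan)
print((0, 1) in sparse_result)          # False - implicit zeros never meet the NaN

full_result = scalar_mul_full(speye2, math.nan, 2)
print(math.isnan(full_result[(0, 1)]))  # True - NaN * 0 is NaN, so the matrix fills up
```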

D.

-- 
View this message in context: 
http://www.nabble.com/Re%3A-About-diagonal-matrices-tp22124562p22276151.html
Sent from the Octave - Maintainers mailing list archive at Nabble.com.


