octave-maintainers

Re: About diagonal matrices


From: Daniel J Sebald
Subject: Re: About diagonal matrices
Date: Mon, 02 Mar 2009 02:43:57 -0600
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.3) Gecko/20041020

Jaroslav Hajek wrote:

> Yes. That's the simple case. Now please describe how to multiply two
> sparse matrices.
> Or, to get you something simple, how you multiply a full * sparse matrix.

These are all judgment calls, which is why I said "no matter how one defines", but I'd
probably rule out mixing the two and think of it as either the full matrix being
converted to sparse or vice versa.  Say S is a sparse matrix and F is a full
matrix.  I'm gathering that you feel

sparse(F) * S

and

F * full(S)

don't necessarily yield the same result.  I can see that being useful.



> If they don't, then probably the whole effort is just pointless - what
> problem does it solve?

I guess I'm not following anymore either.  Didn't the thread start with John
noticing that the same matrix, represented as sparse or as full, produced two
different results?  Then you argued they should be left as is?


> But let's say it's OK. I was just asking how
> do you imagine the implementation of sparse (with your general default
> sparse value) * full matrix product.

The full matrix is converted to sparse first, as I explained last time.


> Or sparse * sparse, with possibly two different default values.

The sparse value for matrix M1 is, say, 0.  The sparse value for matrix M2 is Inf.  The
sparse value for M1.*M2 is 0 * Inf = NaN, just as with regular Octave math; the sparse
set is S1 & S2, where & denotes intersection.  For M1*M2 the sparse values don't
contribute to the inner product sums, similar to the current behavior with zeros.
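A toy sketch of the elementwise rule described above (plain Python dicts standing in for Octave's sparse storage; the class and its names are hypothetical, not Octave's actual implementation):

```python
import math

class DefaultSparse:
    """Toy sparse matrix: explicit entries in a dict; every other
    element equals a single default ("sparse") value -- 0, Inf, NaN, ..."""
    def __init__(self, shape, default=0.0, entries=None):
        self.shape = shape
        self.default = default
        self.entries = dict(entries or {})  # (i, j) -> value

    def get(self, i, j):
        return self.entries.get((i, j), self.default)

    def elementwise_mul(self, other):
        # The default of the product is the product of the defaults,
        # following ordinary IEEE rules (0 * Inf = NaN).
        new_default = self.default * other.default
        # An explicit entry is needed wherever either operand stores one,
        # so the sparse set of the result is the intersection of the
        # operands' sparse sets.
        idx = set(self.entries) | set(other.entries)
        new_entries = {ij: self.get(*ij) * other.get(*ij) for ij in idx}
        return DefaultSparse(self.shape, new_default, new_entries)

# M1: mostly 0, M2: mostly Inf
M1 = DefaultSparse((2, 2), default=0.0, entries={(0, 0): 2.0})
M2 = DefaultSparse((2, 2), default=math.inf, entries={(0, 0): 3.0, (1, 1): 5.0})
P = M1.elementwise_mul(M2)
print(P.default)      # nan  (0 * Inf)
print(P.get(0, 0))    # 6.0  (explicit in both)
print(P.get(1, 1))    # 0.0  (M1's default 0 times M2's explicit 5)
print(P.get(0, 1))    # nan  (default everywhere both are sparse)
```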


> Try to
> lay it down on a paper, I think you'll see it's really not that easy.
> Especially if you want to catch all the possible cases with NaNs and
> Infs.

I'm trying to resolve what seemed like a conflict with the discussion about NaNs
"disappearing", as David described it.
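The "disappearing" NaN can be reproduced in a few lines (illustrative Python, not Octave's actual code): a dense inner product forms every term, so a stored zero times Inf yields NaN; a sparse-style product that skips non-stored (zero) entries never forms that term at all.

```python
import math

x = [0.0, 2.0]        # one structurally "zero" entry
y = [math.inf, 3.0]

# Dense rule: every product is formed, and 0 * Inf = NaN poisons the sum.
dense = sum(a * b for a, b in zip(x, y))

# Sparse rule: terms where x has a non-stored (zero) entry are skipped.
sparse = sum(a * b for a, b in zip(x, y) if a != 0.0)

print(dense)   # nan
print(sparse)  # 6.0 -- the NaN has "disappeared"
```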


> Yes, in this simple case, it looks nice, but it really gets more
> complicated when you dive in. Besides, none of our external software
> (UMFPACK) can work with such matrices.

It's more about presenting the results to the user than a complete rewrite of
the software.  The underlying set of NNZ (NNS) indices would remain the same.


>>>>>> The value of the sparse matrix is when it comes time to use it in
>>>>>> operations where a full matrix would consume the CPU.  So it does
>>>>>> make sense to keep track of the sparse set.



>>>>> Huh? I don't understand.

>>>> What is the purpose of sparse sets?  To represent large matrices that are
>>>> mostly zero (or some default value, I argue).  Also, when solving matrix
>>>> operations or systems of equations that are sparse, much of the
>>>> computation can be discarded.



>>> OK, that's the general idea of sparse matrices. How does it relate to
>>> our specific problem?

>> Just trying to agree with you, i.e., we should keep track of the sparse
>> elements of the matrix.


> But we do. Or, to prevent more confusion, what are "sparse elements"?

If it is a big matrix with a bunch of zeros, the zeros are the sparse elements
of the matrix.  If said matrix is divided by the scalar 0, the bunch of zeros
turns into a bunch of NaNs, and now the NaNs are the sparse elements.
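A small sketch of that division, under the default-value scheme proposed in this thread (hypothetical representation, plain Python): only the stored values and the single default are touched, and the stored index set is unchanged, so nothing blows up to full.

```python
import math

# Toy "default-value" sparse representation: explicit entries plus one
# default value for everything else.
entries = {(0, 0): 4.0, (2, 1): 8.0}   # the non-sparse (stored) elements
default = 0.0                           # the sparse elements

# Python raises on float division by zero, so apply IEEE semantics
# explicitly: 0/0 = NaN, x/0 = +/-Inf.
def divide_by_zero(v):
    return math.nan if v == 0.0 else math.copysign(math.inf, v)

# Dividing the matrix by scalar 0 maps the stored values and the default;
# the index set of stored elements stays exactly the same.
entries = {ij: divide_by_zero(v) for ij, v in entries.items()}
default = divide_by_zero(default)

print(default)            # nan -- the NaNs are now the sparse elements
print(entries[(0, 0)])    # inf
print(sorted(entries))    # [(0, 0), (2, 1)] -- same index set as before
```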

The original comment was in reply to David, and I was reiterating your point
about not wanting the matrix memory to blow up by converting from sparse to
full.  Rather than convert to a full matrix, keep track of the index set (i.e.,
the set of indices where the matrix elements are NNZ, or what I suggested
should be NNS).  We're not all arguing against all the points you've made.

Dan

