Re: large arrays - how to store?


From: Dave Steffen
Subject: Re: large arrays - how to store?
Date: 06 Mar 2005 17:18:00 -0500
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3

Paul Schneider <paulibaer@uboot.com> writes:

> Guy Harrison wrote:
> > Paul Schneider wrote:
> >
> >>Ulrich Lauther wrote:
> >>
> >>>Guy Harrison <swamp-DEL-dog@ntlworld.com> wrote:
> >>>: Paul Schneider wrote:
> >>>
> >>>: > Hello,
> >>>: >
> >>>: > for my program I need a large multidimensional array the
> >>>: > dimensions of which I know at compile time. Specifically it is of
> >>>: > size 500 x 5000 x 10 x 6. I am looking for guidance of how to
> >>>: > treat this array on a linux system with 2 Gigs of memory.
> >>>
> >>> are all the 500x5000x10x6 memory locations actually used? Or is
[...]

> Thanks for your answer. My problem is definitely not even remotely
> sparse. It's all dense vector/matrix matrix/matrix stuff.

I presume the problem is that you don't actually have enough memory to
store these beasties?

As has been pointed out, dynamically allocating this memory can
sometimes avoid problems.  Some operating systems have (effectively)
hard constraints on how much static data a process is allowed to have,
but you have access to much more memory allocated dynamically.  This
is worth a shot.
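Something along these lines (just a sketch, not tested against your
code; the class and the names are made up for illustration) keeps the
dimensions as compile-time constants but puts the actual storage on
the heap:

  #include <cstddef>
  #include <vector>

  // Dimensions from your post: 500 x 5000 x 10 x 6 (still compile-time
  // constants, so the index arithmetic can be folded by the compiler).
  const std::size_t D0 = 500, D1 = 5000, D2 = 10, D3 = 6;

  // One flat heap block instead of a static 4-D array.  At 8 bytes per
  // double that's ~1.2 GB, which only just fits in 2 GB of RAM.
  class Array4D {
      std::vector<double> data_;
  public:
      Array4D() : data_(D0 * D1 * D2 * D3) {}

      double& operator()(std::size_t i, std::size_t j,
                         std::size_t k, std::size_t l)
      { return data_[((i * D1 + j) * D2 + k) * D3 + l]; }
  };

  int main()
  {
      Array4D a;                 // the one dynamic allocation
      a(499, 4999, 9, 5) = 1.0;  // used exactly like a static array
      return 0;
  }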

> I don't know much about compilers and operating systems. Maybe this
> is why it seems so unintuitive to me to throw away the advantage of
> knowing everything at compile time. In all the experiments I performed,
> static allocation itself and working with statically allocated
> structures were much faster than the dynamic counterpart.

Yeah.  But, assuming you wouldn't be asking this question if static
memory were working for you, you may not have any choice.

Keep in mind that there's a difference between the performance hit
when you try to get the memory (which may be considerable) and the
performance hit you take when using it (vs. static memory).
Dynamically allocating gigs of RAM may take a while, but hopefully you
only do it once, and if you're working on matrices of this size, it'll
probably be a small % of your runtime.
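If you want to see how the two costs compare on your box, a
quick-and-dirty timing sketch (Linux-specific, using gettimeofday();
the names and sizes are made up) might look like:

  #include <cstddef>
  #include <cstdio>
  #include <sys/time.h>
  #include <vector>

  static double seconds()
  {
      struct timeval tv;
      gettimeofday(&tv, 0);
      return tv.tv_sec + tv.tv_usec * 1e-6;
  }

  int main()
  {
      const std::size_t n = 500UL * 5000 * 10 * 6;  // 150 million doubles

      double t0 = seconds();
      std::vector<double> v(n);     // the one-off allocation (~1.2 GB)
      double t1 = seconds();

      double sum = 0.0;             // one pass over the data -- the kind
      for (std::size_t i = 0; i < n; ++i)  // of work you repeat many times
          sum += v[i];
      double t2 = seconds();

      std::printf("alloc %.3f s, one pass %.3f s (sum = %g)\n",
                  t1 - t0, t2 - t1, sum);
      return 0;
  }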

All of that having been said...  it sounds like you're talking about
some truly BIG matrices here.  It's more than likely that you'll have
other problems.  What sorts of things are you going to do to them?
Keep in mind that a sufficiently large matrix will have eigenvalues
that (in "real" life) differ from each other by less than machine
precision; the practical effect is that the associated eigenvectors
get assigned to their eigenvalues more or less "randomly", or that
unpleasant (and perhaps not "physically real") degeneracies show up.

One possibility: most really big matrices are, if not sparse, at least
rather symmetrical.  There are techniques for decomposing operations
on them into operations on sub-matrices; that is, you break your
original problem into chunks and then operate on them.  If this is
possible for your problem, you might be able to get some real space
and time savings out of looking for such a decomposition.  (I've never
done that sort of thing myself, but I have heard of such things.  All
my work in this area was with matrices that were big, but just barely
small enough to deal with directly.)
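For what it's worth, the textbook version of that idea applied to a
matrix product looks roughly like this (again, not something I've
done in anger; the names, sizes, and block size are made up, and real
libraries do this much better):

  #include <cstddef>
  #include <vector>

  // C += A * B for N x N row-major matrices, processed in BS x BS tiles
  // so that only a few small blocks need to be "hot" (in cache, or even
  // in RAM if the full matrices live on disk) at any one time.
  void blocked_multiply(const std::vector<double>& A,
                        const std::vector<double>& B,
                        std::vector<double>& C,
                        std::size_t N, std::size_t BS)
  {
      for (std::size_t ii = 0; ii < N; ii += BS)
      for (std::size_t kk = 0; kk < N; kk += BS)
      for (std::size_t jj = 0; jj < N; jj += BS)
          for (std::size_t i = ii; i < ii + BS && i < N; ++i)
              for (std::size_t k = kk; k < kk + BS && k < N; ++k) {
                  const double a = A[i * N + k];
                  for (std::size_t j = jj; j < jj + BS && j < N; ++j)
                      C[i * N + j] += a * B[k * N + j];
              }
  }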

I suspect that if you're really doing linear algebra with matrices
that push the limits of your hardware, you're going to have to get a
little fancy. :-)

--------------------------------------------------------------------------
Dr. Dave Steffen, Ph.D.         Wave after wave will flow with the tide
Raytheon IIS                      And bury the world as it does
tkd-@physics@comcast.net         Tide after tide will flow and recede
(take out the extra @ to reply)   Leaving life to go on as it was...
                                                - Peart / RUSH
"The reason that our people suffer in this way.... 
is that our ancestors failed to rule wisely".   -General Choi, Hong Hi
