octave-maintainers

From: Quentin Spencer
Subject: Re: Changing octave to exploit multi-core hardware
Date: Tue, 25 Mar 2008 09:34:17 -0500
User-agent: Thunderbird 2.0.0.12 (Windows/20080213)

Jaroslav Hajek wrote:
On Tue, Mar 25, 2008 at 1:14 PM, Aaron Birenboim
<address@hidden> wrote:
John W. Eaton wrote:
 > On 24-Mar-2008, Leonardo Ecco wrote:
 >
 > | Have you guys considered modifying the GNU Octave code in order to
 > | take advantage of multi-core hardware?
 >
 > In what way?
 >
 > | I'm a graduate student in computer science, and I'm currently taking
 > | a course on multi-core systems - our final project is to select an
 > | open source project and change it to exploit parallelism. I'd like to
 > | know if there is someone already working on this issue. If not, I
 > | plan to start next week.
 >
 > At what level do you want to do parallel operations?  Matrix
 > operations?  Loops in the scripting language?  Something else?
 >
 I'm not totally familiar with which matrix libraries have parallelism,
 but many do.  Matlab seems to license some sort of Intel library, and
 I'd be surprised if GSL, LAPACK, etc. do not have parallel extensions
 by now.

 Not the greatest learning experience, but certainly the most bang for
 the buck would be simply enabling parallel versions of these libraries.
 Attacking the general "slowness" of interpretation is far trickier.


A number of vendor-tuned BLAS libraries are multithreaded, usually
along with key LAPACK subroutines (triangular and QR factorizations,
etc.); the reference LAPACK itself is not, AFAIK.
There is no problem using these multithreaded BLAS libraries (ACML,
Intel MKL, GotoBLAS) with Octave to exploit multiple cores when
working with large matrices.
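
As a rough illustration (a standalone C++ sketch, not part of Octave):
the program below times one large DGEMM through whatever BLAS it is
linked against.  Rebuilt against a multithreaded BLAS, the same code
uses however many cores OMP_NUM_THREADS (or a vendor equivalent such
as GOTO_NUM_THREADS or MKL_NUM_THREADS) allows:

  // Build, e.g.:  g++ gemm_time.cc -o gemm_time -lblas
  // Compare:  OMP_NUM_THREADS=1 ./gemm_time  vs.  OMP_NUM_THREADS=4 ./gemm_time
  #include <cstdio>
  #include <vector>
  #include <sys/time.h>

  extern "C" void
  dgemm_ (const char *transa, const char *transb, const int *m,
          const int *n, const int *k, const double *alpha,
          const double *a, const int *lda, const double *b,
          const int *ldb, const double *beta, double *c, const int *ldc);

  static double
  wall_time ()
  {
    timeval t;
    gettimeofday (&t, 0);
    return t.tv_sec + 1e-6 * t.tv_usec;
  }

  int
  main ()
  {
    const int n = 2000;
    std::vector<double> a (n * n, 1.0), b (n * n, 2.0), c (n * n, 0.0);
    const double one = 1.0, zero = 0.0;

    double t0 = wall_time ();
    dgemm_ ("N", "N", &n, &n, &n, &one, &a[0], &n, &b[0], &n,
            &zero, &c[0], &n);
    std::printf ("n = %d, dgemm: %.2f s\n", n, wall_time () - t0);
    return 0;
  }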

Octave does not depend on GSL (and AFAIK there is no parallelism in
GSL either).


 I sometimes invert 6000x6000 matrices, so the simple addition of a parallel
 matrix inverter would be a big help.
 Big principal-components problems are also common for me.


SVDs are trickier to multithread - usually it's the orthogonal
reduction (xGEBRD) that can be parallelized.  I think recent ACML
ships a parallelized xGEBRD, but I'm not sure.


 Matlab seems to have parallel versions of some simple operations,
 like matrix multiplication, perhaps transpose, and point-by-point
 operations.


I guess Matlab simply uses some multithreaded BLAS library to speed up
key level-3 operations, like matrix-matrix multiply or matrix solve.
You can do the same with Octave.

Operations consisting primarily of memory traffic (like transpose) are
normally not worth parallelizing: they are limited by memory bandwidth
rather than by the CPU, so extra cores buy little.
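
To see why (another standalone sketch, not Octave code): a transpose
loop does one load and one store per element and essentially no
arithmetic, so even a perfectly parallelized version saturates memory
bandwidth long before it saturates the cores:

  // Hypothetical OpenMP transpose of an n-by-n column-major matrix.
  // Each iteration is one load + one store and no arithmetic, so the
  // loop is bandwidth-bound and extra threads typically help little.
  // Build with, e.g.:  g++ -fopenmp transpose.cc
  void
  transpose (const double *a, double *b, int n)
  {
  #pragma omp parallel for
    for (int j = 0; j < n; j++)
      for (int i = 0; i < n; i++)
        b[i + j * n] = a[j + i * n];    // b(i,j) = a(j,i)
  }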


 These would give you some experience in actually writing simple
 parallel code, if you are looking for that.

 Frankly, I go to Matlab for the big stuff on my main machine, and I
 install Octave everywhere else (no licensing hassles) for short,
 simple tasks.  I would certainly see a great benefit in adding some
 parallelism to some of the core matrix ops.


If you want to think beyond a quick ad-hoc solution, I think it would
be cool to develop and implement basic OpenMP support for Octave.
That would also require ensuring that all functions are thread-safe,
which I expect to be a hard task, perhaps harder than the language
support itself.
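
To make the thread-safety point concrete, here is a hypothetical
sketch (the names are invented, not Octave's actual internals): a
mapper that records errors through a shared global is fine serially
but races as soon as the loop goes parallel, so every such global
would have to be found and made per-thread or otherwise protected:

  #include <cmath>

  // Hypothetical global error flag, like the state an interpreter keeps.
  static int error_state = 0;

  static double
  checked_log (double x)
  {
    if (x <= 0.0)
      {
        error_state = 1;   // harmless serially, a data race under OpenMP
        return 0.0;
      }
    return std::log (x);
  }

  void
  map_log (const double *x, double *y, int n)
  {
  #pragma omp parallel for
    for (int i = 0; i < n; i++)
      y[i] = checked_log (x[i]);  // threads write error_state unsynchronized
  }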


Hello all,

What about multithreading the mapper functions?  I guess we're relying
on external libraries to perform the underlying computations of many
of the functions, but I would assume (correct me if I'm wrong) that
Octave itself does the looping through the individual elements of an
array.  It seems it would be very straightforward to speed up
something like cos([1:1000]) just by splitting large arrays into
portions and sending them to separate processors.
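
A minimal sketch of the idea (standalone code with invented names, not
how Octave's mapper loop is actually organized):

  #include <cmath>

  // Hypothetical chunked mapper: apply cos element-wise, letting
  // OpenMP hand contiguous chunks of the array to separate cores.
  void
  map_cos (const double *x, double *y, int n)
  {
  #pragma omp parallel for
    for (int i = 0; i < n; i++)
      y[i] = std::cos (x[i]);
  }

For something as small as cos([1:1000]) the threading overhead would
probably swamp the gain, so a real implementation would presumably
fall back to the serial loop below some size threshold.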

Quentin


