octave-maintainers

Re: Possible (summer of code) projects for Octave


From: Daniel Kraft
Subject: Re: Possible (summer of code) projects for Octave
Date: Tue, 04 Jan 2011 19:49:01 +0100
User-agent: Thunderbird 2.0.0.0 (X11/20070425)

Jaroslav Hajek wrote:
On Mon, Jan 3, 2011 at 8:10 PM, Daniel Kraft <address@hidden> wrote:
Hi all,

I'm interested in applying for Google Summer of Code with Octave next year
(provided there is another GSoC and GNU is again accepted as a mentoring
organization, but I guess both will be true) and in general in looking into
contributing to Octave (although I don't know how much spare time I can
spend on it outside of something like GSoC at the moment).


IIRC, mentors from Google itself are also acceptable under some
conditions. If that's still true, I could volunteer as a mentor. I
used to be a very active Octave developer not so long ago :)

I think that's true -- however, I guess that an application for the GNU project (in case it is again accepted) would also do, and you (or someone else from Octave) could mentor it. This is how I worked on GNU Guile two years ago.

If you compiled with (almost) all dependencies, that's already a minor
achievement :)

Well, I compiled with what I need -- and that took several steps, but now I have at least SuiteSparse, QHull and ARPACK working; maybe something else as well, but if so I've simply forgotten about it. :)

Ok, sorry so far for the off-topic description of myself.  What I wanted to
ask is whether there are some ideas for projects to work on that could fit
GSoC, or in general something that is not "fixing bugs" or a lot of
"minor" improvements, but rather a separate "new" thing to work on while
getting started.

My favorite is implementing the OOP versions of the Delaunay triangulation
and interpolation functions. See
http://www.mathworks.ch/help/techdoc/ref/delaunaytriclass.html
http://www.mathworks.ch/help/techdoc/ref/trirepclass.html
http://www.mathworks.ch/help/techdoc/ref/triscatteredinterpclass.html

In contrast to the existing delaunay et al. functions, the OOP
approach is not only fancy, it allows you to encapsulate & reuse more
of the important topological data to make things like lookup & interpolation
way faster.
For instance, if you want to triangulate an area and then look up the
enclosing triangles for a set of points, you'd use delaunay & tsearch
in Octave; the problem is that the latter is sadly inefficient because
it's not able to accept any more information than a plain list of
triangles from the Delaunay triangulation (it can't even assume the
triangulation is Delaunay).
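
For concreteness, the current workflow looks roughly like this (just a
sketch with random test data; note how only the bare triangle list is
passed from delaunay to tsearch):

  % triangulate scattered 2-D points, then locate query points
  x = rand (1000, 1);  y = rand (1000, 1);
  tri = delaunay (x, y);              % plain N-by-3 list of vertex indices
  xi = rand (50, 1);  yi = rand (50, 1);
  idx = tsearch (x, y, tri, xi, yi);  % search gets no extra topology to reuse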

This actually sounds quite interesting! I have not yet worked at all with OOP in Octave (or Matlab), but I guess that would make it even more interesting and beneficial, and it should not be too hard to get into.

I'm just not sure how "important" this would be for Octave itself -- Delaunay triangulation and related functionality seem like a rather specific topic; although there was also a reply that this could help OOP in Octave in general (but at the moment I do not know how).

Are you aware of more cases like this? If so, one could think about a more general project to implement OOP interfaces for several such functions.

Still,
maybe there are currently some ideas or motivation to try something in
that direction?  Otherwise, I also read something about attempts to make use
of multiple cores -- this sounds interesting, too; are there ongoing projects
towards (or interest in) that?  Or some other things that would be useful for
Octave and the user community?


I think what Octave needs in order to be considered seriously in the future is
better support for high-level parallelism. Low-level cheap stuff like
parallelizing sum() is laughable, IMHO - it might be good for MW to boost
their PR image, but in real life it will win you almost nothing.
D. Bateman and I actually experimented with code to parallelize some
element-wise matrix operations, and the results were, IMHO, poor.
Parallelizing costly built-ins like pinv() or fft() is a good idea,
but that's usually best handled by the libs that implement them (BLAS,
FFTW). Some mappers (e.g. erfinv) may also fall into this area, and
that should not be hard to do.

So it seems that here we either have cases where parallelization is not really useful (like the element-wise operators or sum), or cases that are -- or should be -- handled by an external library (FFTW, BLAS, LAPACK). For the mappers, there seem to already be some existing code and efforts.

The main problem with this is that the interpreter is not remotely
thread-safe, which makes high-level multithreading impossible. One
option is to use multiprocessing instead - look at general/parcellfun
or openmpi on OctaveForge. The other option, of course, is to make
Octave thread-safe. That would be a big enough project without any
doubt; I'm just not sure it's academic enough.
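
For illustration, the multiprocessing route with parcellfun looks roughly
like this (a sketch, assuming the OctaveForge general package that provides
parcellfun is installed; it forks separate Octave processes, so the
non-thread-safe interpreter is never shared between workers):

  pkg load general
  args = num2cell (1:16);
  % evaluate an (expensive) function on 4 worker processes in parallel
  res = parcellfun (4, @(k) sum (svd (rand (200 + k))), args);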

Hm, this also sounds like something really useful. Maybe this could help in general with parallelization projects (whatever they are in the end) or for user-side, high-level parallelization (like, for instance, OpenMP-style declarations for parallel loops and the like -- although I'm not sure how useful this would be in Octave).

And in order to make it more "interesting" or "academic" (in case this is really needed), one could just add some efforts building on this towards parallelization ;)

Cheers,
Daniel

--
http://www.pro-vegan.info/
--
Done:  Arc-Bar-Cav-Kni-Ran-Rog-Sam-Tou-Val-Wiz
To go: Hea-Mon-Pri

