
Re: Octave MPITB for Open-MPI


From: Javier Fernández
Subject: Re: Octave MPITB for Open-MPI
Date: Thu, 19 Jul 2007 20:48:38 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.2) Gecko/20040804 Netscape/7.2 (ax)

Riccardo Corradini wrote:

Hi Javier, I successfully compiled your toolbox under the latest Ubuntu Feisty.

I simply modified the last line of mpi.h from the openmpi-dev package to read

#include "openmpi/ompi/mpi/cxx/mpicxx.h"
instead of
#include "ompi/mpi/cxx/mpicxx.h"

I would like to include it in a remaster of the latest ParallelKnoppix so I can test it more properly.

Hi Riccardo

I'm not sure I understand your remarks. I can't change mpi.h, since it comes with Open-MPI, nor openmpi-dev, since I'm not the packager. And I don't quite see why Ubuntu Feisty users should have to hand-edit openmpi-dev package files. I can't be of any help there, or with Michael's P-KPX. Sorry.

How could I adjust lam-bhost.def for dual-core computers... by repeating the name of the host?
Thanks a lot
Riccardo

Not sure about dual cores, I haven't tried any. On biprocessors the answer is yes, repeat the host name, or use slots. It's probably easier to try it out than to document it, but here it goes anyway:

- man mpirun (1), section "Process slots".
- ompi FAQ, section "running", Q#10 and following
   http://www.open-mpi.org/faq/
   http://www.open-mpi.org/faq/?category=running
   http://www.open-mpi.org/faq/?category=running#mpirun-options

but as said above, you'll spend less time just trying it out. I obtained the output below by running the "Hello.m" demo with the following hostfile:

$ cat hf
h1
h2 slots=2
h1
h3

Notice I deliberately put the slots=2 line in between two default one-slot lines. All nodes h1-h2-h3 are biprocessors.
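
In case the actual demo is not at hand: Hello.m is essentially the MPI hello-world written with MPITB calls. A minimal sketch could look like the lines below; note this is not the file shipped with MPITB, the [info, value] output convention of the MPITB wrappers is my assumption here, and I take the host name from plain Octave uname() rather than from MPI_Get_processor_name.

% Hello.m (sketch, not the demo shipped with MPITB)
MPI_Init;                                        % start the MPI layer
[info, rnk] = MPI_Comm_rank (MPI_COMM_WORLD);    % rank of this Octave copy (return style assumed)
[info, siz] = MPI_Comm_size (MPI_COMM_WORLD);    % total number of copies started by mpirun
u = uname ();                                    % plain Octave call, just to get the host name
printf ("Hello, MPI_COMM_world! I'm rank %d/%d (%s)\n", rnk, siz, u.nodename);
MPI_Finalize;                                    % shut MPI down before Octave exits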

$ mpirun -c 1 -hostfile hf octave -q --eval Hello
Hello, MPI_COMM_world! I'm rank 0/1 (h2)

$ mpirun -c 2 -hostfile hf octave -q --eval Hello
Hello, MPI_COMM_world! I'm rank 0/2 (h2)
Hello, MPI_COMM_world! I'm rank 1/2 (h2)

$ mpirun -c 3 -hostfile hf octave -q --eval Hello
Hello, MPI_COMM_world! I'm rank 2/3 (h1)
Hello, MPI_COMM_world! I'm rank 0/3 (h2)
Hello, MPI_COMM_world! I'm rank 1/3 (h2)

You'll probably have guessed that I'm mpirun-ing from h1, which is why the message from rank 2/3 arrived earlier than those from h2. Only when 3 copies are involved is h1 used, and the 4-copy run uses the other h1 slot:

$ mpirun -c 4 -hostfile hf octave -q --eval Hello
Hello, MPI_COMM_world! I'm rank 2/4 (h1)
Hello, MPI_COMM_world! I'm rank 3/4 (h1)
Hello, MPI_COMM_world! I'm rank 0/4 (h2)
Hello, MPI_COMM_world! I'm rank 1/4 (h2)

$ mpirun -c 5 -hostfile hf octave -q --eval Hello
Hello, MPI_COMM_world! I'm rank 2/5 (h1)
Hello, MPI_COMM_world! I'm rank 4/5 (h3)
Hello, MPI_COMM_world! I'm rank 3/5 (h1)
Hello, MPI_COMM_world! I'm rank 0/5 (h2)
Hello, MPI_COMM_world! I'm rank 1/5 (h2)

$ mpirun -c 6 -hostfile hf octave -q --eval Hello
Hello, MPI_COMM_world! I'm rank 2/6 (h1)
Hello, MPI_COMM_world! I'm rank 3/6 (h1)
Hello, MPI_COMM_world! I'm rank 4/6 (h3)
Hello, MPI_COMM_world! I'm rank 0/6 (h3)
Hello, MPI_COMM_world! I'm rank 1/6 (h2)
Hello, MPI_COMM_world! I'm rank 5/6 (h2)

With LAM you didn't want to oversubscribe nodes. With OMPI, you _*really*_ don't want to oversubscribe them. RTFM.
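
And to come back to the lam-bhost.def question: sticking to Open-MPI hostfiles (the kind mpirun reads above; I won't go into LAM boot schemas here), either of these two spellings should give mpirun two slots on a dual-core box. The host names below are made up for the example:

$ cat hf.dual
# either say so explicitly ...
nodeA slots=2
# ... or just list the host once per core
nodeB
nodeB

Keeping the slot count equal to the number of cores is also what keeps you away from the oversubscription case just mentioned.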



Javier Fernández <address@hidden> wrote:

    [...] Test reports are welcome, since this is an initial
    release.

I forgot to ask that test reports be sent to me directly instead of to the whole list. I'm not sure most Octave users are interested in MPITB. Also recall who the contributors to other works are (last slide in...)
http://www.gelato.org/pdf/apr2006/gelato_ICE06apr_octave_krishnamurthy_osc.pdf
as mentioned in another thread
http://www.cae.wisc.edu/pipermail/help-octave/2006-October/001860.html

Hope nobody here gets upset about MPITB. Thanks for your reports, and please send them directly to me :-)

-javier

P.S.: Taking advantage of the fact that we are now talking about MPITB: if some list subscriber remembers this thread
http://www.cae.wisc.edu/pipermail/help-octave/2007-January/002787.html
the new MPITB recommendation to use mpirun was made to honor the admirable patience and perseverance of the OP - she managed to overcome all the problems and get it working!!!
http://atc.ugr.es/%7Ejavier/investigacion/papers/mpitb_octave_papers.html#PHClab


