
Re: octave-mpi questions


From: John W. Eaton
Subject: Re: octave-mpi questions
Date: Tue, 18 Feb 2003 13:13:19 -0600

On 17-Feb-2003, JD Cole <address@hidden> wrote:

| The attached code follows your second idea of not modifying the Octave
| binary at all, and letting the user call MPI functions such as MPI_Init
| and MPI_Finalize.  One problem I found with this approach is that once
| the user calls MPI_Finalize () (analogous to mpi_quit), some MPI
| implementations will not allow MPI_Init (or mpi_startup) to be called
| again, and Octave will exit prematurely, i.e. core-dump, when MPI_Init
| is called a second time.

OK, so it seems that what I was hoping to be able to do may not be
possible (at least with current MPI implementations).

| > I like the idea of having the flexibility of starting and stopping MPI
| > processing on the fly, but I don't know whether that is practical.

| Are you speaking of, say, allowing a control-C to halt processing on 
| other nodes?

No, I was thinking that it might be possible to do the following (a
rough sketch of what the user-visible functions might look like appears
after the list):

 0. Start Octave without mpirun or any other special options.  At this
    point, Octave doesn't know anything about parallel processing,
    same as usual.

 1. Call some sequence of functions to start the MPI system
    (equivalent of lamboot and mpirun to start the other processes)
    and initialize MPI (MPI_Init).  At this point, Octave knows about
    MPI and can send jobs to other processes and collect results.

 2. Do parallel tasks.

 3. Eventually, call some sequence of functions to stop the MPI
    system, terminate the other processes, and return to single
    processor mode.  The MPI code would remain loaded unless all the
    MPI functions are cleared.

 4. Go on with other tasks, possibly restarting the MPI processing
    (steps 1--3) as many times as desired without having to exit
    Octave.

If this is not possible given the current MPI implementations, is it a
fundamental limitation of MPI?

jwe


