
[Chicken-users] Re: Chicken MPI


From: Ivan Raikov
Subject: [Chicken-users] Re: Chicken MPI
Date: Sat, 08 Mar 2008 23:02:50 +0900
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.1 (gnu/linux)

  Well, the mailbox protocol is rather limited; I think you are much
better off learning at least some of MPI's group communication
semantics, because they make a lot of sense to a practitioner of
functional programming.

  The basics of MPI are not very complicated. The general idea is that
the MPI dispatcher creates a number of identical processes that run
instances of your program -- similar to using the fork syscall, only
of course in this case the different processes can be created on
different nodes in a network. Each process gets its own id, and you
have communication primitives for process-to-process messages, and for
group communication (one process to many, and vice versa). So, to use
the example from the MPI egg documentation:


    (MPI:init) ;; initialize the MPI library

    ;; obtain default communicator (contains all processes)
    (define comm-world  (MPI:get-comm-world)) 

    ;; the number of MPI processes created
    (define size        (MPI:comm-size comm-world))

    ;; the rank of the calling process
    (define myrank      (MPI:comm-rank comm-world))
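
To actually get multiple processes, you compile the program and
launch it under the MPI dispatcher -- typically something like
"mpirun -np 4 myprog", though the exact invocation depends on your
MPI installation. Each identical copy then sees a different rank.
A minimal sketch of the rest of such a program (assuming the egg
wraps MPI_Finalize under the name MPI:finalize -- check the egg
documentation):

    ;; each of the identical processes reports who it is
    (print "process " myrank " of " size)

    ;; shut the MPI library down when the program is done
    (MPI:finalize)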

In the first example, after initializing the library, the program
obtains the world communicator -- the object that represents the
group of all MPI processes currently running. Then the program
queries the communicator for the actual number of processes running,
and for its own rank (the rank of the calling process). A lot of MPI
applications follow the master-worker model, where the process with
rank 0 sends messages to the other processes and collects data from
them. So you would typically have code like this:

  
  ;; Barrier: wait here until all processes reach this point
  (MPI:barrier comm-world)

  (if (zero? myrank)
    ;; rank 0 starts the ring: send a message to rank 1,
    ;; then wait for it to come back around
    (let ((data  "aa"))
      (print myrank ": sending " data)
      (MPI:send (string->blob data) 1 0 comm-world)
      (let ((n (MPI:receive MPI:any-source MPI:any-tag comm-world)))
        (print myrank ": received " (blob->string n))))
    ;; processes with non-zero rank wait to receive a message, append
    ;; to it, and pass it on to the next process in the ring
    (let* ((n   (blob->string (MPI:receive MPI:any-source MPI:any-tag
                                           comm-world)))
           (n1  (string-append n "a")))
      (print myrank ": received " n ", resending " n1)
      (MPI:send (string->blob n1) (modulo (+ myrank 1) size) 0 comm-world)))


The call to MPI:barrier is the way to define synchronization points
in MPI. But unlike POSIX threads and other models of concurrency, the
programmer does not have to mess around with mutexes and so on: the
MPI runtime ensures that no process proceeds past the barrier until
all processes in the communicator have reached it. From the
programmer's standpoint, this is quite convenient.
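
As a tiny illustration (a sketch reusing the comm-world and myrank
bindings from above), a single barrier call is enough to keep two
phases of a computation separate:

    ;; phase 1: every process does some work
    (print myrank ": phase 1 done")

    ;; no process starts phase 2 until all have finished phase 1
    (MPI:barrier comm-world)
    (print myrank ": phase 2 starting")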

   Then, the program checks whether it is running as the process with
rank 0, and if so it assumes the "master" role; otherwise it assumes
the "worker" role. The master sends a message to the process with
rank 1, then waits for a response and prints it out. Each worker
waits to receive a message, appends the letter "a" to it, and sends
the result to the process of rank (n+1) mod size, where n is the rank
of the current process -- so the message travels around a ring and
eventually returns to rank 0. This is the simplest mode of MPI
communication. There are many group communication procedures, and
some of them act in a manner similar to map, fold and the other list
combinators that we know and love -- only the "map" can be across
hundreds of nodes in a network, and it is all completely seamless.
So consider looking at the documentation, and I can provide more
sophisticated examples if you are interested.
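
For a taste of what the group operations buy you, here is a
hand-rolled "gather" built from nothing but the MPI:send and
MPI:receive primitives used above; gather-at-root is a hypothetical
helper name, and the egg's built-in collectives do this kind of thing
in a single call:

    ;; collect one string from every rank at rank 0
    ;; (sketch: assumes the comm-world, size and myrank bindings above)
    (define (gather-at-root data)
      (if (zero? myrank)
          ;; root: receive one message from each of the other ranks;
          ;; with MPI:any-source the arrival order is nondeterministic
          (let loop ((i 1) (acc (list data)))
            (if (< i size)
                (loop (+ i 1)
                      (cons (blob->string
                             (MPI:receive MPI:any-source MPI:any-tag
                                          comm-world))
                            acc))
                (reverse acc)))
          ;; everyone else: just send their value to rank 0
          (begin
            (MPI:send (string->blob data) 0 0 comm-world)
            #f)))

    ;; every rank contributes a string; only rank 0 gets the full list
    (let ((result (gather-at-root (number->string myrank))))
      (if result (print "gathered: " result)))

A real collective also lets the library pick a smarter communication
pattern than this linear receive loop.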


    -Ivan



"Graham Fawcett" <address@hidden> writes:

>>  > On reflection, I'd much rather see a really efficient IPC system like
>>  > this, rather than having a native-threaded Chicken.
>>
>>   Try the MPI egg! (Yet another shameless plug, I know).
>
> Not a shameless plug, IMO -- I had never really looked at MPI. Thanks!
> For newbies like me, I'd love to see an MPI-for-dummies egg that
> overlaid it with a simpler, familiar protocol (like mailbox).



