Re: [Swarm-Modelling] ABMs on Graphical Processor Units


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] ABMs on Graphical Processor Units
Date: Fri, 28 Dec 2007 21:07:45 -0700
User-agent: Thunderbird 2.0.0.9 (X11/20071115)

Russell Standish wrote:
> Then it doesn't matter if MPI is kept in local store - only the
> bits of the program heavily used will matter, and typically MPI is at
> its most successful when communication costs are much less than
> computation costs.
Sorry if I sound negative about the idea of using MPI. I like it; I just think it may be more of an investment than you expect. It would be useful to get a definitive yes or no on the question, as there are lots of potential MPI users who I'm sure would like that portability path. As far as I know, from asking around online and at the lab, no one has seriously made the effort.

I spent several days getting OpenMPI built with automatic overlays. I can say overlays do work, once you build the whole runtime and toolchain from the ground up with the new tools, but the packaging for OpenMPI (libtool and all that) was complicated enough that I gave up; it was pretty obvious that a libspe2 port of my MPI code was the easier path.
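
For concreteness, here is a minimal sketch of what the PPU side of such a port typically looks like, assuming the standard libspe2 API. The SPU image name (worker_spu) is made up, error checking is mostly omitted, and each SPE context plays roughly the role an MPI rank would:

/* PPU side: launch one thread per SPE context (sketch only). */
#include <libspe2.h>
#include <pthread.h>
#include <stdio.h>

extern spe_program_handle_t worker_spu;   /* hypothetical image embedded via embedspu */

static void *run_spe(void *arg)
{
    spe_context_ptr_t ctx = (spe_context_ptr_t)arg;
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_stop_info_t stop;

    /* Blocks until the SPU program exits or stops. */
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, &stop) < 0)
        perror("spe_context_run");
    return NULL;
}

int main(void)
{
    enum { NSPE = 6 };                     /* e.g. 6 usable SPEs on a PS3 */
    spe_context_ptr_t ctx[NSPE];
    pthread_t tid[NSPE];

    for (int i = 0; i < NSPE; i++) {
        ctx[i] = spe_context_create(0, NULL);
        spe_program_load(ctx[i], &worker_spu);
        pthread_create(&tid[i], NULL, run_spe, ctx[i]);
    }
    for (int i = 0; i < NSPE; i++) {
        pthread_join(tid[i], NULL);
        spe_context_destroy(ctx[i]);
    }
    return 0;
}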

For a lot of applications, I expect there's bandwidth to waste on the EIB (the ring that connects the SPUs), but at some point things have got to break down. I would also expect some pretty casual use of heap space in modern MPI implementations, where 32k is nothing. Perhaps one could get a sense of the footprint of popular MPI implementations by running oprofile or the like to see how much of that space is actually hot. To me it looked easier to adapt codes to libspe2 than to adapt MPI to overlays.
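
As a sketch of what adapting a code to libspe2 looks like on the SPU side, here is a double-buffered DMA loop, assuming the usual spu_mfcio.h intrinsics: it streams an array from main memory through two small local-store buffers, so the working set stays a few KB even though the data does not fit in the 256 KB local store. The chunk size and the per-element work are placeholders, nbytes is assumed to be a multiple of CHUNK, and writing results back with mfc_put is omitted.

#include <spu_mfcio.h>

#define CHUNK 4096                          /* bytes per DMA transfer */

static volatile float buf[2][CHUNK / sizeof(float)]
        __attribute__((aligned(128)));

void process(unsigned long long ea, unsigned int nbytes)
{
    int cur = 0;

    /* Prefetch the first chunk on tag 0 (ea assumed suitably aligned). */
    mfc_get(buf[0], ea, CHUNK, 0, 0, 0);

    for (unsigned int off = 0; off < nbytes; off += CHUNK) {
        int nxt = cur ^ 1;

        /* Start the next transfer while we work on the current buffer. */
        if (off + CHUNK < nbytes)
            mfc_get(buf[nxt], ea + off + CHUNK, CHUNK, nxt, 0, 0);

        /* Wait only for the current buffer's tag group. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        for (unsigned int i = 0; i < CHUNK / sizeof(float); i++)
            buf[cur][i] *= 2.0f;            /* stand-in for real per-agent work */

        cur = nxt;
    }
}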

Btw, there is a shared memory mapping between the PPUs and the SPUs, but deliberate message passing is much more efficient.
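
To illustrate the two options, here is a PPU-side sketch assuming the standard libspe2 calls; the local-store offset and the flag value are made up for illustration. The PPU can either poke a value into the SPE's mapped local store or send an explicit 32-bit mailbox message that the SPU picks up with spu_read_in_mbox():

#include <libspe2.h>
#include <string.h>

void notify_spe(spe_context_ptr_t ctx)
{
    /* Option 1: write directly into the mapped local store.  Convenient,
     * but each access crosses the EIB as an uncached MMIO-style transaction. */
    char *ls = (char *)spe_ls_area_get(ctx);
    unsigned int flag = 1;
    memcpy(ls + 0x1000, &flag, sizeof(flag));   /* 0x1000 is an arbitrary offset */

    /* Option 2: an explicit mailbox message; blocks until there is room
     * in the SPE's inbound mailbox queue. */
    unsigned int msg = 1;
    spe_in_mbox_write(ctx, &msg, 1, SPE_MBOX_ALL_BLOCKING);
}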

IBM has good documentation in the SDK on all of this.
Marcus

