From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] SWARM on Clusters
Date: Thu, 29 Jan 2004 10:44:46 -0700
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.6) Gecko/20040113
Sunwoo Park wrote:

> I just joined this mailing list. I have a simple question regarding SWARM software. Is there any SWARM implementation that runs on cluster machines (or MPP machines) based on the message-passing paradigm (e.g., MPI)?

Swarm has fine-grained knowledge of concurrency during a simulation. When multiple agents do something at the same timestep, Swarm knows this. But that's just a little atom of the whole simulation execution sequence. What this means is that in order for Swarm to exploit this knowledge on a parallel computer, it must be able to efficiently get that atom of computation to a physical processor. A cluster, like a Beowulf arrangement of PCs, can't do this, because the communication expense of getting the atom to the processor is not amortized by the computation done. An SMP or NUMA system can, because the communication overhead of getting the computation to the processor is small. So if you have a two-, four-, or eight-way Opteron or Sun system, or a big NUMA system like an SGI Altix, the interconnect between processors could reasonably slurp up these atoms and there would be a scalability win.
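To make the amortization point concrete, here is a back-of-envelope cost model. The latency and compute figures are illustrative assumptions, not measurements of any real interconnect or of Swarm itself: a fine-grained agent step only pays off remotely when its compute time dwarfs the per-dispatch communication cost.

```python
def remote_speedup(compute_us, latency_us, n_atoms):
    """Idealized speedup from farming out n_atoms units of work,
    each costing compute_us of computation, when every dispatch
    also incurs latency_us of communication overhead.

    Serial baseline: all atoms run back to back on one processor.
    Parallel case: atoms overlap perfectly, but each dispatch still
    pays its communication latency once on the critical path.
    """
    serial = compute_us * n_atoms
    parallel = compute_us + latency_us
    return serial / parallel

# Hypothetical fine-grained agent step of ~10 microseconds of work,
# spread across 8 atoms per timestep:
cluster = remote_speedup(10, 100, 8)  # ~100 us round trip (commodity Ethernet, assumed)
smp = remote_speedup(10, 1, 8)        # ~1 us handoff (shared memory, assumed)

print(f"cluster: {cluster:.1f}x, SMP: {smp:.1f}x")
```

With these assumed numbers the cluster comes out slower than serial execution (speedup below 1x), while the shared-memory system gets close to the 8x ideal, which is the scalability win described above.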
I think it would be hard to make a message-passing system scale very well based on an architecture like Swarm. You'd need a low-latency interconnect, maybe Myrinet.
In any case, Swarm implements neither message passing nor multithreading. A multithreaded Swarm would be feasible, but it would assume a shared-memory system like the ones I mentioned above.