Re: simulating large models
From: Paul Johnson
Subject: Re: simulating large models
Date: Tue, 18 Jun 2002 18:38:10 -0500
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.0) Gecko/20020606
I waited for the dust to clear on this thread because I was eager to see
what answers would emerge.
I don't think the number of agents will cause any unexpected scalability
problems if you write the model in Objective-C. In C, you allocate
memory and free it explicitly, so you are not dependent on a garbage
collector to do the right thing.
In Java, I personally encounter several scalability problems that I
have not been able to understand. The size of the JVM gets huge, and
many of the tricks that I expect to help make no difference. For
example, converting instance variables (IVARs) that are common among a
class of agents into static class variables has no effect. Converting
doubles to single-precision floats, changing ints to shorts, and so
forth yields no noticeable advantage for me.
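As a concrete illustration of the static-variable trick described above, here is a minimal sketch. The `Agent` class and its field names are hypothetical, not Swarm code; the point is simply hoisting values shared by every agent out of per-instance storage.

```java
// Hypothetical sketch: fields whose value is identical across all agents of
// a class can be hoisted into a single static slot instead of being stored
// once per instance. As noted above, in practice this often yields little
// measurable savings, because per-object JVM overhead tends to dominate a
// few shared primitives.
public class Agent {
    // Before: each of N agents carried its own copy.
    //   double worldWidth; double worldHeight;
    // After: one copy shared by the whole class.
    static double worldWidth = 100.0;
    static double worldHeight = 100.0;

    // Genuinely per-agent state stays in instance fields.
    double x, y;

    Agent(double x, double y) {
        this.x = x;
        this.y = y;
    }
}
```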
I don't think these scaling problems I have with Java are due to
Swarm, but rather to Java itself and my relative lack of incentive to
learn my way around in it. Over the last year and a half, Marcus Daniels
has posted some memos about how to deal with some Swarm-related Java
memory problems, and users indicate they work pretty well. One problem was that
the Java garbage collector never could figure out when to drop things,
and there were a couple of other issues that, frankly, flew past my ears
and never caused me to look further. I have a collection of those notes
somewhere.
Now, in terms of scheduling agents and the Swarm activity library, I
think you will see huge differences in performance depending on how you
structure your simulation. If you have dynamic scheduling, I see some
major performance differences across theoretically equivalent designs.
It is BY FAR faster to just write a loop over all the agents and tell
them to do something at each time step than it is to employ
createActionForEach or createFActionForEach or whatever, especially if
you want randomized order of traversal. Going at it the "loop way" does
not, however, interleave actions of diverse sets of agents in a
meaningful way, and if you need that kind of thing, you will see a
performance hit that will frustrate you a lot. I have worked on that
question some and talked about it at Swarmfest in the context of
alternative approaches for dynamic scheduling. There are occasions
where you really do need the hierarchical dynamic schedules, and
especially the use of randomization in them carries a pretty big
performance hit.
http://lark.cc.ukans.edu/~pauljohn/ResearchPapers/Presentations/Swarmfest02/DynamicScheduling/
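The "loop way" described above can be sketched roughly as follows. This is an illustrative Java sketch, not Swarm's actual activity library: `SteppableAgent`, `LoopScheduler`, and `stepAll` are hypothetical names, and `Collections.shuffle` stands in for Swarm's randomized traversal order.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of the "loop way": shuffle the agent list once per
// time step, then tell every agent to act with a plain loop, rather than
// going through a scheduler abstraction such as Swarm's createActionForEach.
interface SteppableAgent {
    void step();
}

class LoopScheduler {
    private final List<SteppableAgent> agents = new ArrayList<>();
    private final Random rng = new Random(42); // fixed seed for repeatability

    void add(SteppableAgent a) {
        agents.add(a);
    }

    // One time step: randomize traversal order, then loop over all agents.
    void stepAll() {
        Collections.shuffle(agents, rng);
        for (SteppableAgent a : agents) {
            a.step();
        }
    }
}
```

Note what this sketch cannot do, matching the caveat above: every agent's action runs to completion before the next agent's begins, so actions of diverse agent sets are not interleaved within a time step the way a hierarchical schedule interleaves them.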
That reminds me: I made a promise to develop a couple of methods for
collections to speed up the use of randomized collection indexes, and I
will do it. I am also trying to understand some elements of the design
of the Activity library to see if there is a way I can make the thing I
called decentralized dynamic scheduling go faster.
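One plausible shape for the randomized-collection-index idea mentioned above is a shuffled array of positions, so the collection itself never has to be reordered. This is my own hedged sketch, not the promised Swarm methods; the name `makeRandomizedIndex` is hypothetical.

```java
// Hypothetical sketch: build a randomized index into a collection of size n
// using a Fisher-Yates shuffle. Iterating idx[0], idx[1], ... visits every
// position exactly once in random order, without moving the elements
// themselves, which can be cheaper than reshuffling the collection.
class RandomizedIndex {
    static int[] makeRandomizedIndex(int n, java.util.Random rng) {
        int[] idx = new int[n];
        for (int i = 0; i < n; i++) {
            idx[i] = i;
        }
        // Fisher-Yates shuffle: swap each slot with a random earlier slot.
        for (int i = n - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            int tmp = idx[i];
            idx[i] = idx[j];
            idx[j] = tmp;
        }
        return idx;
    }
}
```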
Juan A. Rodriguez wrote:
Hi there,
Does anyone have experience with simulating large models with Swarm?
Is Swarm prepared to distribute simulations across several
processors/machines?
We're thinking of large market models which may include hundreds (or even
thousands) of customers (agents) and
we foresee scalability problems.
Perhaps we should use some high performance simulator, but we're not sure
about it.
Any hint or suggestion will be welcome.
Thanks.
Juan A. Rodríguez-Aguilar, PhD
Senior Researcher
--
Paul E. Johnson email: address@hidden
Dept. of Political Science http://lark.cc.ku.edu/~pauljohn
University of Kansas Office: (785) 864-9086
Lawrence, Kansas 66045 FAX: (785) 864-5700
==================================
Swarm-Support is for discussion of the technical details of the day
to day usage of Swarm. For list administration needs (esp.
[un]subscribing), please send a message to <address@hidden>
with "help" in the body of the message.