[Swarm Modelling] Re: The "Art" of Modeling
From: Darren Schreiber
Subject: [Swarm Modelling] Re: The "Art" of Modeling
Date: Sat, 15 Feb 2003 16:58:33 -0800
Pardon my cross-post, but I think this is a really important topic and
wanted to encourage a few different groups of folks to chime in on the
Repast list.
I have been giving a lot of thought to the problem of modeling and hope
to teach a class on it in the nearish future (first I get a job and
finish my Ph.D.). My ambition is to teach a course on Modeling that
looks at Formal Modeling (game theory type of stuff), Statistical
Modeling, and Computational Modeling. I think that the distinctions
among these forms of modeling are increasingly breaking down in the
social sciences. Evolutionary game theory can be hard to distinguish
from agent-based modeling, MCMC seems to have more in common with
agent-based modeling than ordinary least squares, and agent-based
models process/generate a lot of statistical data and often have
non-cooperative games at their core.
Lars-Erik Cederman taught me the incredible value of the Keep It Simple
Stupid (KISS) principle as he trained me in agent-based modeling. And,
I have tried to live by that in my model design. One of my favorite
quotes is from jurist Oliver Wendell Holmes: "I don't care at all for the
simplicity on this side of complexity, but I'd give my right arm for
the simplicity on the far side." What we are usually looking for is
the far-side simplicity (not to be confused with Gary Larson's
implementation in comics.)
Breaking a phenomenon into the smallest pieces possible has tremendous
advantages. Parsimony is obvious. If we can explain a lot with a
little, we have a great model. I like to call such models "high
leverage." The prisoner's dilemma is fabulous because this little
story can explain so many phenomena in society and in nature. A
parsimonious model may only explain some portion of the phenomena of
interest, but my experience is that in the process of cutting out
everything not absolutely essential, what remains is essential in the
sense that it is the essence of the problem, and thus important for
many other related problems.
The simplest model has the virtue of being doable. With models so
finely divided that the first step is simple, we avoid the perennial
problem of procrastination. With a problem really effectively
dissected into its smallest components, we have very few excuses for
not taking the first step, or the second, etc. Pretty soon
you will find you've accomplished something interesting. And, oddly
enough, that interesting stuff might happen earlier than you expected.
I have had a few occasions where an adequate explanation emerged
with much less machinery than I'd imagined it was going to take.
Building from a ridiculously simple model means that you have a greater
chance of finding a ridiculously simple explanation for the complex
phenomena. It also gives you and your audience a sense of the
accumulation of knowledge - e.g. "we need this bit to explain A, and this
other bit has to get added to explain B."
Another great advantage of the KISS approach is debugging. Rarely do
models work in the first instance. How often do our regressions make
sense on the first pass? How often do our programs compile on the
first "make"? We can more easily evaluate the model to make sure it is
doing what we want when we can see it in little pieces. With only one
variable in our statistical analysis, we can easily tell if everything
is coded correctly.
Finally (although I am sure I am leaving out some other important
advantages), simple models are interpretable. Political scientist
Chris Achen argues for "A Rule of Three" (TOWARD A NEW POLITICAL
METHODOLOGY: Microfoundations and ART, Annu. Rev. Polit. Sci. 2002.
5:423-45) so that we generate models that we can actually wrap our
minds around and understand. When the parameter space of a model is
hyperdimensional, it becomes extremely difficult to make any sense out
of it. Steve Bankes at RAND has done some interesting work showing
that some seemingly intractable debates can actually be reduced into
just a few relevant questions. He uses a great program to just define
the scope of policy debates and map out where the real contentions are.
A high-dimensional model may seem to fit the data nearly perfectly,
but be so fragile that even small measurement errors would break its
conclusions.
With the lessons on simplicity firmly in mind, I attended a talk by a
weather scholar at UCLA. He described the hundreds of differential
equations in his program and how dramatic the improvements over former
attempts have been. This made me incredibly nervous. Hundreds of
differential equations seemed to lead right into the problems of
atheoretic uninterpretability that Achen warns about. In response, our
weather expert said "our aim with this model is to save people's lives
and get them out of the way of floods and disaster, not to 'understand'
tornadoes."
This clarified for me the distinction between simulations and models.
I think of simulations as programs that endeavor to predict outcomes
with great fidelity, while models help us to faithfully understand
processes and outcomes. Most projects will want some balance between
the goals of modeling and simulation. How much understanding do we
need? How much predictive power do we need? I think that a good
modeling process looks back and forth from one goal to the other
because advances in one area facilitate advances in the other. Jane
Azevedo has written a useful book on modeling using the analogy of
maps, "Mapping Reality: An Evolutionary Realist Methodology for the
Natural and Social Sciences" (SUNY Series in the Philosophy of the
Social Sciences). A geological map, a topographical map, a census map, a
subway map, a flight map, and a sketch on a napkin to get you to my
house from the restaurant are all maps for different purposes. They,
like models, should be evaluated by standards relevant to their
objectives. Bill McKelvey at UCLA's Anderson School has done some
great thinking on a "Model Centered Science," arguing for model
centeredness as a good epistemological foundation for science in the
current age.
A final thought is the importance of ambitions. While I always break
my model into really tiny pieces (version 0.00.01 usually creates ten
agents and has them report their ID number), I also want to dream big.
I imagine the coolest version of the project I can think of. What
would it look like? What would it do? How would it work? I then
"backward plan" from that ambition. What would the slightly less
ambitious version accomplish? I walk back until I get to a version
0.00.01 that I could write and evaluate in 5 minutes. This way, I
can minimize the kludges as I try to make my model more and more
sophisticated from version 0.00.01 to version 1.0. Because I
have big ambitions, I will almost always choose the most general way of
coding something. I avoid constants like the plague. If I were writing
a model with a prisoner's dilemma at the core, I would parameterize it
so that I could easily transform it into another game by just changing
the payoff structure. I would also make all the agents have their own
payoff matrices so that I could change to heterogeneous payoffs once I
understood how homogeneity worked. Thus, in a later version, I might
think that we are playing a battle of the sexes while you think we are
in a prisoner's dilemma.
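A minimal sketch of both ideas in Python (my own hypothetical code;
the post names no implementation): a "version 0.00.01" that just
creates ten agents who report their IDs, where each agent already
carries its own copy of the payoff matrix, so the game can later be
swapped, or made heterogeneous across agents, by changing payoffs
alone rather than restructuring the code.

```python
# Payoffs for a 2x2 game, indexed [my_move][their_move] -> my payoff.
# Move 0 = cooperate, move 1 = defect.
PRISONERS_DILEMMA = [[3, 0],
                     [5, 1]]

class Agent:
    def __init__(self, agent_id, payoffs):
        self.agent_id = agent_id
        # Each agent owns its own copy of the matrix, so heterogeneous
        # payoffs are a one-line change in a later version.
        self.payoffs = [row[:] for row in payoffs]

    def report(self):
        # The entirety of version 0.00.01's behavior.
        return f"Agent {self.agent_id}"

    def payoff(self, my_move, their_move):
        return self.payoffs[my_move][their_move]

# Version 0.00.01: create ten agents and have them report their IDs.
agents = [Agent(i, PRISONERS_DILEMMA) for i in range(10)]
for a in agents:
    print(a.report())
```

Swapping a different 2x2 matrix into one agent's `payoffs` (say, a
coordination game) turns the same machinery into the "I think we're
playing battle of the sexes while you think we're in a prisoner's
dilemma" scenario, with no other code changes.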
That's my more than 2 cents on the art of modeling. I would really
appreciate feedback (to me or to the repast list) since this is an area
I want to further explore.
Darren
On Saturday, February 15, 2003, at 03:20 PM, Steven Phelan wrote:
Jason is quite correct. Sterman has written a very nice book. However,
note the movement, even in Sterman, away from solving actual problems
to using modeling for 'thought experiments'. This is no coincidence.
See my paper "A note on the correspondence between complexity and
systems theory" http://www.utdallas.edu/~sphelan/Papers/systems.html
for an extended explanation of this.
==============================
Steven E. Phelan, PhD
School of Management
University of Texas at Dallas
==============================