Subject: Re: models in the wild (was Re: [Swarm-Modelling] Re: [Swarm-Support] Repast vs. Swarm)
From: Rick Riolo
Date: Thu, 29 Jan 2004 21:38:03 -0500 (EST)
Just one more note:
I call what i do "modeling" as opposed to "simulation"
in order to take the emphasis off the criterion of mimicking outputs.
I.e., it reminds me that i am simplifying both the
mechanisms *and* the outputs.
I like Holland's description of modeling as
being like drawing political cartoons --- we emphasize,
even to the point of exaggeration, certain aspects
of a system, de-emphasizing and ignoring other aspects,
in order to better understand *one part* of what is "fundamental"
about the system under study.
For me, "simulation" connotes systems like "flight simulators",
where the more details are matched, the better.
For the kinds of models i am interested in,
that is not the right criterion.
Along those lines, I point my students to the Borges story,
"On Exactitude in Science", in Dream Tigers i think.
And i guess i do put a fair amount of emphasis on
the criterion that the mechanisms in my model
should be plausible, if simple, versions of the
mechanisms i think are going on in the system being modeled.
So it's not "no holds barred" to mimic some desired
dynamics or structures.
- r
--
Rick Riolo address@hidden
Center for the Study of Complex Systems (CSCS)
4477 Randall Lab
University of Michigan Ann Arbor MI 48109-1120
Phone: 734 763 3323 Fax: 734 763 9267
http://cscs.umich.edu/~rlr
On Thu, 29 Jan 2004 address@hidden wrote:
> Date: Thu, 29 Jan 2004 17:55:05 -0800
> From: address@hidden
> Reply-To: address@hidden
> To: address@hidden
> Subject: Re: models in the wild (was Re: [Swarm-Modelling] Re:
> [Swarm-Support] Repast vs. Swarm)
>
> Rick Riolo writes:
> > But I think *another* reasonable direction to go is to treat ABM's
> > as *models*, just as diffyQs or other formalisms (or informalisms)
> > are models, i.e., a simplified version of some "real" system,
> > which (more or less) generates some of the same dynamics, patterns, etc.,
> > of interest in the real system, which needs "verification" and
> > "validation" as appropriate given the modeling goals, and given
> > the possibilities (given the complexity of the system being modeled...
> > see Steve Bankes on "Deep Uncertainty").
> >
> > A thought provoking article in the ALife literature that
> > covers many of these issues is:
> >
> > Di Paolo, Noble and Bullock. Simulation Models as Opaque Thought
> > Experiments. Artificial Life VII: The Seventh International
> > Conference on the Simulation and Synthesis of Living Systems, Reed
> > College, Portland, Oregon, USA, 1-6 August, 2000.
> >
> > But as i say, maybe i'm missing glen's point.
>
> Yep. You're getting the point. (And the Di Paolo reference is right
> on the money.) You may just disagree or think I'm being
> extremist... [grin] which, to some extent, I am.
>
> The difference between continuing down the simulation path and just
> doing a "proper job of it" versus abandoning the concept of simulation
> entirely amounts only to one thing: methods.
>
> Because the teleology of simulation is "to mimic", there are no holds
> barred on getting a model to behave like the referent. You can use
> diffeqs, expert systems, the Morlocks behind the refrigerator, etc.
> And this is all considered OK because your goal is to mimic. Then,
> once you've mimicked it successfully, you go about doing things like
> sensitivity and robustness analysis, run it live to see if you can
> make useful predictions off it (that save you money/time), etc.
>
> It's the no holds barred aspect of simulation that prevents it from
> leading to pedigreed methods that work. (The same is true of software
> development -- witness that even _today_ we have pompous people
> running around inventing new "paradigms" like aspects, stories, etc.)
> You end up, at the end of the day, saying "Oh sure, Bob added a few
> fudge-factors here and there; but, it verifies in all the right
> places!" And then, unless you get the source code and, possibly, the
> machine upon which the code ran, the right compiler, the right
> parameter settings, the right input, etc., 10 years from now, nobody
> will have any clue what you did to make it work.
>
> What we need is to back off and treat the things we've created as if
> we didn't create them. (The crux of Di Paolo's paper doesn't do this.
> It relies fundamentally on "a plausible mechanism" for the patterns of
> interest.)
>
> If you think about something like a "chair", "hammer", or a "beaker",
> the creator is completely separated from the user. I can use a
> speaker for a chair, a monkey wrench for a hammer, and a French press
> for a beaker. When we think "simulation", we force the inscription of
> the purpose. When we think of a computational device, we can think in
> terms of simply "what it does" and not "what it was meant to do".
> This might lead to more reusable systems in the end. I believe it's
> the reason the unixen are still around. "Do one thing well" is
> another way to say what I'm getting at... you could elaborate on that
> mantra and say "Do one thing well and don't spend any time thinking
> about how those idiot end-users will abuse your program."
>
> Now, for your basic point, which I read as "Sure, you could do that
> and maybe make progress; but, you could also go down the current
> simulation path and make progress, too." Well, I'd be insane to argue
> with that. [grin] But, my point is not to denigrate simulation as a
> practice. My point is to highlight another approach that might help.