
Re: A whiff of reality...


From: Darren Schreiber
Subject: Re: A whiff of reality...
Date: Tue, 2 May 2000 18:26:00 -0700

This is like the old statistics anecdote about increased lemonade sales and
increased drownings.  Once we have a model, we want to be sure the program
implements what we intended to model.  Then, we want to check the output
and assumptions of the program/model against empirical data as much as
possible.  At this point, we haven't proven anything, but as in statistics,
we have rejected the null hypothesis -- i.e., we can say that if our inputs
are correct, and if our model represents the interaction of the relevant
forces, then we can have some expectations about the outputs.  Model
misspecification and data mismeasurement are still serious problems even in
statistics.
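
To make "checking that the program implements the model" concrete, here is
a minimal sketch.  (Python is just my choice here; the random-walk model,
function names, and tolerance are my own illustration, not anything from
Swarm or this thread.)  It verifies a toy simulation against a result we
know in closed form:

    import random

    def random_walk(steps, rng):
        """Unbiased 1-D random walk: the 'program' under test."""
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))
        return x

    def test_mean_squared_displacement(steps=100, trials=20000, seed=42):
        """Verification: for an unbiased walk, E[x^2] after t steps is t.

        Agreement with that closed-form value is evidence that the code
        implements the model we intended -- not proof that the model is
        right about the world; that is the separate, empirical step.
        """
        rng = random.Random(seed)
        msd = sum(random_walk(steps, rng) ** 2
                  for _ in range(trials)) / trials
        # Allow a few percent of Monte Carlo error around the analytic value.
        assert abs(msd - steps) / steps < 0.05, (msd, steps)

    test_mean_squared_displacement()

Passing a test like this only addresses the first step; checking the model
against empirical data is the separate, harder problem.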

In statistics, the bright-line rule of a 5% significance level (debate the
merits of this later) still means that one in twenty of our rejections of
the null hypothesis could be based on pure random chance.  So we don't have
proof, since we don't have a closed-form solution.
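
That "one in twenty" is easy to see by simulation.  A purely illustrative
sketch (the sample size, seed, and 1.96 cutoff are my own choices): run many
experiments in which the null hypothesis is true by construction, and a 5%
test still rejects it about one time in twenty.

    import random

    def false_positive_rate(experiments=10000, n=50, z_crit=1.96, seed=1):
        """How often a two-sided 5% z-test rejects a TRUE null.

        Every sample is pure noise with mean exactly 0, so every
        rejection is a false positive.
        """
        rng = random.Random(seed)
        rejections = 0
        for _ in range(experiments):
            mean = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
            # Sigma is known to be 1, so the z statistic is mean * sqrt(n).
            if abs(mean) * n ** 0.5 > z_crit:
                rejections += 1
        return rejections / experiments

    print(false_positive_rate())  # prints roughly 0.05: one in twenty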

But a good model is robust.  It can explain the phenomena that we set out
to explain.  And it can explain new data as well.  A model that can deeply
explore a phenomenon, broadly encompass a range of phenomena, or both, is
highly valuable.  This is true in math, literature, law, philosophy, hard
science, and social science.  Shakespeare, the Psalmists, and Lao-Tsu wrote
robust models of the human experience that have been highly valued across
time, space, and culture.

This returns me to the KISS principle (Keep It Simple Stupid).  If our
models are based on more fundamental elements and processes, they will be
more likely to produce the depth and breadth of understanding that make
"good" models.

Consider a "good" law.  Attempts to give a formal proof of the value of a
law will simply fail.  An excellent law will be based upon fundamental
principles and appeal to a sense of justice in a variety of factual or even
cultural contexts.

In domains where the concept of proof is sensible (i.e., closed systems),
we should benefit from the utility of the proof.  In other domains, we need
to borrow from the strength of the hermeneutic processes developed in the
context of literature, theology, and law.  Each of these, like model
building, is an interpretive exercise.

Understanding that we are engaged in an interpretive exercise rather than
proof does not (despite the arguments of relativists) condemn us to a
non-critical perspective.  The convention of a 95% confidence interval is
still a convention and not a proof.

Modeling will benefit from similar, easy-to-interpret conventions, and from
an understanding that they are conventions.

        Darren


>Doug Donalson wrote:
>>
>> Here is a reference I ran across when writing the final chapter of my
>> dissertation.
>>
>> Rykiel, E.J., 1996. Testing ecological models: the meaning of validation.
>> Ecol. Model. 90, 229-244.
>>
>> Although it is about "validation", at least it's a whack at applying a
>> structure to one of these problems.  I have had several (actually, many more
>> than several) arguments with my advisor and other committee members about
>> including "verification" procedures, at least in a paper appendix.  The
>> general feeling is that this would be like experimentalists including
>> details of instrument calibration procedures.  (Ecologists make Joe Friday
>> look like a lightweight with "Just the facts, ma'am".  Nothing but that
>> which is essential for the development of the idea in the paper should be
>> included, and model testing, at this point, is considered a distraction.)  I
>> personally don't agree, at least until we have verification methods as well
>> delineated as those for experimental instruments.  (Of course, in all
>> cases, you still have to apply the methods.)
>>
>Just to add my tuppence-ha'penny's worth (English for two cents), there is
>an interesting paper about validation and verification by Naomi Oreskes and
>others in Science, vol. 263 (1994), pp. 641-646. They argue that except in a
>closed system, validation of models is not possible, since you are committing
>the logical fallacy of affirming the consequent:
>
>Assume: If any model is valid, then it will predict the data
>Assume: My model predicts the data
>Conclude: My model is valid
>
>Gary
>
>--
>
>Macaulay Land Use Research Institute, Craigiebuckler, Aberdeen. AB15 8QH
>Tel: +44 (0) 1224 318611               Email: address@hidden


_____________________________________________

                 Darren Schreiber
                  Attorney at Law
                 Graduate Student
             Political Science, UCLA
                address@hidden
        http://www.bol.ucla.edu/~dschreib


                  ==================================
   Swarm-Modelling is for discussion of Simulation and Modelling techniques
   esp. using Swarm.  For list administration needs (esp. [un]subscribing),
   please send a message to <address@hidden> with "help" in the
   body of the message.
                  ==================================

