swarm-support

Re: Robustness Check


From: Benedikt Stefansson
Subject: Re: Robustness Check
Date: Thu, 08 Jul 1999 20:34:14 +0200

Alessandra Cassar wrote:

> I need to find an efficient way to check that my results (for a
> computational model using Swarm) are robust.

<snip>

> There must be an easier way to do it. Is there anyone that has already
> done this kind of robustness check?

I second Jan's suggestion: rather than storing the random seeds in a
file, use an 'ExperimentSwarm'/'ModelSwarm' approach a la the
swarm-bug-tutorial, with a single random seed for the random number
generator in the 'ExperimentSwarm', which then feeds seeds to each
'ModelSwarm'. (I can send you a beta version of an 'ExperimentSwarm'
class that reads 'sweep control' files and controls the execution of
ModelSwarms; it is somewhat more sophisticated than the approach in the
tutorial.)
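The master-seed idea above can be sketched as follows. This is an
illustrative Python stand-in, not Swarm code: `run_model`, the sweep
parameters, and the seed range are all hypothetical, but the structure
mirrors the ExperimentSwarm/ModelSwarm split, with one recorded master
seed reproducing the whole experiment.

```python
import random

MASTER_SEED = 42          # the only number you need to record

def run_model(seed, params):
    """Stand-in for one ModelSwarm execution (illustrative only)."""
    rng = random.Random(seed)          # each run gets its own generator
    return sum(rng.random() for _ in range(100)) * params["a"]

def run_experiment(master_seed, sweep):
    """Stand-in for the ExperimentSwarm: one generator feeds all runs."""
    master = random.Random(master_seed)
    results = []
    for params in sweep:
        model_seed = master.randrange(2**31)   # hand a fresh seed down
        results.append((model_seed, params, run_model(model_seed, params)))
    return results

sweep = [{"a": 0.1}, {"a": 0.2}]
first  = run_experiment(MASTER_SEED, sweep)
second = run_experiment(MASTER_SEED, sweep)
assert first == second   # same master seed => identical experiment
```

The point is that no seed file is needed: any individual run can be
regenerated by replaying the master generator.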

The bottom line is that you will need a separate tool to analyze the
data. Excel would be my last choice. Marcus suggested Awk, which works.
I heartily recommend learning Perl for this kind of thing. I have no
experience with HDF5 and R yet; unfortunately, the HDF5 features in
Swarm are obscure (to me) in the absence of documentation. (I know, I
know, RTF-HDF5-M...)

Sometimes it is easiest just to go the hardcoded way, i.e. in this case
decide which bits you want to store from each run (averages, totals,
etc.) - after all, you can always recreate each run from its random
seed.
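A minimal sketch of that hardcoded route, again in Python for
illustration: store only the seed plus the summary statistics per run
(the file name `summary.csv` and the statistics chosen are assumptions),
trusting the seed to regenerate the full run if ever needed.

```python
import csv
import random

def run_model(seed, n=100):
    """Illustrative run: return just the bits worth keeping."""
    rng = random.Random(seed)
    data = [rng.random() for _ in range(n)]
    return {"seed": seed, "total": sum(data), "average": sum(data) / n}

# one row per run: seed + summaries, nothing else
with open("summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["seed", "total", "average"])
    writer.writeheader()
    for seed in (101, 102, 103):
        writer.writerow(run_model(seed))
```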

However, when you don't want to hardcode the data manipulation into the
Swarm program, my approach is:

(1) Control the multiple executions either from an ExperimentSwarm or
from an external script (e.g. bash, awk, perl).
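Step (1) as an external driver might look like this, here in Python
rather than bash/awk/perl; `model.py` and its `-a`/`-b` flags are
hypothetical placeholders for the real Swarm binary and its options.

```python
import itertools
import subprocess
import sys

a_values = [0.1, 0.2]
b_values = [0.2, 0.3]

# one process per parameter combination
for a, b in itertools.product(a_values, b_values):
    cmd = [sys.executable, "model.py", "-a", str(a), "-b", str(b)]
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)   # uncomment for the real sweep
```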

(2) Save the datafiles for each execution in a separate subdirectory,
using the filesystem as a 'poor man's database' (e.g. directory names
such as 070899-a0.1-b0.2/, 070899-a0.1-b0.3/, etc.). That way you can
browse easily through the data, and store all the pertinent files
(model.setup, data, etc.) as sanity checks.
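The directory-per-run scheme in step (2) is easy to automate; a small
Python sketch, using the naming pattern from the example above (the
DDMMYY date format is my assumption, and `model.setup` here is just a
stand-in for whatever files each run should keep):

```python
import os
import time

def run_dir(base, a, b, date=None):
    """Create (if needed) and return the directory for one run."""
    date = date or time.strftime("%d%m%y")   # DDMMYY, as in the example
    path = os.path.join(base, f"{date}-a{a}-b{b}")
    os.makedirs(path, exist_ok=True)
    return path

base = "sweep_out"
for a, b in [(0.1, 0.2), (0.1, 0.3)]:
    d = run_dir(base, a, b, date="070899")   # fixed date for the demo
    # keep everything needed for a sanity check alongside the data
    with open(os.path.join(d, "model.setup"), "w") as f:
        f.write(f"a={a}\nb={b}\n")

print(sorted(os.listdir(base)))   # ['070899-a0.1-b0.2', '070899-a0.1-b0.3']
```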

(3) Hack a general-purpose Perl script that 'slurps' up data from such
directory structures. Since you are likely to want different data
filters for each experiment, these filters can be stored as separate
Perl scripts (e.g. datafilter.pl); the slurping script then applies the
filter in each subdirectory using Perl's eval() function, i.e.
evaluating the contents of datafilter.pl executes the filter commands in
the current directory.

Regards,
Benedikt
 
PS. For steps (1)-(3) I have very poorly documented Perl solutions which
I can send to interested parties...
-----
Present coordinates: 
Dep. of Economics, Univ. of Trento, Via Inama 1, 38100 Trento, ITALY
Off: +39 0461 882246/267875 Mob: +39 347 0415721 Fax: +39 0461 882222

                  ==================================
   Swarm-Support is for discussion of the technical details of the day
   to day usage of Swarm.  For list administration needs (esp.
   [un]subscribing), please send a message to <address@hidden>
   with "help" in the body of the message.


