
[Vrs-development] Testing


From: Ian Fung
Subject: [Vrs-development] Testing
Date: Tue, 13 Aug 2002 19:31:02 -0400
User-agent: Microsoft-Entourage/10.0.0.1309

VRS developers,

I've been thinking about VRS and designing the RM, and I've realized something: it is very important to have a structured way to test. Since VRS is going to be a long and complicated project consisting of many separate subprojects, we need a uniform way to test each other's code. I will describe what I think we should have for a testing environment.

I envision a testing framework/simulator (whatever you want to call it) that lets us run multiple instances of LDSs. The tester has the option of using the network (a port for each LDS) or not using it. When the tester is not using the network, messages sent to the network will be passed directly to the appropriate LDS node. The tester will encapsulate each instance of an LDS node and monitor the messages passed between them.

Aside from monitoring inter-VRS messages, there should also be a mechanism to log activities happening internally. I'm not thinking about something crazy that monitors the actual execution in memory; instead, the tester will just read the logging information the actual classes produce and display it. Say there is some sort of configuration file, and I make my config.xml. The tester creates 5 LDS nodes, then does some file transfers and installs an EOD, all as specified by config.xml. The execution would then produce logs showing the flow of execution. The trick is to timestamp each log entry and display the entries in the order in which they most likely occurred.

That is the reason this tester should be both a testing framework and a simulator. Network latency and other properties (available bandwidth, node failures, etc.) should be configurable so the system can be tested under different conditions; in this respect it is a simulator. The tester also supports a testing framework where test cases can be created and the results interpreted in a user-friendly way.
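To make the config.xml idea concrete, here is a sketch of what such a file might look like. No schema exists yet, so every element and attribute name below is purely hypothetical:

```xml
<!-- hypothetical schema; names are illustrative only -->
<simulation>
  <!-- mode="in-process" bypasses the network; mode="sockets" uses a port per LDS -->
  <network mode="in-process" latency-ms="50" bandwidth-kbps="256"
           node-failure-rate="0.01"/>
  <nodes count="5" prefix="lds"/>
  <scenario>
    <transfer from="lds1" to="lds3" file="test.dat"/>
    <install-eod on="lds2" package="example.eod"/>
  </scenario>
</simulation>
```

The point is that one file drives the whole run: how many nodes to spawn, what the simulated network looks like, and which operations to perform.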
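As a sketch of the "not using the network" mode and the timestamped-log idea, here is a minimal in-process message bus in Python. All names are hypothetical (nothing like this exists in VRS yet); it delivers messages directly between registered node handlers, stamps each message with a simulated clock advanced by a configurable latency, and can interleave log entries in timestamp order the way the tester would display them:

```python
from dataclasses import dataclass, field


@dataclass(order=True)
class LogEntry:
    """One timestamped log line; ordering compares timestamps only."""
    timestamp: float
    node: str = field(compare=False)
    message: str = field(compare=False)


class InProcessBus:
    """Stands in for the network: delivers messages directly to the
    registered LDS node handler instead of over a socket."""

    def __init__(self, latency=0.0):
        self.nodes = {}          # node name -> handler(src, payload)
        self.latency = latency   # simulated per-message delay (seconds)
        self.clock = 0.0         # simulated time, advanced on each send
        self.log = []

    def register(self, name, handler):
        self.nodes[name] = handler

    def send(self, src, dst, payload):
        # advance simulated time, record the message, then deliver it
        self.clock += self.latency
        self.log.append(LogEntry(self.clock, src, f"{src} -> {dst}: {payload}"))
        self.nodes[dst](src, payload)

    def merged_log(self):
        # interleave entries in timestamp order, as the tester would display them
        return sorted(self.log)


if __name__ == "__main__":
    bus = InProcessBus(latency=0.05)
    bus.register("lds1", lambda src, payload: None)
    bus.register("lds2", lambda src, payload: None)
    bus.send("lds1", "lds2", "GET test.dat")
    for entry in bus.merged_log():
        print(f"[{entry.timestamp:.2f}] {entry.message}")
```

A real version would also collect the internal log files each node produces and merge them into the same timestamp-ordered view, and a "sockets" mode would replace `send` with real network delivery on one port per LDS.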

Well, that's what I'm thinking about. Let me know what you think. Regardless of what you think the tester should do, I believe we need something to ease the pain of distributed testing.

-alias




