Re: [Tsp-devel] dtest: a distributed test framework
From: Eric Noulard
Subject: Re: [Tsp-devel] dtest: a distributed test framework
Date: Wed, 25 Apr 2007 16:29:24 +0200
2007/4/25, Frederik Deweerdt <address@hidden>:
> then you have one (or several) "local" python scripts which
> "only" handle the ssh connection, whose stdin/stdout/stderr are
> directly controlled by the local python scripts.
> Those scripts may barrier/ok locally.
> I agree that the test results could come back through stdout, especially
> if we use TAP, which only defines an output format.
> However, I don't see how you'll get barriers from ssh: we want the
> barriers to be inside the running code.
It depends on what you consider to be:
1) the test scripts
   For the current dtest it is a Python script (exe-ified with cx_Freeze)
   running on the remote target.
   For me it is a Python script running on the local test machine
   which drives the "tested program" running on the remote host.
2) the tested program
   For the current dtest it is either the Python script itself and/or
   a program/library called by the script running on the remote host.
   For me it is a (potentially) binary program running on
   the remote host (say tsp_stdout, tsp_stub_server and the like),
that is:

client                        server
                              do some setup
                              wait for connections
wait_for_barrier(1)           wait_for_barrier(1)
========== Do the actual testing ============
> That's also why XMLRPC is interesting: you can report the test
> results and use the barriers from whatever language you like.
Unless I miss your point, your client and server programs should
be able to make XML-RPC calls (in Python, C, etc.) in order to implement
the barrier.
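To make that concrete, here is a minimal sketch of what such an XML-RPC barrier service could look like in Python. All the names (`BarrierService`, `barrier`, the port, the participant count) are illustrative assumptions, not dtest's actual API; the only real point is that any language with an XML-RPC client library can then call `barrier(id)`:

```python
import threading
from socketserver import ThreadingMixIn
from xmlrpc.server import SimpleXMLRPCServer

class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
    """Handle each request in its own thread, so one blocked
    barrier call does not stall the other participants."""
    pass

class BarrierService:
    # Hypothetical service: barrier(id) blocks until `parties`
    # callers have reached the same barrier id.
    def __init__(self, parties):
        self.parties = parties
        self.barriers = {}
        self.lock = threading.Lock()

    def barrier(self, barrier_id):
        with self.lock:
            b = self.barriers.setdefault(str(barrier_id),
                                         threading.Barrier(self.parties))
        b.wait()  # released once `parties` callers have arrived
        return True

def serve(host="localhost", port=8000, parties=2):
    server = ThreadedXMLRPCServer((host, port), logRequests=False)
    server.register_instance(BarrierService(parties))
    server.serve_forever()
```

A C or Python test program would then simply issue `barrier("1")` through its XML-RPC client before and after its setup steps. Note the threading mix-in: a plain single-threaded XML-RPC server would deadlock, since a blocked barrier call would prevent the other participant's call from ever being handled.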
But we could simply decide that barriers are implemented via
stdout/stdin monitoring:
to wait on a barrier, print "barrier wait <barrierID>",
then block until stdin receives "barrier ack <barrierID>".
The stdout/stdin handling may be done through the ssh link.
In fact I only borrow your idea of an extended TAP with
   barrier wait <id>
   barrier ack <id>
statements.
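On the remote-script side, that stdout/stdin protocol could look like the following sketch. The function name and the explicit stream parameters are illustrative (real code would just use `sys.stdin`/`sys.stdout` carried over the ssh link):

```python
import sys

def wait_for_barrier(barrier_id, inp=sys.stdin, out=sys.stdout):
    # Announce the barrier on stdout (extended-TAP style) and block
    # until the controlling local script acknowledges it on stdin.
    out.write("barrier wait %s\n" % barrier_id)
    out.flush()
    for line in inp:
        if line.strip() == "barrier ack %s" % barrier_id:
            return True
    # stdin closed before the ack arrived: treat as failure
    return False
```

The local driver mirrors this: it scans the ssh channel's stdout for "barrier wait" lines and writes the "barrier ack" line back once all participants have announced.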
> The remote target stdout results come from the SSH link.
> The remote target file results should be scp'ed from the target to
> the dtest local host. (this is a weakness of my scheme)
> OK, but we don't really need them if we've got stdout?
Unless your remote test program produces a file
(tsp_ascii_writer) and you want to check the content against
a reference file located on the dtest local host.
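For that case the local script could copy the produced file back and compare it with the reference. A sketch, assuming a plain `scp` fetch; the helper names are hypothetical, not existing dtest code:

```python
import filecmp
import subprocess

def fetch_result(remote_host, remote_path, local_path):
    # Assumed approach: scp the file the tested program produced
    # back to the dtest local host (this is the scheme's weak point).
    subprocess.check_call(
        ["scp", "%s:%s" % (remote_host, remote_path), local_path])

def matches_reference(local_path, reference_path):
    # Byte-for-byte comparison against the local reference file.
    return filecmp.cmp(local_path, reference_path, shallow=False)
```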
> While writing, I was thinking that maybe defining a "typical" TSP test
> could be helpful. Do we agree that the following is typical? In fact,
> what we want dtest to be would depend on what we want to do with it :)
Yes I agree.
client                                 server
TSP_consumer_init && OK(1)             TSP_provider_init && OK(2)
                                       TSP_provider_run && OK(3)
barrier(1)                             barrier(1)
TSP_consumer_connect_url && OK(4)
TSP_consumer_[...] && OK(X)
                                       TSP_datapool_push_next_item && OK(X+1)
barrier(2)                             barrier(2)
Get item, check value && OK(X+2)
barrier(end)                           barrier(end)
This scheme is interesting, but it is not the first one I would
want to do. I would rather do:
server:
   export STRACE_DEBUG=xxx
   launch tsp_stub_server
   OK(1)
   barrier(1)
   scan stdout for tsp_stdout connection
   barrier(2)
   terminate tsp_stub_server
   OK(3)
   barrier(end)

client:
   barrier(1)
   launch tsp_stdout_client -n 10
   scan stdout for ERROR
   OK(2)
   barrier(2)
   barrier(end)
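The "launch ... / scan stdout for ..." steps above can be sketched expect-style with subprocess. The function name is illustrative, the pattern matching is deliberately simple, and a real driver would wrap the command in ssh:

```python
import subprocess
import sys

def run_and_scan(cmd, pattern):
    # Spawn the command (in practice: "ssh host cmd"), read its
    # stdout line by line, and report whether `pattern` appeared.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    found = False
    for line in proc.stdout:
        if pattern in line:
            found = True
    proc.wait()
    return found
```

For the client script above, OK(2) would be reported when scanning the tsp_stdout_client output for "ERROR" finds nothing.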
For implementing this last scheme we don't need to touch the
tsp_stdout_client or tsp_stub_server code; we only have to write the
Python test scripts and the ssh connection handling (this is the big part).
Barriers may be implemented locally, because we need to synchronize
"the test scripts" and not "the tested programs".
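Local synchronization can then be as simple as sharing an in-process barrier between the two driver threads. A minimal sketch (the driver names and the `log` parameter are illustrative; the ssh launches are only indicated in comments):

```python
import threading

# One in-process barrier shared by the two local driver threads;
# the tested programs on the remote hosts are never modified.
barrier_1 = threading.Barrier(2)

def server_driver(log):
    log.append("server setup")   # e.g. launch tsp_stub_server via ssh
    barrier_1.wait()             # barrier(1)
    log.append("server go")

def client_driver(log):
    barrier_1.wait()             # barrier(1): wait for server setup
    log.append("client go")      # e.g. launch tsp_stdout_client
```

The barrier guarantees the client driver cannot proceed before the server driver has finished its setup step, without any cooperation from the tested programs.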
And I agree:
> what we want dtest to be would depend on what we want to do with it :)
I think your scheme is well-suited for a kind of distributed unit
testing, and mine more for distributed integration tests.
I borrowed the idea of ssh connection "monitoring"
from expect (http://expect.nist.gov/).
I think both schemes are valuable; we may implement both,
beginning with what we (respectively) need first.
--
Erk