From: Nicolas Burrus
Subject: [Vampire-public] Comments about recent news & some ideas about the future
Date: Wed, 25 Feb 2004 00:59:27 +0000
User-agent: KMail/1.6

Hi all,

I'm sorry, I've been very busy these days and couldn't find the time to follow 
Vampire news :(
I'm happy to see some activity around Vampire though :)

I think the website is OK, but maybe we should stress a little more that the 
current release is more a snapshot than a real release. The documentation is 
incomplete and also obsolete (it deals with daemon stuff, for example), the 
configuration file syntax may change again before the next release, etc. I 
think we should clearly warn potential end users that they will probably run 
into issues if they try to use this version of Vampire.

I was also thinking about the configuration files, and I find the current 
approach quite problematic. Separating the host part from the tarball-specific 
options was a really good thing, but I think we need closer links between the 
two parts.

I'll take as an example the typical configuration I'd like to have at the LRDE, 
which I think is quite representative. We have access to about 20 machines at 
the lab; most of them run Linux, one runs FreeBSD, and one runs MacOS X. 
Besides, we can access the whole EPITA network, with a lot of other machines. 
EPITA hosts are set up from a dump, which means that almost all hosts are 
identical. This is not the case at the LRDE, where everyone is responsible for 
his own machine. This results in some disparities between the machines: some 
are more up to date than others, some have the xxx software, others don't, etc.

Now let's say we want to check Olena and Transformers. Olena is written in 
C++, but also has optional parts in Python. Transformers uses a set of tools 
named XT. I'd like to share as much information as possible between the 
configuration files for Olena and for Transformers.

The current version of Vampire has the concept of host class, but the problem 
is that the discrimination we want to make between machines depends on the 
package to be tested. For Olena, a criterion could be the version of the C++ 
compiler, which is of no use for Transformers; and vice versa, the version of 
XT does not matter for Olena. For Olena, the Swig and Python versions may also 
be discriminating criteria for creating host classes.

So what's the point? We would like to share the host information between Olena 
and Transformers. We know that we will always access r2d2 on the EPITA network 
by using ssh with the login foo_b. We also know that r2d2 runs NetBSD and is 
quite fast. This is the kind of information we know for sure, and which is 
relevant for all the packages we will check with Vampire. But currently we 
also describe some software-specific features in the host configuration file, 
such as the C++ compiler. If we want to use the same file with Transformers, 
we must also describe the XT version. We end up with the union of all the 
specific needs any package might have, and it becomes almost impossible to 
create host classes, since we don't know how to discriminate hosts.

That's why I think it would be better to have only hardware/network/truly 
tarball-independent information in the host configuration file, with arbitrary 
host classes that simply group the hosts by network properties, for example, 
to factor the configuration file.

It is then up to the tarball configuration file to create virtual host 
classes corresponding to the specific needs of the package to be tested. You 
may think it's a pity to have to deal with host-specific information (software 
versions, etc.) in the tarball configuration file, but I really don't see how 
to avoid it.

I think this approach would result in much more flexibility and reusability. 
Another nice improvement would be the ability to put the same hosts into 
different virtual classes, making it possible to test different sets of 
parameters on the same machines by simply creating several virtual classes 
sharing the same hosts.

Here is an example of the configuration files I was thinking about (this is a 
first draft, I may have introduced silly things; do not hesitate to tell me :-):

hosts.xml
==============================================================================  
<!-- HOST CLASS DEFINITIONS -->
  <!-- DEFAULT HOST -->
  <class name="default">

    <var name="LOGIN" value="mylogin"/>
    <var name="TMPDIR" value="/tmp/vampire_tests"/>

    <connection_command>
      <command priority="20">ssh %(LOGIN)s@%(HOSTNAME)s bash</command>
    </connection_command>

    <upload_command>
      <command priority="20" loc="local">scp %(LOCALFILE)s %(LOGIN)s@%(HOSTNAME)s:%(TMPDIR)s</command>
      <command priority="15" loc="remote">wget %(HTTP_URL)s</command>
    </upload_command>

    <!-- Commands that return the hash of a file, used to validate uploads -->
    <md5cmd_local>md5sum %(LOCALFILE)s | cut -d" " -f1</md5cmd_local>
    <md5cmd_remote>md5sum %(REMOTEFILE)s | cut -d" " -f1</md5cmd_remote>
  </class>

  <!-- EPITA -->
  <class name="epita" extends="default">
    <var name="LOGIN" value="foobar_m"/>
    <var name="TMPDIR" value="/goinfre/vampire_tests"/>
  </class>

  <!-- EPITA netbsd -->
    <class name="epita_netbsd" extends="epita">
      <md5cmd_local>echo "Not supported."</md5cmd_local>
      <md5cmd_remote>echo "Not supported."</md5cmd_remote>
      <var name="host_os" value="NetBSD 1.6"/>
      <var name="host_arch" value="i686"/>
      <!-- Speed is almost the same for all NetBSD hosts -->
      <var name="host_speed" value="50"/>

      <host name="r2d2" />
      <host name="kwisatz" />
      <!-- ... -->
    </class>

  <!-- EPITA alpha, etc. -->

<!-- LRDE -->
<class name="lrde" extends="default">
</class>

<class name="lrde_linux" extends="lrde">
  <var name="host_os" value="Debian GNU/Linux" />
  <var name="host_arch" value="i386"/>
  
  <host name="sandrock">
    <var name="host_speed" value="100" />
    <!-- Sandrock is a bi-processor -->
    <var name="max_processes" value="2" />
  </host>

  <!-- .... -->
</class>
==============================================================================

This file is completely generic with respect to the tarball to be tested.
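To make the inheritance semantics concrete, here is a small Python sketch of how the variables could be resolved along the `extends` chain and substituted into the %(VAR)s command templates (the `%(VAR)s` syntax is Python %-formatting). This is a hypothetical helper using the class values from hosts.xml above, not actual Vampire code:

```python
# Hypothetical sketch of host-class variable resolution (not Vampire code).
# Class data mirrors the hosts.xml example above.
CLASSES = {
    "default": {"extends": None,
                "vars": {"LOGIN": "mylogin", "TMPDIR": "/tmp/vampire_tests"}},
    "epita": {"extends": "default",
              "vars": {"LOGIN": "foobar_m", "TMPDIR": "/goinfre/vampire_tests"}},
    "epita_netbsd": {"extends": "epita",
                     "vars": {"host_os": "NetBSD 1.6", "host_speed": "50"}},
}

def resolve_vars(class_name):
    """Merge variables along the 'extends' chain, child overriding parent."""
    cls = CLASSES[class_name]
    base = resolve_vars(cls["extends"]) if cls["extends"] else {}
    merged = dict(base)
    merged.update(cls["vars"])
    return merged

def expand(template, class_name, **extra):
    """Substitute %(VAR)s placeholders with the resolved class variables."""
    env = resolve_vars(class_name)
    env.update(extra)
    return template % env

cmd = expand("ssh %(LOGIN)s@%(HOSTNAME)s bash", "epita_netbsd", HOSTNAME="r2d2")
print(cmd)  # ssh foobar_m@r2d2 bash
```

Each child class only states what differs from its parent, so a command template written once in "default" works for every host class.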

Now, here is a fictitious example of tarballs.xml:

==============================================================================
  <!-- Default autotools tarball -->
  <class name="autotools">

    <var name="CONFIGURE_FLAGS" value=""/>

    <build_commands>
      <!-- Upload -->
      <command name="upload" type="special" />

      <!-- Command run on the remote host -->
      <command name="gunzip -c %(TARBALLNAME)s | tar xvf -" type="remote">
        <onfail command="print" arg="gunzip failed"/>
      </command>
      <command name="cd %(DIRNAME)s" type="remote">
        <onfail command="print" arg="change dir failed"/>
      </command>
      <command name="./configure %(CONFIGURE_FLAGS)s" type="remote">
        <onfail command="print" arg="Configuration failed"/>
        <onfail command="fetch" arg="config.log"/>
      </command>
      <command name="make" type="remote">
        <onfail command="print" arg="Make failed"/>
      </command>
      <command name="make check" type="remote">
        <onfail command="print" arg="Make check failed"/>
      </command>
      <command name="rm -rf %(TMPDIR)s" type="remote">
        <onfail command="print" arg="rm failed"/>
      </command>

    </build_commands>
  </class>

  <hostlist name="linux_g++3.2">
     <host name="sandrock" />
     <host name="rio" />
  </hostlist>

  <hostlist name="swig1.3">
    <host name="sandrock" />
    <host name="fidji" />
  </hostlist>

  <!-- Other hostlists ... -->

  <!-- Olena -->
  <class name="olena" extends="autotools">
    <set_of_parameters name="default">
      <hostlist>
        <!-- Try default parameters on one machine for each class below. -->
         <include_hostlist name="linux" />
         <include_hostlist name="netbsd" />
         <include_hostlist name="openbsd" />
      </hostlist>
   </set_of_parameters>

    <set_of_parameters name="g++3.2">
      <var name="CXX" value="g++3-2" />
      <hostlist>
        <!-- One machine among the list below is enough. -->
        <union>
          <hostlist name="linux_g++3.2" />
          <host name="a_good_host_to_test_these_parameters" />
          <hostlist name="freebsd_g++3.2" />
        </union>

        <!-- But always try an openbsd box with g++3.2. -->
        <include_hostlist name="openbsd_g++3.2" />
      </hostlist>
    </set_of_parameters>
  
   <set_of_parameters name="g++3.2 and swig 1.3">
     <var name="CXX" value="g++-3.2" />
     <var name="CONFIGURE_FLAGS" action="append" value="--with-swig-stuff" />
    
     <hostlist>
       <!-- Only accepts hosts which are listed in all the classes below. -->
       <include_intersection>
         <hostlist name="g++3.2" />
         <hostlist name="swig1.3" />
       </include_intersection>
     </hostlist>
   </set_of_parameters>
  </class>
==============================================================================
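To illustrate what the <union> and <include_intersection> operators above are meant to compute, here is a hedged Python sketch (hypothetical helper, reusing the hostlist names defined in the example) that resolves them to concrete sets of machines:

```python
# Hypothetical sketch of hostlist set operators (not Vampire code).
# Hostlist contents mirror the tarballs.xml example above.
HOSTLISTS = {
    "linux_g++3.2": {"sandrock", "rio"},
    "swig1.3": {"sandrock", "fidji"},
}

def union(*names):
    """<union>: any host appearing in at least one of the lists."""
    result = set()
    for n in names:
        result |= HOSTLISTS[n]
    return result

def intersection(*names):
    """<include_intersection>: only hosts appearing in all the lists."""
    result = set(HOSTLISTS[names[0]])
    for n in names[1:]:
        result &= HOSTLISTS[n]
    return result

# The "g++3.2 and swig 1.3" set of parameters only accepts hosts
# present in both lists:
print(intersection("linux_g++3.2", "swig1.3"))  # {'sandrock'}
```

The intersection is what makes combined sets of parameters safe: a host is selected only if it satisfies every requirement.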

This is a first draft: the tag names are not well chosen, and which tags to 
implement is subject to discussion. A lot of other interesting features could 
be implemented, but we don't need many at first. I just wanted to share my 
ideas (and store them somewhere :-)) and open a discussion.

The advantage of this kind of configuration scheme is that the host part can 
be distributed easily, and the tarball part is very flexible. With includes, 
we could also share parts of the tarball configuration file, for example 
across all the C++ projects.

Let me know what you think about it. I don't have any really fixed ideas yet, 
nor the time to implement them for now, so it's a good time for discussion :)

Anyway, thanks for your work!



