
Re: [Qemu-devel] gtester questions/issues


From: Michael Roth
Subject: Re: [Qemu-devel] gtester questions/issues
Date: Thu, 09 Jun 2011 18:04:44 -0500
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.2.17) Gecko/20110414 Thunderbird/3.1.10

On 06/09/2011 03:02 PM, Luiz Capitulino wrote:
On Thu, 09 Jun 2011 14:04:37 -0500
Anthony Liguori <address@hidden> wrote:

On 06/09/2011 01:47 PM, Luiz Capitulino wrote:

I've started writing some tests with the glib test framework (used by the qapi
patches) but am facing some issues that don't seem to exist with check (our
current framework).

Of course it's possible that I'm missing something, in which case pointers
are welcome, but I must admit that my first impression wasn't positive.

1. Catching test abortion

By default check runs each test in a separate process, so it's able to
catch any kind of abort (such as an invalid pointer dereference) and it
prints a very developer-friendly message:

   Running suite(s): Memory module test suite
   0%: Checks: 1, Failures: 0, Errors: 1
   check-memory.c:20:E:Memory API:test_read_write_byte_simple:33: (after this point) Received signal 11 (Segmentation fault)
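
A minimal check test producing output along those lines would look roughly like this (a sketch only; the suite and case names come from the output above, everything else is illustrative):

#include <check.h>
#include <stdlib.h>

START_TEST(test_read_write_byte_simple)
{
    int *p = NULL;
    *p = 42;    /* the segfault only kills the per-test child process */
}
END_TEST

int main(void)
{
    Suite *s = suite_create("Memory module test suite");
    TCase *tc = tcase_create("Memory API");
    tcase_add_test(tc, test_read_write_byte_simple);
    suite_add_tcase(s, tc);

    /* check forks a child per test by default, so the crash above is
     * reported as a failure instead of killing the whole runner */
    SRunner *sr = srunner_create(s);
    srunner_run_all(sr, CK_NORMAL);
    int nf = srunner_ntests_failed(sr);
    srunner_free(sr);
    return nf == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}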

The glib suite doesn't seem to do that, at least not by default, so this is
what you get on an invalid pointer:

   ~/src/qmp-unstable/build (qapi-review)/ ./test-visiter2
   /qapi/visitor/input/int: Segmentation fault (core dumped)
   ~/src/qmp-unstable/build (qapi-review)/

Is it possible to get check's functionality somehow? I read about the
g_test_trap_fork() function, but one would have to use it manually in
each test case, which is a no-no.
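
For reference, that manual, per-test use of g_test_trap_fork() would look roughly like the sketch below (the test name and the deliberate crash are illustrative, not from this thread):

#include <glib.h>
#include <stdlib.h>

static void test_might_crash(void)
{
    if (g_test_trap_fork(0, 0)) {
        /* the real test body runs in a forked child */
        int *p = NULL;
        *p = 42;        /* a bug like this only kills the child... */
        exit(0);
    }
    /* ...and the parent turns the crash into an ordinary test failure */
    g_test_trap_assert_passed();
}

int main(int argc, char **argv)
{
    g_test_init(&argc, &argv, NULL);
    g_test_add_func("/example/might-crash", test_might_crash);
    return g_test_run();
}

Every test body that might crash would need that wrapper by hand, which is the objection being raised here.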

I think this is a personal preference thing.  I think having fork() be
optional is great because it makes it easier to use common state for
multiple test cases.

Coupling test cases like this is almost always a bad thing. Test cases have
to be independent of each other so that they can be run and debugged
individually; independence also keeps one failing test from bringing the whole
suite down, which would make the failure report useless.

That said, you can still do this kind of sharing without sacrificing essential
features, for example by disabling the fork mode altogether or by subdividing
test cases.
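
As an aside (a sketch, not from the thread): GLib's per-test fixtures are one way to share setup and teardown code while keeping every test case independent; the names below are illustrative only.

#include <glib.h>

typedef struct {
    char *state;        /* whatever the tests need to share */
} Fixture;

static void fixture_setup(Fixture *f, gconstpointer data)
{
    f->state = g_strdup("fresh state for every test case");
}

static void fixture_teardown(Fixture *f, gconstpointer data)
{
    g_free(f->state);
}

static void test_uses_state(Fixture *f, gconstpointer data)
{
    g_assert(f->state != NULL);
}

int main(int argc, char **argv)
{
    g_test_init(&argc, &argv, NULL);
    /* setup/teardown run once per test case, so tests stay independent */
    g_test_add("/example/uses-state", Fixture, NULL,
               fixture_setup, test_uses_state, fixture_teardown);
    return g_test_run();
}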

Anyway, if there's a non-ultra-cumbersome way to use g_test_trap_fork() (or any
other workaround) to catch segfaults and aborts, then fine. Otherwise I
consider this a blocker, as any code we're going to test in qemu can possibly
crash. This is really a very basic feature for a C unit-test framework to
offer.


You kind of get the desired behavior if you run the test via something like:

gtester -k -o test.xml test-visiter

The gtester utility will log the return code after a test bombs, then restart the binary and skip to the test following the one that bombed. And I'm sure gtester-report can process the resulting test.xml in a manner similar to check... unfortunately it appears to be broken for me on Ubuntu 10.04, so here's the raw XML dump for reference:

<?xml version="1.0"?>
<gtester>
  <testbinary path="./test-visiter">
    <binary file="./test-visiter"/>
    <random-seed>R02S13c4d9e6d35c23e8dd988917863a66b1</random-seed>
    <testcase path="/0.15/visiter_core">
      <duration>0.000346</duration>
      <status exit-status="0" n-forks="0" result="success"/>
    </testcase>
    <testcase path="/0.15/epic_fail">
      <duration>0.000000</duration>
      <status exit-status="-256" n-forks="0" result="failed"/>
    </testcase>
    <duration>0.015056</duration>
  </testbinary>
  <testbinary path="./test-visiter">
    <binary file="./test-visiter"/>
    <random-seed>R02S7acda18e321c5a41ccaee4f524877343</random-seed>
    <testcase path="/0.15/visiter_core" skipped="1"/>
    <testcase path="/0.15/epic_fail" skipped="1"/>
    <testcase path="/0.15/epic_fail2">
      <error>ERROR:/home/mdroth/w/qemu2.git/test-visiter.c:312:test_epic_fail2: assertion failed: (false)</error>
      <duration>0.000000</duration>
      <status exit-status="-256" n-forks="0" result="failed"/>
    </testcase>
    <duration>0.006355</duration>
  </testbinary>
  <testbinary path="./test-visiter">
    <binary file="./test-visiter"/>
    <random-seed>R02S73a208dd8f1b127c23b6a7883df9b78f</random-seed>
    <testcase path="/0.15/visiter_core" skipped="1"/>
    <testcase path="/0.15/epic_fail" skipped="1"/>
    <testcase path="/0.15/epic_fail2" skipped="1"/>
    <testcase path="/0.15/nested_structs">
      <duration>0.000318</duration>
      <status exit-status="0" n-forks="0" result="success"/>
    </testcase>
    <testcase path="/0.15/enums">
      <duration>0.000036</duration>
      <status exit-status="0" n-forks="0" result="success"/>
    </testcase>
    <testcase path="/0.15/nested_enums">
      <duration>0.000059</duration>
      <status exit-status="0" n-forks="0" result="success"/>
    </testcase>
    <duration>0.008079</duration>
  </testbinary>
</gtester>

XML or HTML... it's not pretty, but we can make use of it for automated tests. And for interactive use I don't think it's as much of a problem, since that will mostly be developers making sure they didn't break any tests before committing, or working on failures picked up by automated runs; in those cases it's not a big deal if the unit tests stop at the first abort.


