Re: [Qemu-devel] Improving QMP test coverage


From: Markus Armbruster
Subject: Re: [Qemu-devel] Improving QMP test coverage
Date: Mon, 24 Jul 2017 08:56:50 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)

Cleber Rosa <address@hidden> writes:

> On 07/21/2017 11:33 AM, Stefan Hajnoczi wrote:
>>> Output testing style delegates checking output to diff.  I rather like it
>>> when text output is readily available.  It is when testing QMP.  A
>>> non-trivial example using this style could be useful, as discussing
>>> ideas tends to be more productive when they come with patches.
>> 
>> Yes, I was considering how many of the Python iotests could be rewritten
>> comfortably in shell.  It is nice when the test simply executes commands
>> and the output file shows the entire history of what happened.  Great
>> for debugging.
>> 
>> Stefan
>> 
> I'd like to have a better understanding of the major pain points here.
>
> Although this can be seen as a matter of taste, style preferences and
> even religion, I guess it's safe to say that Python can scale better
> than shell.  The upside of shell based tests is the "automatic" and
> complete logging, right?  Running "bash -x /path/to/test.sh" will give
> much more *useful* information than "python -v /path/to/test.py" will; that's a fact.
>
> I believe this has to do with how *generic* Python code is written, and
> with how the builtin functions and most of the standard library behave.
> When writing code aimed at testing, using testing-oriented libraries
> and tools, one would expect much more useful and readily available
> debug information.
>
> I'm biased, for sure, but that's what you get when you write basic tests
> using the Avocado libraries.  For instance, when using process.run()[1]
> within a test, you can choose to see its command output quite easily
> with a command such as "avocado --show=avocado.test.stdout run test.py".
>
> Using other custom logging channels is also trivial (for instance for
> specific QMP communication)[2][3].
>
> I wonder if such logging capabilities fill in the gap of what you
> describe as "[when the] output file shows the entire history of what
> happened".

Test code language is orthogonal to verification method (with code
vs. with diff).  Except verifying with shell code would be obviously
nuts[*].

The existing iotests written in Python verify with code, and the ones
written in shell verify with diff.  That doesn't mean we have to port
from Python to shell to gain "verify with diff".
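
A Python test can verify with diff just fine: print everything the
test does, and let the harness compare the output against a reference
file.  A rough sketch (the file names and harness wiring are
hypothetical, not how the current iotests are set up):

    import subprocess

    def log(msg):
        # Everything goes to stdout, iotests-style; the harness
        # captures it and diffs it against a reference file.
        print(msg)

    log("=== create image ===")
    result = subprocess.run(
        ["qemu-img", "create", "-f", "qcow2", "disk.img", "64M"],
        stdout=subprocess.PIPE, universal_newlines=True)
    log(result.stdout.strip())

    # Pass/fail is decided outside the test:
    #   python test01.py > test01.out.bad
    #   diff -u test01.out test01.out.bad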

I don't doubt that featureful test frameworks like Avocado provide
adequate tools to debug tests.  The lure of the shell is its perceived
simplicity: everybody knows (well, should know) how to write and debug
simple shell scripts.  Of course, the simplicity evaporates when the
scripts grow beyond "simple".  Scare quotes, because what's simple for
Alice may not be so simple for Bob.

> BTW, I'll defer the discussion of using an external tool to check the
> output and determine test success/failure, because it is IMO a
> complementary topic, and I believe I understand its use cases.

Yes.  Regardless, I want to tell you *now* how tired I am of writing
code to verify test output.  Even of reading it.  Output text is easier to
read than code that tries to verify output text.  Diffs are easier to
read than stack backtraces.
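
To make that concrete: a harness that fails with a unified diff leaves
something you can read at a glance, where a failed assertion leaves a
backtrace.  A small sketch using difflib (the function name and file
layout are made up for illustration):

    import difflib
    import sys

    def check_against_reference(actual, reference_path):
        # Compare test output with a golden file; on mismatch, print
        # a readable diff instead of raising an assertion.
        with open(reference_path) as f:
            expected = f.readlines()
        diff = list(difflib.unified_diff(
            expected, actual.splitlines(True),
            fromfile="expected", tofile="actual"))
        sys.stdout.writelines(diff)
        return not diff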

> [1] -
> http://avocado-framework.readthedocs.io/en/52.0/api/utils/avocado.utils.html#avocado.utils.process.run
> [2] -
> http://avocado-framework.readthedocs.io/en/52.0/WritingTests.html#advanced-logging-capabilities
> [3] - https://www.youtube.com/watch?v=htUbOsh8MZI

[*] Very many nutty things are more obviously nuts in shell.  It's an
advantage of sorts ;)


