
From: Cleber Rosa
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Date: Tue, 25 Jul 2017 12:47:11 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1


On 07/25/2017 12:24 PM, Daniel P. Berrange wrote:
>>
>> OK, let's abstract a bit more.  Let's take this part of your statement:
>>
>>  "if qemu-io in this environment cannot do aio=native"
>>
>> Let's call that a feature check.  Depending on how the *feature check*
>> is written, a negative result may hide a test failure, because it would
>> now be skipped.
>>
>> Suppose that a feature check for "SDL display" is such that you run
>> "qemu -display sdl".  A *feature failure* here (SDL init is broken), or
>> an environment issue (DISPLAY=), will cause a SDL test skip.
> 
> You could have a way to statically define what features any given run
> of the test suite should enable, then report failure if they were not
> detected.
> 

You hit a key point here: statically define(d).  As I said before,
feature statements are a safer basis for tests.  Ad hoc checks, as
suggested by Stefan, are definitely not.

> This is a similar situation to that seen with configure scripts. If invoked
> with no --enable-xxx flags, it will probe for features & enable them if
> found.  This means you can accidentally build without expected features if
> you have a missing -devel package, or a header/library is broken in some
> way. This is why configure prints a summary of which features it actually
> found. It is also why when building binary packages like RPMs, it is common
> to explicitly give --enable-xxx flags for all features you expect to see.
> Automatic enablement is none the less still useful for people in general.
> 
> So if we applied this kind of approach for testing, then any automated
> test systems at least, ought to provide a fixed list of features they
> expect to be present for tests. So if any features accidentally broke,
> the tests would error.
> 
> Regards,
> Daniel
> 

Right.  The key question here seems to be the distance of the "fixed
list of features" from the test itself.  For instance, think of this
workflow/approach:

 1) ./scripts/configured-features-to-feature-list.sh > ~/feature_list
 2) tweak feature_list
 3) rpm -e SDL-devel
 4) ./configure --enable-sdl
 5) make
 6) ./scripts/run-test-suite.sh --only-features=~/feature_list

This would only run tests that are expected to PASS within the given
feature list.  The test runner (run-test-suite.sh) would select only
tests that match the features given.  No SKIPs would be expected as the
outcome of *any test*.
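To make the selection semantics concrete, here is a minimal sketch of
what "--only-features" could mean.  The names (select_tests, the
"requires" field) are my own illustration, not the actual
run-test-suite.sh interface: a test is selected only when every feature
it requires appears in the given list, so a selected test has no reason
to SKIP.

```python
def select_tests(tests, feature_list):
    """Return only the tests whose required features are all available.

    Tests whose requirements are not met are simply not selected, so
    no SKIP can appear in the results: every selected test must PASS.
    """
    return [t for t in tests if t["requires"].issubset(feature_list)]

tests = [
    {"name": "test_sdl_display", "requires": {"sdl"}},
    {"name": "test_aio_native",  "requires": {"linux-aio"}},
    {"name": "test_qcow2",       "requires": set()},
]

# e.g. the contents of ~/feature_list after step 2
features = {"sdl"}

selected = select_tests(tests, features)
print([t["name"] for t in selected])  # ['test_sdl_display', 'test_qcow2']
```

Note that test_aio_native is not skipped, it is never selected; if SDL
then turns out to be broken at run time, test_sdl_display fails loudly
instead of disappearing into a SKIP.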

The other approach is to let the feature check live within the test
itself, and SKIPs would then be OK.  The downside is that an
"--enable-xxx" build with a missing "-devel" package, as you
exemplified, would not show up as an ERROR.

Makes sense?

Regards!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]
