From: Thomas Huth
Subject: Re: [PATCH v2 30/31] tests/functional: skip tests if assets are not available
Date: Fri, 13 Dec 2024 10:10:54 +0100
User-agent: Mozilla Thunderbird

On 12/12/2024 16.02, Daniel P. Berrangé wrote:
On Thu, Dec 12, 2024 at 03:14:53PM +0100, Thomas Huth wrote:
On 11/12/2024 18.26, Daniel P. Berrangé wrote:
If downloading of assets has been disabled, then skip running a
test if the assets it has registered are not already downloaded.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
   tests/functional/qemu_test/asset.py    |  8 +++++++-
   tests/functional/qemu_test/testcase.py | 11 +++++++++++
   2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/tests/functional/qemu_test/asset.py b/tests/functional/qemu_test/asset.py
index c5d3e73c4b..39832b2587 100644
--- a/tests/functional/qemu_test/asset.py
+++ b/tests/functional/qemu_test/asset.py
@@ -65,6 +65,12 @@ def _check(self, cache_file):
       def valid(self):
           return self.cache_file.exists() and self._check(self.cache_file)
+    def fetchable(self):
+        return not os.environ.get("QEMU_TEST_NO_DOWNLOAD", False)
+
+    def available(self):
+        return self.valid() or self.fetchable()
+
       def _wait_for_other_download(self, tmp_cache_file):
           # Another thread already seems to download the asset, so wait until
           # it is done, while also checking the size to see whether it is stuck
@@ -103,7 +109,7 @@ def fetch(self):
                              self.cache_file, self.url)
               return str(self.cache_file)
-        if os.environ.get("QEMU_TEST_NO_DOWNLOAD", False):
+        if not self.fetchable():
               raise Exception("Asset cache is invalid and downloads disabled")
           self.log.info("Downloading %s to %s...", self.url, self.cache_file)
diff --git a/tests/functional/qemu_test/testcase.py b/tests/functional/qemu_test/testcase.py
index 7bece8738a..6c67a9459c 100644
--- a/tests/functional/qemu_test/testcase.py
+++ b/tests/functional/qemu_test/testcase.py
@@ -184,6 +184,14 @@ def scratch_file(self, *args):
       def log_file(self, *args):
           return str(Path(self.outputdir, *args))
+    def assets_available(self):
+        for name, asset in vars(self.__class__).items():
+            if name.startswith("ASSET_") and type(asset) == Asset:
+                if not asset.available():
+                    self.log.debug(f"Asset {asset.url} not available")
+                    return False
+        return True
+
       def setUp(self, bin_prefix):
           self.assertIsNotNone(self.qemu_bin, 'QEMU_TEST_QEMU_BINARY must be set')
           self.arch = self.qemu_bin.split('-')[-1]
@@ -209,6 +217,9 @@ def setUp(self, bin_prefix):
           self.machinelog.setLevel(logging.DEBUG)
           self.machinelog.addHandler(self._log_fh)
+        if not self.assets_available():
+            self.skipTest('One or more assets is not available')

So if a test_*.py file consists of multiple subtests, this will now skip all
of them if just the asset of one subtest is missing?

Yep, I kept it simple. Often multiple assets all come from the same
server (eg kernel + initrd), and the same assets are used across
multiple tests.

Could we maybe handle this test skipping in the new archive_extract() and
uncompress() functions instead, so that only the related subtests will be
skipped? (We still might need another wrapper function in testcase for the
spots that still call .fetch() on the assets directly, though)

I'm not sure it's worth the effort to ensure we don't leave gaps in places
that need skipping.

We still consider this skipping scenario highly undesirable, and want to
try to ensure it never actually triggers, i.e. we want the cache working
in GitLab CI, so that we almost never need to download anything.

The most likely place to see skips is for developers locally, if they're
running tests for the first time or haven't done so for a long while.

Ok, fair point. Maybe it's even better to skip the whole test so that it shows up as "SKIP" in the summary; otherwise we wouldn't notice in the test summary if just one single subtest is skipped. So after thinking about this for a while, this is maybe even the better approach indeed.

Thus:
Reviewed-by: Thomas Huth <thuth@redhat.com>
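
For illustration, a minimal sketch of how a test file that registers assets
would interact with this change; the file name, asset URL and hash below are
hypothetical placeholders, and it assumes the usual qemu_test imports plus
the helpers added in the patch above:

# tests/functional/test_example.py (hypothetical)
from qemu_test import QemuSystemTest, Asset

class ExampleTest(QemuSystemTest):

    # Assets are class attributes named ASSET_*, which is how
    # assets_available() discovers them via vars(self.__class__).
    ASSET_KERNEL = Asset(
        'https://example.org/path/to/vmlinuz',   # placeholder URL
        '0000000000000000000000000000000000000000000000000000000000000000')

    def test_kernel_present(self):
        # If QEMU_TEST_NO_DOWNLOAD=1 is set and the asset cache is empty,
        # setUp() has already called skipTest(), so this body never runs
        # and the whole test file is reported as SKIP.
        kernel_path = self.ASSET_KERNEL.fetch()
        self.log.info("kernel available at %s", kernel_path)

if __name__ == '__main__':
    QemuSystemTest.main()

With downloads enabled (or the asset already cached), the test runs normally;
with QEMU_TEST_NO_DOWNLOAD set and no cached copy, the whole module shows up
as SKIP in the summary, which is the behaviour agreed on above.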