From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH v2 6/6] iotest 201: new test for qmp nbd-server-remove
Date: Mon, 15 Jan 2018 21:28:21 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2

15.01.2018 18:05, Eric Blake wrote:
> On 01/15/2018 08:40 AM, Vladimir Sementsov-Ogievskiy wrote:

>>> And the list archives show several threads of people complaining that
>>> ./check failing with a diff that merely shows:
>>>
>>> -.....
>>> +..E..
>> I didn't see that. Usually, for failed iotests I see
>>
>> -.....
>> +..E..
>>
>> + some kind of assert failure in one of the test cases
> Although deciphering the assert-fail is not always trivial, and it is
> still sorely underdocumented on how to manually reproduce the situation
> that got to the stackdump.

Hmm, just restart the test? Is that documented for 194?

I don't see an option to run only one testcase, but unittest supports it,
so we just need to add this option to ./check
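Something like this already works in bare unittest (a minimal sketch, not the
real 201 test and not the actual ./check plumbing; the class and method names
here only mirror my test):

import unittest

class TestNbdServerRemove(unittest.TestCase):
    def test_connect_after_remove_default(self):
        self.assertTrue(True)   # placeholder body

    def test_connect_after_remove_force(self):
        self.assertTrue(True)   # placeholder body

if __name__ == '__main__':
    # unittest.main() parses sys.argv, so
    #   python this_file.py TestNbdServerRemove.test_connect_after_remove_default
    # runs exactly one test case; ./check would only have to forward
    # such an argument to the test script.
    unittest.main()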


Is it really not trivial with my test?

For example, inject a bug:
--- a/tests/qemu-iotests/201
+++ b/tests/qemu-iotests/201
@@ -76,7 +76,7 @@ class TestNbdServerRemove(iotests.QMPTestCase):
         self.assertEqual(filter_qemu_io(qemu_io_output).strip(),
                          "can't open device nbd+tcp://localhost:10900/exp: " +
                          "Requested export not available\nserver reported: " +
-                         "export 'exp' not present")
+                         "1export 'exp' not present")

     def do_test_connect_after_remove(self, force=None):
         args = ('-r', '-f', 'raw', '-c', 'read 0 512', nbd_uri)



output:

-.......
+FFF....
+======================================================================
+FAIL: test_connect_after_remove_default (__main__.TestNbdServerRemove)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+  File "201", line 89, in test_connect_after_remove_default
+    self.do_test_connect_after_remove()
+  File "201", line 86, in do_test_connect_after_remove
+    self.assertConnectFailed(qemu_io(*args))
+  File "201", line 79, in assertConnectFailed
+    "1export 'exp' not present")
+AssertionError: "can't open device nbd+tcp://localhost:10900/exp: Requested export not available\nserver reported: export 'exp' not present" != "can't open device nbd+tcp://localhost:10900/exp: Requested export not available\nserver reported: 1export 'exp' not present"
+
[...]


All is obvious: we see what happened and where, and the name of the broken test case.


----
I remember the following problems with iotests, but I do not think they are a reason to deprecate unittest and go some custom way. Better to fix them. They are all problems of our unittest wrapping, not of Python's unittest.

- assertion failures and prints do not mix naturally in the final output (isn't that a reason never to use print in these tests?)
- no progress indication; to see the output we have to wait until the test has finished
- if qemu crashed, it is hard to tell in which test case it happened (bare unittest can already help with the last two points; see the sketch just below)
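For the last two points, bare unittest already has something to offer:
verbosity. A minimal sketch (plain unittest, not our iotests wrapper):

import unittest

class Demo(unittest.TestCase):
    def test_first(self):
        self.assertEqual(2 + 2, 4)

    def test_second(self):
        self.assertIn('E', '..E..')

if __name__ == '__main__':
    # With verbosity=2 the runner prints each test case name as it starts
    # and finishes ("test_first (__main__.Demo) ... ok") instead of a bare
    # row of dots at the very end, so progress is visible and a qemu crash
    # can be attributed to the test case that was running.
    unittest.main(verbosity=2)

If I read our iotests.py wrapper right, it would also have to let this stream
reach the terminal instead of only the captured output that gets diffed.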

So, my point is: use unittest. It is a standard library and the common way of doing this, and it is already used in QEMU iotests. It gives good organization of the test code.
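Roughly this shape (a stripped-down sketch, assuming the iotests module from
tests/qemu-iotests; names and bodies are illustrative, not the actual 201 code):

import iotests

class TestNbdServerRemove(iotests.QMPTestCase):
    def setUp(self):
        # one shared place to start the VM and create the NBD export
        pass

    def tearDown(self):
        # one shared place to stop the VM and clean up
        pass

    def test_connect_after_remove_default(self):
        pass

    def test_connect_after_remove_force(self):
        pass

if __name__ == '__main__':
    iotests.main(supported_fmts=['raw'])

Every test case gets the same prepared server for free, and a failure is
reported against the test method's name.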

Maybe, a "plain executable python test" is good for complicated tests, which are actually cant be called "unit test", but which are more like "system wide test", when we actually need only one testcase, but it needs several pages of code.


>> so we know in which test case and at which line it failed.

>>> makes it rather hard to see WHAT test 2 was doing that caused an error
>>> instead of a pass, let alone set up a reproduction scenario on JUST the
>>> failing test.  Yes, a lot of existing iotests use this unittest layout,
>>> and on that grounds, I'm not opposed to adding another one; but test 194
>>> really IS easier to debug when something goes wrong.

>> And there are 3 test cases sharing the same setUp. Aren't you saying that
>> unittest is becoming deprecated in QEMU? I think, if we have only one test
>> case, we may use the 194-like approach, but if we have more, it's better
>> to use unittest.
> Yes, I think a nice goal for improved testing is to write more
> python-based iotests in the style that uses actual output, and not just
> the unittest framework, in the test log.  It's not a hard requirement as
> long as no one has converted existing tests, but is food for thought.

>> I think it doesn't mean that we should not use unittest at all; we just
>> need more output with it.
> Yes, that's also a potentially viable option.



--
Best regards,
Vladimir



