|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14 |
|
| #
14e594a1 |
| 19-Mar-2025 |
Rae Moar <[email protected]> |
kunit: tool: fix count of tests if late test plan
Fix test count with late test plan.
For example:
TAP version 13
ok 1 test1
1..4
This returns a count of 1 passed and 1 crashed (because it expects tests after the test plan), for a total count of 2 tests.
Change this to 1 passed and 1 error, for a total count of 1 test.
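A quick way to see the behavior, assuming kunit.py parse accepts KTAP on stdin as in the other examples in this log (the command itself is illustrative, not from the original commit):
$ echo -e 'TAP version 13\nok 1 test1\n1..4' | ./tools/testing/kunit/kunit.py parse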
Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.14-rc7 |
|
| #
1d4c06d5 |
| 13-Mar-2025 |
Rae Moar <[email protected]> |
kunit: tool: Fix bug in parsing test plan
A bug was identified where the KTAP below caused an infinite loop:
TAP version 13
ok 4 test_case
1..4
The infinite loop was caused by the parser not parsing a test plan when it follows a test result line.
Fix this bug by parsing the test plan line in that case, avoiding the infinite loop.
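For illustration only, a minimal sketch of the fix's shape (simplified regexes and names, not the actual kunit_parser.py code):
import re

def parse(lines):
    results, plan = [], None
    for line in lines:
        if re.match(r'(ok|not ok) \d+', line):
            results.append(line)
        elif re.match(r'\d+\.\.\d+$', line):
            # Accept the plan even after a result line. The buggy version only
            # looked for the plan before any results, so a late plan matched no
            # branch and the parser's position in the stream never advanced.
            plan = line
    return plan, results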
Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12 |
|
| #
3c67a2c0 |
| 13-Nov-2024 |
Rae Moar <[email protected]> |
kunit: tool: print failed tests only
Add a --failed flag to kunit.py to print only failed tests. This printing is done after the test run is over.
This patch also adds the method print_test(), which prints a given Test object. Before, all printing of tests occurred during parsing. This method could be useful in the future when converting between KTAP and this pretty-printed output.
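Expected usage, assuming the flag attaches to the run subcommand like the other flags in this log (illustrative, not from the original commit):
$ ./tools/testing/kunit/kunit.py run --failed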
Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
062a9dd9 |
| 13-Nov-2024 |
David Gow <[email protected]> |
kunit: tool: Only print the summary
Allow only printing the summary at the end of a test run, rather than all individual test results. This summary will list a few failing tests if there are any.
To use:
./tools/testing/kunit/kunit.py run --summary
Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Rae Moar <[email protected]> Signed-off-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5 |
|
| #
8ae27bc7 |
| 07-Dec-2023 |
Rae Moar <[email protected]> |
kunit: tool: fix parsing of test attributes
Add parsing of attributes as diagnostic data. This fixes an issue where the test plan was parsed incorrectly as diagnostic data when located after suite-level attributes.
Note that if there is no test plan line, the diagnostic lines between the suite header and the first result will be saved in the suite log rather than in the first test case's log.
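Roughly the layout being described, sketched from the text above (the attribute line is an illustrative placeholder, not captured output):
KTAP version 1
1..1
KTAP version 1
# Subtest: example
# example.speed: slow
1..2
ok 1 test_1
ok 2 test_2
ok 1 example
Here the test plan "1..2" follows a suite-level attribute line and must still be recognized as a plan rather than as diagnostic data.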
Signed-off-by: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4 |
|
| #
723c8258 |
| 25-Jul-2023 |
Rae Moar <[email protected]> |
kunit: tool: Add command line interface to filter and report attributes
Add ability to kunit.py to filter attributes and report a list of tests including attributes without running tests.
Add flag "--filter" to input filters on test attributes. Tests will be filtered out if they do not match all inputted filters.
Example: --filter speed=slow (This filter would run only the tests that are marked as slow)
Filters have operations: <, >, <=, >=, !=, and =. But note that the characters < and > are often interpreted by the shell, so they may need to be quoted or escaped.
Example: --filter "speed>slow" or --filter speed\>slow (This filter would run only the tests that have a speed faster than slow.)
Additionally, multiple filters can be used.
Example: --filter "speed=slow, module!=example" (This filter would run only the tests that have the speed slow and are not in the "example" module)
Note if the user wants to skip filtered tests instead of not running/showing them use the "--filter_action=skip" flag instead.
Expose the output of kunit.action=list option with flag "--list_tests" to output a list of tests. Additionally, add flag "--list_tests_attr" to output a list of tests and their attributes. These flags are useful to see tests and test attributes without needing to run tests.
Example of the output of "--list_tests_attr":
example
example.test_1
example.test_2
# example.test_2.speed: slow
This output includes a suite, example, with two test cases, test_1 and test_2. And in this instance test_2 has been marked as slow.
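Putting the flags together, a hypothetical invocation (assuming they attach to the run subcommand like the examples above):
$ ./tools/testing/kunit/kunit.py run --filter "speed=slow, module!=example" --filter_action=skip
$ ./tools/testing/kunit/kunit.py run --list_tests_attr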
Reviewed-by: David Gow <[email protected]> Signed-off-by: Rae Moar <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3 |
|
| #
126901ba |
| 16-Mar-2023 |
Daniel Latypov <[email protected]> |
kunit: tool: remove unused imports and variables
We don't run a linter regularly over the kunit.py code (the default settings on most linters don't like kernel style, e.g. tabs), so some of these imports didn't get removed when they stopped being used.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8 |
|
| #
c2bb92bc |
| 30-Nov-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: make parser preserve whitespace when printing test log
Currently, kunit_parser.py is stripping all leading whitespace to make parsing easier. But this means we can't accurately show kernel output for failing tests or when the kernel crashes.
Embarrassingly, this affects even KUnit's own output, e.g.
[13:40:46] Expected 2 + 1 == 2, but
[13:40:46] 2 + 1 == 3 (0x3)
[13:40:46] not ok 1 example_simple_test
[13:40:46] [FAILED] example_simple_test
After this change, here's what the output in context would look like
[13:40:46] =================== example (4 subtests) ===================
[13:40:46] # example_simple_test: initializing
[13:40:46] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
[13:40:46]     Expected 2 + 1 == 2, but
[13:40:46]         2 + 1 == 3 (0x3)
[13:40:46] [FAILED] example_simple_test
[13:40:46] [SKIPPED] example_skip_test
[13:40:46] [SKIPPED] example_mark_skipped_test
[13:40:46] [PASSED] example_all_expect_macros_test
[13:40:46] # example: initializing suite
[13:40:46] # example: pass:1 fail:1 skip:2 total:4
[13:40:46] # Totals: pass:1 fail:1 skip:2 total:4
[13:40:46] ===================== [FAILED] example =====================
This example shows one minor cosmetic defect this approach has. The test counts lines prevent us from dedenting the suite-level output. But at the same time, any form of non-KUnit output would do the same unless it happened to be indented as well.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
5937e0c0 |
| 29-Nov-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: don't include KTAP headers and the like in the test log
We print the "test log" on failure. This is meant to be all the kernel output that happened during the test.
But we also include the special KTAP lines in it, which are often redundant.
E.g. we include the "not ok" line in the log, right before we print that the test case failed...
[13:51:48] Expected 2 + 1 == 2, but
[13:51:48] 2 + 1 == 3 (0x3)
[13:51:48] not ok 1 example_simple_test
[13:51:48] [FAILED] example_simple_test
More full example after this patch:
[13:51:48] =================== example (4 subtests) ===================
[13:51:48] # example_simple_test: initializing
[13:51:48] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
[13:51:48] Expected 2 + 1 == 2, but
[13:51:48] 2 + 1 == 3 (0x3)
[13:51:48] [FAILED] example_simple_test
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.1-rc7 |
|
| #
434498a6 |
| 23-Nov-2022 |
Rae Moar <[email protected]> |
kunit: tool: parse KTAP compliant test output
Change the KUnit parser to be able to parse test output that complies with the KTAP version 1 specification format found here: https://kernel.org/doc/html/latest/dev-tools/ktap.html. Ensure the parser is able to parse tests with the original KUnit test output format as well.
KUnit parser now accepts any of the following test output formats:
Original KUnit test output format:
TAP version 14
1..1
# Subtest: kunit-test-suite
1..3
ok 1 - kunit_test_1
ok 2 - kunit_test_2
ok 3 - kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 - kunit-test-suite
KTAP version 1 test output format:
KTAP version 1
1..1
KTAP version 1
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
ok 1 kunit-test-suite
New KUnit test output format (changes made in the next patch of this series):
KTAP version 1
1..1
KTAP version 1
# Subtest: kunit-test-suite
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 kunit-test-suite
Signed-off-by: Rae Moar <[email protected]> Reviewed-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.1-rc6, v6.1-rc5 |
|
| #
0a7d5c30 |
| 11-Nov-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: tweak error message when no KTAP found
We currently tell people we "couldn't find any KTAP output" with no indication as to what this might mean.
After this patch, we get:
$ ./tools/testing/kunit/kunit.py parse /dev/null
============================================================
[ERROR] Test: <missing>: Could not find any KTAP output. Did any KUnit tests run?
============================================================
Testing complete. Ran 0 tests: errors: 1
Note: we could try and generate a more verbose message like
> Please check .kunit/test.log to see the raw kernel output.
or the like, but we'd need to know what the build dir was to know where test.log actually lives.
This patch tries to make a more minimal improvement.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.1-rc4 |
|
| #
f473dd94 |
| 03-Nov-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: make TestCounts a dataclass
Since we're using Python 3.7+, we can use dataclasses to tersen the code.
It also lets us create pre-populated TestCounts() objects and compare them in our unit test. (Before, you could only create empty ones).
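A minimal sketch of what the dataclass buys (field names are assumptions for illustration, not necessarily the exact kunit_parser.py definition):
from dataclasses import dataclass

@dataclass
class TestCounts:
    passed: int = 0
    failed: int = 0
    crashed: int = 0
    skipped: int = 0
    errors: int = 0

# Pre-populated construction and structural equality now come for free:
assert TestCounts(passed=2, skipped=1) == TestCounts(passed=2, skipped=1)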
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.1-rc3 |
|
| #
f19dd011 |
| 28-Oct-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: print summary of failed tests if a few failed out of a lot
E.g. all the hw_breakpoint tests are failing right now. So if I run `kunit.py run --alltests --arch=x86_64`, then I see
> Testing complete. Ran 408 tests: passed: 392, failed: 9, skipped: 7
Seeing which 9 tests failed out of the hundreds is annoying. If my terminal doesn't have scrollback support, I have to resort to looking at `.kunit/test.log` for the `not ok` lines.
Teach kunit.py to print a summarized list of failures if the # of tests reaches an arbitrary threshold (>=100 tests).
To try and keep the output from being too long/noisy, this new logic a) just reports "parent_test failed" if every child test failed b) won't print anything if there are >10 failures (also arbitrary).
With this patch, we get an extra line of output showing:
> Testing complete. Ran 408 tests: passed: 392, failed: 9, skipped: 7
> Failures: hw_breakpoint
This also works with parameterized tests, e.g. if I add a fake failure
> Failures: kcsan.test_atomic_builtins_missing_barrier.threads=6
Note: we didn't have enough tests for this to be a problem before. But with commit 980ac3ad0512 ("kunit: tool: rename all_test_uml.config, use it for --alltests"), --alltests works and thus running >100 tests will probably become more common.
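A sketch of the heuristic described above (the thresholds come from this commit message; the function and its signature are illustrative, not the actual kunit.py code):
def summarize_failures(total_tests: int, failed_names: list) -> str:
    # Only worth printing when the run was large and the failure list is short
    # enough to read; collapsing children into a failed parent test is omitted here.
    if total_tests < 100 or not failed_names or len(failed_names) > 10:
        return ''
    return 'Failures: ' + ', '.join(failed_names)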
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1 |
|
| #
a15cfa39 |
| 10-Aug-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: make --raw_output=kunit (aka --raw_output) preserve leading spaces
With
$ kunit.py run --raw_output=all ...
you get the raw output from the kernel, e.g. something like
> TAP version 14
> 1..26
>     # Subtest: time_test_cases
>     1..1
>     ok 1 - time64_to_tm_test_date_range
> ok 1 - time_test_cases
But with --raw_output=kunit or equivalently --raw_output, you get
> TAP version 14
> 1..26
> # Subtest: time_test_cases
> 1..1
> ok 1 - time64_to_tm_test_date_range
> ok 1 - time_test_cases
It looks less readable in my opinion, and it also isn't "raw output."
This is due to sharing code with kunit_parser.py, which wants to strip leading whitespace since it uses anchored regexes. We could update the kunit_parser.py code to tolerate leading spaces, but this patch takes the easier way out and adds a bool flag.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18 |
|
| #
e756dbeb |
| 16-May-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: refactoring printing logic into kunit_printer.py
Context:
* kunit_kernel.py is importing kunit_parser.py just to use the print_with_timestamp() function
* the parser is directly printing to stdout, which will become an issue if we ever try to run multiple kernels in parallel
This patch introduces a kunit_printer.py file and migrates callers of kunit_parser.print_with_timestamp() to call kunit_printer.stdout.print_with_timestamp() instead.
Future changes: If we want to support showing results for parallel runs, we could then create new Printer's that don't directly write to stdout and refactor the code to pass around these Printer objects.
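A rough sketch of the shape this describes (illustrative; not the exact kunit_printer.py contents):
import datetime
import sys

class Printer:
    # Wraps an output stream so callers need not print straight to sys.stdout.
    def __init__(self, output=sys.stdout):
        self._output = output

    def print_with_timestamp(self, message: str) -> None:
        timestamp = datetime.datetime.now().strftime('%H:%M:%S')
        print(f'[{timestamp}] {message}', file=self._output)

stdout = Printer()  # callers migrate to kunit_printer.stdout.print_with_timestamp(...)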
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v5.18-rc7 |
|
| #
0453f984 |
| 09-May-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: misc cleanups
This primarily comes from running pylint over kunit tool code and ignoring some warnings we don't care about. If we ever got a fully clean setup, we could add this to run_checks.py, but we're not there yet.
Fix things like:
* Drop unused imports
* check `is None`, not `== None` (see PEP 8)
* remove redundant parens around returns
* remove redundant `else` / convert `elif` to `if` where appropriate
* rename make_arch_qemuconfig() param to base_kunitconfig (this is the name used in the subclass, and it's a better one)
* kunit_tool_test: check the exit code for SystemExit (could be 0)
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
94507ee3 |
| 12-May-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: minor cosmetic cleanups in kunit_parser.py
There should be no behavioral changes from this patch.
This patch removes redundant comment text, inlines a function used in only one place, and other such minor tweaks.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
dbf0b0d5 |
| 12-May-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: make parser stop overwriting status of suites w/ no_tests
Consider this invocation
$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..2
ok 1 - suite
# Subtest: no_tests_suite
# catastrophic error!
not ok 1 - no_tests_suite
EOF
It will have a 0 exit code even though there's a "not ok".
Consider this one:
$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..2
ok 1 - suite
not ok 1 - no_tests_suite
EOF
It will have a non-zero exit code.
Why? We have this line in kunit_parser.py
> parent_test = parse_test_header(lines, test)
where we have special handling when we see "# Subtest" and we ignore the explicitly reported "not ok 1" status!
Also, NO_TESTS at the suite level only results in a non-zero status code when there's only one suite at the moment.
This change is the minimal one to make sure we don't overwrite it.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
33d4a933 |
| 12-May-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: remove dead parse_crash_in_log() logic
This logic depends on the kernel logging a message containing 'kunit test case crashed', but there is no corresponding logic to do so.
This is likely a relic of the revision process KUnit initially went through when being upstreamed.
Delete it given that 1) it's been missing for years and likely won't get implemented, and 2) the parser has been moving toward being a more general KTAP parser, so KUnit-only magic like this isn't how we'd want to implement it.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1 |
|
| #
9660209d |
| 29-Mar-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: print clearer error message when there's no TAP output
Before:
$ ./tools/testing/kunit/kunit.py parse /dev/null
...
[ERROR] Test : invalid KTAP input!
After:
$ ./tools/testing/kunit/kunit.py parse /dev/null
...
[ERROR] Test <missing>: could not find any KTAP output!
This error message gets printed out when extract_tap_output() yielded no lines. So while it could be because of malformed KTAP output from KUnit, it could also be due to not having any KTAP output at all.
Try and make the error message here more clear.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
c2497643 |
| 08-Apr-2022 |
Daniel Latypov <[email protected]> |
kunit: tool: update test counts summary line format
Before:
> Testing complete. Passed: 137, Failed: 0, Crashed: 0, Skipped: 36, Errors: 0
After:
> Testing complete. Ran 173 tests: passed: 137, skipped: 36
Even with our current set of statuses, the output is a bit verbose. It could get worse in the future if we add more (e.g. timeout, kasan). Let's only print the relevant ones.
I had previously been sympathetic to the argument that always printing out all the statuses would make it easier to parse results. But now we have commit acd8e8407b8f ("kunit: Print test statistics on failure"), there are test counts printed out in the raw output. We don't currently print out an overall total across all suites, but it would be easy to add, if we see a need for that.
Signed-off-by: Daniel Latypov <[email protected]> Co-developed-by: David Gow <[email protected]> Signed-off-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6 |
|
| #
d34f82d6 |
| 24-Feb-2022 |
Kees Cook <[email protected]> |
kunit: tool: Do not colorize output when redirected
Filling log files with color codes makes diffs and other comparisons difficult. Only emit vt100 codes when stdout is a TTY.
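The check in question boils down to something like this sketch (not the exact tool code):
import sys

def red(text: str) -> str:
    # Only wrap in vt100 escape codes when stdout is an interactive terminal.
    if sys.stdout.isatty():
        return '\033[31m' + text + '\033[0m'
    return text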
Cc: Brendan Higgins <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Kees Cook <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6 |
|
| #
85310a62 |
| 15-Dec-2021 |
Daniel Latypov <[email protected]> |
kunit: tool: fix newly introduced typechecker errors
After upgrading mypy and pytype from pip, we see 2 new errors when running ./tools/testing/kunit/run_checks.py.
Error #1: mypy and pytype
They now deduce that importlib.util.spec_from_file_location() can return None and note that we're not checking for this.
We validate that the arch is valid (i.e. the file exists) beforehand. Add in an `assert spec is not None` to appease the checkers.
Error #2: pytype bug https://github.com/google/pytype/issues/1057
It doesn't like `from datetime import datetime`, specifically that a type shares a name with a module.
We can work around this by either
* renaming the import or just using `import datetime`
* passing the new `--fix-module-collisions` flag to pytype.
We pick the first option for now because
* the flag is quite new, only in the 2021.11.29 release.
* I'd prefer if people can just run `pytype <file>`
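A sketch of both fixes (the module name and path are illustrative placeholders, not taken from the original commit):
import importlib.util
import datetime  # plain module import sidesteps pytype's type/module name clash

spec = importlib.util.spec_from_file_location('qemu_config', 'qemu_configs/x86_64.py')
assert spec is not None  # the arch (and hence the file) was validated beforehand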
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
| #
e0cc8c05 |
| 14-Dec-2021 |
Daniel Latypov <[email protected]> |
kunit: tool: delete kunit_parser.TestResult type
The `log` field is unused, and the `status` field is accessible via `test.status`.
So it's simpler to just return the main `Test` object directly.
And since we're no longer returning a namedtuple, which has no type annotations, this hopefully means typecheckers are better equipped to find any errors.
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|
|
Revision tags: v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5 |
|
| #
142189f0 |
| 07-Oct-2021 |
Daniel Latypov <[email protected]> |
kunit: tool: print parsed test results fully incrementally
With the parser rework [1] and run_kernel() rework [2], this allows the parser to print out test results incrementally.
Currently, that's held up by the fact that the LineStream eagerly pre-fetches the next line when you call pop(). This blocks parse_test_result() from returning until the line *after* the "ok 1 - test name" line is also printed.
One can see this with the following example:
$ (echo -e 'TAP version 14\n1..3\nok 1 - fake test'; sleep 2; echo -e 'ok 2 - fake test 2'; sleep 3; echo -e 'ok 3 - fake test 3') | ./tools/testing/kunit/kunit.py parse
Before this patch [1]: there's a pause before 'fake test' is printed. After this patch: 'fake test' is printed out immediately.
This patch also adds
* a unit test to verify LineStream's behavior directly
* a test case to ensure that it's lazily calling the generator
* an explicit exception for when users go beyond EOF
[1] https://lore.kernel.org/linux-kselftest/[email protected]/ [2] https://lore.kernel.org/linux-kselftest/[email protected]/
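Illustrative sketch of the lazy behavior described (not the real LineStream class): the next line is only pulled from the underlying generator when it is actually needed, so pop() on an 'ok 1 - ...' line does not block waiting for the line after it:
class LazyLineStream:
    def __init__(self, lines):
        self._it = iter(lines)
        self._next = None

    def _fetch(self):
        if self._next is None:
            self._next = next(self._it)  # raises StopIteration past EOF

    def peek(self):
        self._fetch()
        return self._next

    def pop(self):
        self._fetch()
        line, self._next = self._next, None
        return line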
Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
|