=================================
LLVM Testing Infrastructure Guide
=================================

.. contents::
   :local:

.. toctree::
   :hidden:

   TestSuiteGuide
   TestSuiteMakefileGuide

Overview
========

This document is the reference manual for the LLVM testing
infrastructure. It documents the structure of the LLVM testing
infrastructure, the tools needed to use it, and how to add and run
tests.

Requirements
============

In order to use the LLVM testing infrastructure, you will need all of the
software required to build LLVM, as well as `Python <http://python.org>`_ 3.6 or
later.

LLVM Testing Infrastructure Organization
========================================

The LLVM testing infrastructure contains three major categories of tests:
unit tests, regression tests and whole programs. The unit tests and regression
tests are contained inside the LLVM repository itself under ``llvm/unittests``
and ``llvm/test`` respectively and are expected to always pass -- they should be
run before every commit.

The whole-program tests are referred to as the "LLVM test suite" (or
"test-suite") and are in the ``test-suite`` module in subversion. For
historical reasons, these tests are also referred to as the "nightly
tests" in places, which is less ambiguous than "test-suite" and remains
in use although we run them much more often than nightly.

Unit tests
----------

Unit tests are written using `Google Test <https://github.com/google/googletest/blob/master/docs/primer.md>`_
and `Google Mock <https://github.com/google/googletest/blob/master/docs/gmock_for_dummies.md>`_
and are located in the ``llvm/unittests`` directory.
In general, unit tests are reserved for the support library and other
generic data structures; we prefer relying on regression tests for testing
transformations and analyses on the IR.

Regression tests
----------------

The regression tests are small pieces of code that test a specific
feature of LLVM or trigger a specific bug in LLVM. The language they are
written in depends on the part of LLVM being tested. These tests are driven by
the :doc:`Lit <CommandGuide/lit>` testing tool (which is part of LLVM), and
are located in the ``llvm/test`` directory.

Typically, when a bug is found in LLVM, a regression test containing just
enough code to reproduce the problem should be written and placed
somewhere underneath this directory. For example, it can be a small
piece of LLVM IR distilled from an actual application or benchmark.

Testing Analysis
----------------

An analysis is a pass that infers properties about some part of the IR
without transforming it. Analyses are generally tested using the same
infrastructure as the regression tests, by creating a separate "Printer"
pass that consumes the analysis result and prints it on the standard output
in a textual format suitable for FileCheck.
See `llvm/test/Analysis/BranchProbabilityInfo/loop.ll <https://github.com/llvm/llvm-project/blob/main/llvm/test/Analysis/BranchProbabilityInfo/loop.ll>`_
for an example of such a test.

``test-suite``
--------------

The test suite contains whole programs, which are pieces of code that
can be compiled and linked into a stand-alone program that can be
executed. These programs are generally written in high-level languages
such as C or C++.

These programs are compiled using a user-specified compiler and set of
flags, and then executed to capture the program output and timing
information. The output of these programs is compared to a reference
output to ensure that the program is being compiled correctly.

In addition to compiling and executing programs, whole-program tests
serve as a way of benchmarking LLVM performance, both in terms of the
efficiency of the programs generated as well as the speed with which
LLVM compiles, optimizes, and generates code.

The test-suite is located in the ``test-suite`` Subversion module.

See the :doc:`TestSuiteGuide` for details.

Debugging Information tests
---------------------------

The test suite contains tests to check the quality of debugging information.
The tests are written in C-based languages or in LLVM assembly language.

These tests are compiled and run under a debugger. The debugger output
is checked to validate the debugging information. See README.txt in the
test suite for more information. This test suite is located in the
``cross-project-tests/debuginfo-tests`` directory.

Quick start
===========

The tests are located in two separate Subversion modules. The unit and
regression tests are in the main "llvm" module under the directories
``llvm/unittests`` and ``llvm/test`` (so you get these tests for free with the
main LLVM tree). Use ``make check-all`` to run the unit and regression tests
after building LLVM.

The ``test-suite`` module contains more comprehensive tests, including whole C
and C++ programs. See the :doc:`TestSuiteGuide` for details.

Unit and Regression tests
-------------------------

To run all of the LLVM unit tests, use the ``check-llvm-unit`` target:

.. code-block:: bash

    % make check-llvm-unit

To run all of the LLVM regression tests, use the ``check-llvm`` target:

.. code-block:: bash

    % make check-llvm

In order to get reasonable testing performance, build LLVM and its subprojects
in release mode, i.e.:

.. code-block:: bash

    % cmake -DCMAKE_BUILD_TYPE="Release" -DLLVM_ENABLE_ASSERTIONS=On

If you have `Clang <https://clang.llvm.org/>`_ checked out and built, you
can run the LLVM and Clang tests simultaneously using:

.. code-block:: bash

    % make check-all

To run the tests with Valgrind (Memcheck by default), use the ``LIT_ARGS`` make
variable to pass the required options to lit. For example, you can use:

.. code-block:: bash

    % make check LIT_ARGS="-v --vg --vg-leak"

to enable testing with Valgrind and leak checking.

To run individual tests or subsets of tests, you can use the ``llvm-lit``
script, which is built as part of LLVM. For example, to run the
``Integer/BitPacked.ll`` test by itself you can run:

.. code-block:: bash

    % llvm-lit ~/llvm/test/Integer/BitPacked.ll

or to run all of the ARM CodeGen tests:

.. code-block:: bash

    % llvm-lit ~/llvm/test/CodeGen/ARM

The regression tests will use the Python psutil module only if it is installed
in a **non-user** location. Under Linux, install it with sudo or within a
virtual environment. Under Windows, install Python for all users and then run
``pip install psutil`` in an elevated command prompt.

For more information on using the :program:`lit` tool, see ``llvm-lit --help``
or the :doc:`lit man page <CommandGuide/lit>`.

Debugging Information tests
---------------------------

To run the debugging information tests, simply add the ``debuginfo-tests``
project to your ``LLVM_ENABLE_PROJECTS`` definition on the cmake
command line.

Regression test structure
=========================

The LLVM regression tests are driven by :program:`lit` and are located in the
``llvm/test`` directory.

This directory contains a large array of small tests that exercise
various features of LLVM and ensure that regressions do not occur.
The directory is broken into several sub-directories, each focused on a
particular area of LLVM.

Writing new regression tests
----------------------------

The regression test structure is very simple, but does require some
information to be set. This information is gathered via ``cmake``
and is written to a file, ``test/lit.site.cfg``, in the build directory.
The ``llvm/test`` Makefile does this work for you.

In order for the regression tests to work, each directory of tests must
have a ``lit.local.cfg`` file. :program:`lit` looks for this file to determine
how to run the tests. This file is just Python code and thus is very
flexible, but we've standardized it for the LLVM regression tests. If
you're adding a directory of tests, just copy ``lit.local.cfg`` from
another directory to get running. The standard ``lit.local.cfg`` simply
specifies which files to look in for tests. Any directory that contains
only directories does not need the ``lit.local.cfg`` file. Read the :doc:`Lit
documentation <CommandGuide/lit>` for more information.

Each test file must contain lines starting with "RUN:" that tell :program:`lit`
how to run it. If there are no RUN lines, :program:`lit` will issue an error
while running the test.

RUN lines are specified in the comments of the test program using the
keyword ``RUN`` followed by a colon, and lastly the command (pipeline)
to execute. Together, these lines form the "script" that :program:`lit`
executes to run the test case. The syntax of the RUN lines is similar to a
shell's syntax for pipelines, including I/O redirection and variable
substitution. However, even though these lines may *look* like a shell
script, they are not. RUN lines are interpreted by :program:`lit`.
Consequently, the syntax differs from shell in a few ways. You can specify
as many RUN lines as needed.

:program:`lit` performs substitution on each RUN line to replace LLVM tool names
with the full paths to the executable built for each tool (in
``$(LLVM_OBJ_ROOT)/$(BuildMode)/bin``). This ensures that :program:`lit` does
not invoke any stray LLVM tools in the user's path during testing.

Each RUN line is executed on its own, distinct from other lines unless
its last character is ``\``. This continuation character causes the RUN
line to be concatenated with the next one. In this way you can build up
long pipelines of commands without making huge line lengths. The lines
ending in ``\`` are concatenated until a RUN line that doesn't end in
``\`` is found. This concatenated set of RUN lines then constitutes one
execution. :program:`lit` will substitute variables and arrange for the pipeline
to be executed. If any process in the pipeline fails, the entire line (and
test case) fails too.
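
For illustration, here is a minimal sketch of one pipeline split across two
RUN lines with the continuation character (it assumes the file also contains
``CHECK`` lines for FileCheck):

.. code-block:: llvm

    ; RUN: llvm-as < %s | llvm-dis | \
    ; RUN:   FileCheck %s

:program:`lit` joins the two lines into a single command before executing it,
so the pair behaves exactly like one RUN line.
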
Below is an example of legal RUN lines in a ``.ll`` file:

.. code-block:: llvm

    ; RUN: llvm-as < %s | llvm-dis > %t1
    ; RUN: llvm-dis < %s.bc-13 > %t2
    ; RUN: diff %t1 %t2

As with a Unix shell, the RUN lines permit pipelines and I/O
redirection to be used.

There are some quoting rules that you must pay attention to when writing
your RUN lines. In general, nothing needs to be quoted. :program:`lit` won't
strip off any quote characters, so they will get passed to the invoked program.
To avoid this, use curly braces to tell :program:`lit` that it should treat
everything enclosed as one value.

In general, you should strive to keep your RUN lines as simple as possible,
using them only to run tools that generate textual output you can then examine.
The recommended way to examine output to figure out if the test passes is using
the :doc:`FileCheck tool <CommandGuide/FileCheck>`. *[The usage of grep in RUN
lines is deprecated - please do not send or commit patches that use it.]*

Put related tests into a single file rather than having a separate file per
test. Check if there are files already covering your feature and consider
adding your code there instead of creating a new file.

Extra files
-----------

If your test requires extra files besides the file containing the ``RUN:`` lines
and the extra files are small, consider specifying them in the same file and
using ``split-file`` to extract them. For example:

.. code-block:: llvm

  ; RUN: split-file %s %t
  ; RUN: llvm-link -S %t/a.ll %t/b.ll | FileCheck %s

  ; CHECK: ...

  ;--- a.ll
  ...
  ;--- b.ll
  ...

The parts are separated by the regex ``^(.|//)--- <part>``. By default the
extracted content has leading empty lines to preserve line numbers. Specify
``--no-leading-lines`` to drop the leading lines.

If the extra files are large, the idiomatic place to put them is in a
subdirectory named ``Inputs``. You can then refer to the extra files as
``%S/Inputs/foo.bar``.

For example, consider ``test/Linker/ident.ll``. The directory structure is
as follows::

  test/
    Linker/
      ident.ll
      Inputs/
        ident.a.ll
        ident.b.ll

For convenience, these are the contents:

.. code-block:: llvm

  ;;;;; ident.ll:

  ; RUN: llvm-link %S/Inputs/ident.a.ll %S/Inputs/ident.b.ll -S | FileCheck %s

  ; Verify that multiple input llvm.ident metadata are linked together.

  ; CHECK-DAG: !llvm.ident = !{!0, !1, !2}
  ; CHECK-DAG: "Compiler V1"
  ; CHECK-DAG: "Compiler V2"
  ; CHECK-DAG: "Compiler V3"

  ;;;;; Inputs/ident.a.ll:

  !llvm.ident = !{!0, !1}
  !0 = metadata !{metadata !"Compiler V1"}
  !1 = metadata !{metadata !"Compiler V2"}

  ;;;;; Inputs/ident.b.ll:

  !llvm.ident = !{!0}
  !0 = metadata !{metadata !"Compiler V3"}

For symmetry reasons, ``ident.ll`` is just a dummy file that doesn't
actually participate in the test besides holding the ``RUN:`` lines.

.. note::

  Some existing tests use ``RUN: true`` in extra files instead of just
  putting the extra files in an ``Inputs/`` directory. This pattern is
  deprecated.

Fragile tests
-------------

It is easy to write a fragile test that would fail spuriously if the tool being
tested outputs a full path to the input file. For example, :program:`opt` by
default outputs a ``ModuleID``:

.. code-block:: console

  $ cat example.ll
  define i32 @main() nounwind {
    ret i32 0
  }

  $ opt -S /path/to/example.ll
  ; ModuleID = '/path/to/example.ll'

  define i32 @main() nounwind {
    ret i32 0
  }

``ModuleID`` can unexpectedly match against ``CHECK`` lines. For example:

.. code-block:: llvm

  ; RUN: opt -S %s | FileCheck %s

  define i32 @main() nounwind {
      ; CHECK-NOT: load
      ret i32 0
  }

This test will fail if placed into a ``download`` directory: the ``ModuleID``
comment then contains the string ``load`` (inside ``download``), which matches
the ``CHECK-NOT: load`` directive.

To make your tests robust, always use ``opt ... < %s`` in the RUN line.
:program:`opt` does not output a ``ModuleID`` when input comes from stdin.

Platform-Specific Tests
-----------------------

Whenever you add a test that requires knowledge of a specific platform,
whether related to the generated code, specific output, or back-end features,
you must make sure to isolate those features so that buildbots running on
different architectures (which may not even compile all back-ends) don't fail.

The first problem is checking for target-specific output, such as structure
sizes, paths, and architecture names. For example:

* Tests containing Windows paths will fail on Linux and vice-versa.
* Tests that check for ``x86_64`` somewhere in the text will fail anywhere else.
* Tests where the debug information calculates the size of types and structures.

Also, if the test relies on any behaviour that is coded in any back-end, it must
go in its own directory. So, for instance, code generator tests for ARM go
into ``test/CodeGen/ARM`` and so on. Those directories contain a special
``lit`` configuration file that ensures all tests in that directory will
only run if a specific back-end is compiled and available.

For instance, in ``test/CodeGen/ARM``, the ``lit.local.cfg`` is:

.. code-block:: python

  config.suffixes = ['.ll', '.c', '.cpp', '.test']
  if not 'ARM' in config.root.targets:
    config.unsupported = True

Other platform-specific tests are those that depend on a specific feature
of a specific sub-architecture, for example features available only on Intel
chips that support ``AVX2``.

For instance, ``test/CodeGen/X86/psubus.ll`` tests three sub-architecture
variants:

.. code-block:: llvm

  ; RUN: llc -mcpu=core2 < %s | FileCheck %s -check-prefix=SSE2
  ; RUN: llc -mcpu=corei7-avx < %s | FileCheck %s -check-prefix=AVX1
  ; RUN: llc -mcpu=core-avx2 < %s | FileCheck %s -check-prefix=AVX2

And the checks are different:

.. code-block:: llvm

  ; SSE2: @test1
  ; SSE2: psubusw LCPI0_0(%rip), %xmm0
  ; AVX1: @test1
  ; AVX1: vpsubusw LCPI0_0(%rip), %xmm0, %xmm0
  ; AVX2: @test1
  ; AVX2: vpsubusw LCPI0_0(%rip), %xmm0, %xmm0

So, if you're testing for a behaviour that you know is platform-specific or
depends on special features of sub-architectures, you must add the specific
triple, check with the appropriate FileCheck prefixes, and put the test into
the specific directory that will filter out all other architectures.


Constraining test execution
---------------------------

Some tests can be run only in specific configurations, such as
with debug builds or on particular platforms. Use ``REQUIRES``
and ``UNSUPPORTED`` to control when the test is enabled.

Some tests are expected to fail. For example, there may be a known bug
that the test detects. Use ``XFAIL`` to mark a test as an expected failure.
An ``XFAIL`` test will be successful if its execution fails, and
will be a failure if its execution succeeds.

.. code-block:: llvm

    ; This test will only be enabled in a build with asserts.
    ; REQUIRES: asserts
    ; This test is disabled on Linux.
    ; UNSUPPORTED: -linux-
    ; This test is expected to fail on PowerPC.
    ; XFAIL: powerpc

``REQUIRES``, ``UNSUPPORTED``, and ``XFAIL`` all accept a comma-separated
list of boolean expressions. The values in each expression may be:

- Features added to ``config.available_features`` by configuration files such as ``lit.cfg``.
  String comparison of features is case-sensitive. Furthermore, a boolean expression can
  contain any Python regular expression enclosed in ``{{ }}``, in which case the boolean
  expression is satisfied if any feature matches the regular expression. Regular
  expressions can appear inside an identifier, so for example ``he{{l+}}o`` would match
  ``helo``, ``hello``, ``helllo``, and so on.
- Substrings of the target triple (``UNSUPPORTED`` and ``XFAIL`` only).

| ``REQUIRES`` enables the test if all expressions are true.
| ``UNSUPPORTED`` disables the test if any expression is true.
| ``XFAIL`` expects the test to fail if any expression is true.

As a special case, ``XFAIL: *`` is expected to fail everywhere.

.. code-block:: llvm

    ; This test is disabled on Windows,
    ; and is disabled on Linux, except for Android Linux.
    ; UNSUPPORTED: windows, linux && !android
    ; This test is expected to fail on both PowerPC and ARM.
    ; XFAIL: powerpc || arm


Substitutions
-------------

Besides replacing LLVM tool names, the following substitutions are performed in
RUN lines:

``%%``
  Replaced by a single ``%``. This allows escaping other substitutions.

``%s``
  File path to the test case's source. This is suitable for passing on the
  command line as the input to an LLVM tool.

  Example: ``/home/user/llvm/test/MC/ELF/foo_test.s``

``%S``
  Directory path to the test case's source.

  Example: ``/home/user/llvm/test/MC/ELF``

``%t``
  File path to a temporary file name that could be used for this test case.
  The file name won't conflict with other test cases. You can append to it
  if you need multiple temporaries. This is useful as the destination of
  some redirected output.

  Example: ``/home/user/llvm.build/test/MC/ELF/Output/foo_test.s.tmp``

``%T``
  Directory of ``%t``. Deprecated. Shouldn't be used, because it can be easily
  misused and cause race conditions between tests.

  Use ``rm -rf %t && mkdir %t`` instead if a temporary directory is necessary.

  Example: ``/home/user/llvm.build/test/MC/ELF/Output``

``%{pathsep}``
  Expands to the path separator, i.e. ``:`` (or ``;`` on Windows).

``%/s, %/S, %/t, %/T:``
  Act like the corresponding substitution above but replace any ``\``
  character with a ``/``. This is useful to normalize path separators.

  Example: ``%s: C:\Desktop Files/foo_test.s.tmp``

  Example: ``%/s: C:/Desktop Files/foo_test.s.tmp``

``%:s, %:S, %:t, %:T:``
  Act like the corresponding substitution above but remove colons at
  the beginning of Windows paths. This is useful to allow concatenation
  of absolute paths on Windows to produce a legal path.

  Example: ``%s: C:\Desktop Files\foo_test.s.tmp``

  Example: ``%:s: C\Desktop Files\foo_test.s.tmp``

``%errc_<ERRCODE>``
  Some error messages may be substituted to allow different spellings
  based on the host platform.

  The following error codes are currently supported:
  ENOENT, EISDIR, EINVAL, EACCES.

  Example: ``Linux %errc_ENOENT: No such file or directory``

  Example: ``Windows %errc_ENOENT: no such file or directory``

**LLVM-specific substitutions:**

``%shlibext``
  The suffix for the host platform's shared library files. This includes the
  period as the first character.

  Example: ``.so`` (Linux), ``.dylib`` (macOS), ``.dll`` (Windows)

``%exeext``
  The suffix for the host platform's executable files. This includes the
  period as the first character.

  Example: ``.exe`` (Windows), empty on Linux.

``%(line)``, ``%(line+<number>)``, ``%(line-<number>)``
  The number of the line where this substitution is used, with an optional
  integer offset. This can be used in tests with multiple RUN lines, which
  reference the test file's line numbers.


**Clang-specific substitutions:**

``%clang``
  Invokes the Clang driver.

``%clang_cpp``
  Invokes the Clang driver for C++.

``%clang_cl``
  Invokes the CL-compatible Clang driver.

``%clangxx``
  Invokes the G++-compatible Clang driver.

``%clang_cc1``
  Invokes the Clang frontend.

``%itanium_abi_triple``, ``%ms_abi_triple``
  These substitutions can be used to get the current target triple adjusted to
  the desired ABI. For example, if the test suite is running with the
  ``i686-pc-win32`` target, ``%itanium_abi_triple`` will expand to
  ``i686-pc-mingw32``. This allows a test to run with a specific ABI without
  constraining it to a specific triple.

**FileCheck-specific substitutions:**

``%ProtectFileCheckOutput``
  This should precede a ``FileCheck`` call if and only if the call's textual
  output affects test results. It's usually easy to tell: just look for
  redirection or piping of the ``FileCheck`` call's stdout or stderr.

To add more substitutions, look at ``test/lit.cfg`` or ``lit.local.cfg``.


Options
-------

The llvm lit configuration allows some customization through user options:

``llc``, ``opt``, ...
  Substitute the respective llvm tool name with a custom command line. This
  allows specifying custom paths and default arguments for these tools.
  Example:

  % llvm-lit "-Dllc=llc -verify-machineinstrs"

``run_long_tests``
  Enable the execution of long running tests.

``llvm_site_config``
  Load the specified lit configuration instead of the default one.


Other Features
--------------

To make RUN line writing easier, there are several helper programs. These
helpers are in the PATH when running tests, so you can just call them using
their name. For example:

``not``
  This program runs its arguments and then inverts the result code from it.
  Zero result codes become 1. Non-zero result codes become 0.

To make the output more useful, :program:`lit` will scan
the lines of the test case for ones that contain a pattern that matches
``PR[0-9]+``. This is the syntax for specifying a PR (Problem Report) number
that is related to the test case. The number after "PR" specifies the
LLVM Bugzilla number. When a PR number is specified, it will be used in
the pass/fail reporting. This is useful to quickly get some context when
a test fails.
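
For illustration only, here is a sketch of how such a marker might appear in a
test file (the PR number and IR below are hypothetical):

.. code-block:: llvm

    ; RUN: llvm-as < %s | llvm-dis | FileCheck %s
    ; PR1234: hypothetical reference to the originating bug report.

    ; CHECK: define void @foo()
    define void @foo() {
      ret void
    }

Because the comment matches ``PR[0-9]+``, the number is picked up and shown in
the pass/fail reporting described above.
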
Finally, any line that contains "END." will cause the special
interpretation of lines to terminate. This is generally done right after
the last RUN: line; a short sketch follows the list below. This has two
side effects:

(a) it prevents special interpretation of lines that are part of the test
    program, not the instructions to the test case, and

(b) it speeds things up for really big test cases by avoiding
    interpretation of the remainder of the file.
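
As a minimal sketch (the IR and CHECK line are illustrative only), the marker
typically sits immediately after the last RUN line:

.. code-block:: llvm

    ; RUN: llvm-as < %s | llvm-dis | FileCheck %s
    ; END.

    ; lit stops scanning for RUN: lines (and PR numbers) at the END. marker
    ; above; FileCheck still reads the whole file for its CHECK directives.
    ; CHECK: define void @bar()
    define void @bar() {
      ret void
    }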