Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2

# 4620032e | 24-Apr-2022 | Nick Kreeger <[email protected]>

Revert "[mlir][sparse] Expose SpareTensor passes as enums instead of opaque numbers for vectorization and parallelization options."
This reverts commit d59cf901cbae7991f7847eb038d825efff1221ad.
Bui
Revert "[mlir][sparse] Expose SpareTensor passes as enums instead of opaque numbers for vectorization and parallelization options."
This reverts commit d59cf901cbae7991f7847eb038d825efff1221ad.
Build fails on NVIDIA Sparse tests: https://lab.llvm.org/buildbot/#/builders/61/builds/25447
show more ...

# d59cf901 | 24-Apr-2022 | Nick Kreeger <[email protected]>

[mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.

The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes GitHub issue #53389

Reviewed By: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D123876
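
To make the CLI change concrete, here is a before/after flavor of such a pipeline option string. The enum value spellings below are illustrative assumptions based on the pass's internal strategy enums, not quoted from the patch:

```python
# Illustrative only: the option value spellings are assumptions, not quoted
# from D123876. The point is that values become readable enum names.
OLD = "sparsification{vectorization-strategy=2 parallelization-strategy=1}"
NEW = ("sparsification{vectorization-strategy=any-storage-inner-loop "
       "parallelization-strategy=dense-outer-loop}")
print(OLD, NEW, sep="\n")
```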

# 2310ced8 | 20-Apr-2022 | River Riddle <[email protected]>

[mlir][NFC] Update textual references of `func` to `func.func` in examples+python scripts
The special case parsing of `func` operations is being removed.
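
A small sketch of the new spelling, assuming the upstream mlir python bindings; the kernel itself is a made-up example:

```python
# Hypothetical example module using the dialect-qualified `func.func` form
# that examples and python scripts switch to (assumes upstream mlir bindings).
from mlir.ir import Context, Module

with Context():
    module = Module.parse("""
      func.func @add(%a: f32, %b: f32) -> f32 {
        %0 = arith.addf %a, %b : f32
        return %0 : f32
      }
    """)
    print(module)
```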

Revision tags: llvmorg-14.0.1


# 28063a28 | 08-Apr-2022 | Aart Bik <[email protected]>

[mlir][sparse] refactored python setup of sparse compiler
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D123419

Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3


# 36550692 | 08-Mar-2022 | River Riddle <[email protected]>

[mlir] Move the Builtin FuncOp to the Func dialect

This commit moves FuncOp out of the builtin dialect and into the Func dialect. This move has been planned in some capacity from the moment we made FuncOp an operation (years ago). This commit handles the functional aspects of the move, but various aspects are left untouched to ease migration: func::FuncOp is re-exported into the mlir namespace to reduce the actual API churn, and the assembly format still accepts the unqualified `func`. These temporary measures will remain for a little while to simplify migration before being removed.

Differential Revision: https://reviews.llvm.org/D121266
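
On the python-binding side of the move, FuncOp is built through the func dialect module. A minimal sketch, assuming the upstream mlir bindings:

```python
# Sketch, assuming upstream mlir python bindings: FuncOp now lives in the
# func dialect module rather than being a builtin operation.
from mlir.ir import (Context, F32Type, FunctionType, InsertionPoint,
                     Location, Module)
from mlir.dialects import func

with Context(), Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        f32 = F32Type.get()
        f = func.FuncOp("identity", FunctionType.get([f32], [f32]))
        with InsertionPoint(f.add_entry_block()):
            func.ReturnOp([f.entry_block.arguments[0]])
    print(module)  # the op prints with the qualified `func.func` spelling
```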

Revision tags: llvmorg-14.0.0-rc2


# 8b83b8f1 | 22-Feb-2022 | Aart Bik <[email protected]>

[mlir][sparse] refactor sparse compiler pipeline to single place
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D120347

# 515c6170 | 16-Feb-2022 | Aart Bik <[email protected]>

[mlir][linalg][sparse] add linalg optimization passes "upstream"

It is time to compose Linalg-related optimizations with SparseTensor-related optimizations. This is a careful first step, adding some general Linalg optimizations "upstream" of the sparse compiler in the full sparse compiler pipeline. Some minor changes were needed to make those optimizations aware of sparsity.

Note that after this, we will add a sparse-specific fusion rule, just to demonstrate the power of the new composition.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D119971
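
One way to picture the composition is as a textual pipeline in which Linalg rewrites run before sparsification. The passes named below exist upstream, but this particular selection is an assumption for illustration, not the exact set the commit adds:

```python
# Illustrative pipeline-string composition; not the exact pass list of D119971.
LINALG_PREPASSES = [
    "linalg-generalize-named-ops",   # general Linalg rewrites run first...
    "linalg-fuse-elementwise-ops",
]
PIPELINE = ",".join(LINALG_PREPASSES + ["sparsification"])  # ...then sparsify
print(PIPELINE)
```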

Revision tags: llvmorg-14.0.0-rc1, llvmorg-15-init


# 4998b1a6 | 01-Feb-2022 | wren romano <[email protected]>

[mlir][sparse] Updating sparse-compiler pipeline for python usage

Explicitly nests passes for FuncOp, adds more options to the sparse-compiler pipeline, and updates python integration tests. This should be sufficient to close https://github.com/llvm/llvm-project/issues/51751

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D118658
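
Explicit nesting means function-level passes are anchored on FuncOp in the pipeline text rather than left implicit. A minimal sketch, assuming the upstream mlir bindings; the pass names and anchor syntax vary across LLVM versions, so treat the string as illustrative:

```python
# Minimal sketch of explicit per-function nesting in a textual pipeline.
# Pass names and the `builtin.module(...)` anchor are illustrative.
from mlir.ir import Context
from mlir.passmanager import PassManager

with Context():
    pm = PassManager.parse(
        "builtin.module("
        "sparsification,"
        "func.func(convert-scf-to-cf),"   # explicitly nested on each function
        "convert-vector-to-llvm"
        ")"
    )
```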

# dec8af70 | 31-Jan-2022 | River Riddle <[email protected]>

[mlir] Move SelectOp from Standard to Arithmetic

This is part of splitting up the standard dialect. See https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for discussion.

Differential Revision: https://reviews.llvm.org/D118648
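
After the move, `select` is spelled as an arith op. A parse check sketch, assuming the upstream mlir python bindings (and written with today's `func.func` syntax):

```python
# `select` now lives in the arith dialect; sketch assumes upstream bindings.
from mlir.ir import Context, Module

with Context():
    Module.parse("""
      func.func @pick(%c: i1, %a: f32, %b: f32) -> f32 {
        %0 = arith.select %c, %a, %b : f32
        return %0 : f32
      }
    """)
```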

# ab47418d | 30-Jan-2022 | Matthias Springer <[email protected]>

[mlir][bufferize] Merge tensor-constant-bufferize into arith-bufferize

The bufferization of arith.constant ops is also switched over to BufferizableOpInterface-based bufferization, and the old implementation is deleted. Both implementations utilize GlobalCreator, now renamed to just `getGlobalFor`.

GlobalCreator no longer maintains a set of all created allocations to avoid duplicate allocations of the same constant. Instead, `getGlobalFor` scans the module to see if there is already a global allocation with the same constant value.

For compatibility reasons, it is still possible to create a pass that bufferizes only `arith.constant`. This pass (createConstantBufferizePass) can be deleted once all users have switched over to One-Shot bufferization.

Differential Revision: https://reviews.llvm.org/D118483
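
The scan-instead-of-cache strategy can be sketched with a toy model. This is plain Python with hypothetical names; the real `getGlobalFor` is C++ and walks the module's global ops:

```python
# Toy model of the deduplication strategy described above: scan existing
# globals for one with the same constant value instead of tracking created
# globals in a side set. All names here are hypothetical, not the C++ API.
from dataclasses import dataclass, field

@dataclass
class GlobalOp:
    name: str
    value: tuple

@dataclass
class ModuleModel:
    globals: list = field(default_factory=list)

def get_global_for(module: ModuleModel, value: tuple) -> GlobalOp:
    for g in module.globals:          # scan the module...
        if g.value == value:
            return g                  # ...and reuse a matching global
    g = GlobalOp(f"__constant_{len(module.globals)}", value)
    module.globals.append(g)
    return g

m = ModuleModel()
assert get_global_for(m, (1.0, 2.0)) is get_global_for(m, (1.0, 2.0))
```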

Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2


# 312c5140 | 09-Dec-2021 | Aart Bik <[email protected]>

[mlir][sparse] python driven test for SDDMM

Explores various sparsity combinations of the SDDMM kernel and verifies that the computed result is the same for all cases.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D115476
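
A condensed sketch of the test's idea: run the same SDDMM computation under every combination of per-dimension sparsity and check that all variants agree. Helper names are hypothetical; the real test compiles each variant through the sparse compiler:

```python
# Sketch of the combinatorial structure; the compiled-kernel invocation is
# replaced by a NumPy stand-in.
import itertools
import numpy as np

def sddmm(s, a, b):
    # SDDMM: sample the dense product A @ B with the sparsity pattern of S.
    return (s != 0) * (a @ b)

s = np.array([[0.0, 2.0], [3.0, 0.0]])
a, b = np.ones((2, 4)), np.ones((4, 2))
expected = sddmm(s, a, b)
for levels in itertools.product(["dense", "compressed"], repeat=2):
    # `levels` would select the sparse tensor encoding of S for this variant.
    result = sddmm(s, a, b)  # stand-in for invoking the compiled kernel
    np.testing.assert_allclose(result, expected)
```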

# 64e171c2 | 07-Dec-2021 | Bixia Zheng <[email protected]>

Avoid unnecessary output buffer allocation and initialization.

The sparse tensor code generator allocates memory for the output tensor. As such, we only need to allocate a MemRefDescriptor to receive the output tensor and do not need to allocate and initialize the storage for the tensor.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D115292
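
A caller-side sketch of what this enables, assuming the mlir python runtime helpers: the caller passes only an empty descriptor for the output, and the compiled sparse kernel allocates and fills the actual storage:

```python
# Only the small descriptor struct is allocated up front, not the data buffer.
# The invocation line is hypothetical and shown for shape only.
import ctypes
from mlir.runtime import make_nd_memref_descriptor

desc_t = make_nd_memref_descriptor(2, ctypes.c_double)  # rank-2 f64 result
out = ctypes.pointer(ctypes.pointer(desc_t()))          # descriptor only
# engine.invoke("kernel", out, ...)  # kernel fills `out` with its own storage
```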

Revision tags: llvmorg-13.0.1-rc1


# 4748cc69 | 18-Nov-2021 | wren romano <[email protected]>

[mlir][sparse] Adding a stress test

Addresses https://bugs.llvm.org/show_bug.cgi?id=52410. Depends on D114192.

Reviewed By: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D114118

# 286248db | 18-Nov-2021 | wren romano <[email protected]>

[mlir][sparse] Moving integration tests that merely use the Python API
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D114192