Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init

#27a431f5 | 19-Jul-2022 | Matthias Springer <[email protected]>
[mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.
Differential Revision: https://reviews.llvm.org/D129985
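
For orientation, a minimal sketch of how the two bufferization ops pair up after this change. The `#CSR` encoding, function name, and shapes are invented for illustration, and the encoding syntax shown is the one in use at the time (it has since evolved):

```mlir
// Hypothetical example: allocate a sparse tensor through the bufferization
// dialect and release it explicitly with the moved op.
#CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>

func.func @alloc_and_release() {
  // Allocation now lives in the bufferization dialect ...
  %t = bufferization.alloc_tensor() : tensor<8x16xf64, #CSR>
  // ... and so does the explicit deallocation (formerly sparse_tensor.release).
  bufferization.dealloc_tensor %t : tensor<8x16xf64, #CSR>
  return
}
```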

#28ebb0b6 | 15-Jul-2022 | Aart Bik <[email protected]>
[mlir][sparse] migrate sparse rewriting to sparse transformations pass
The rules in the linalg file were very specific to sparse tensors, so they will find a better home under the sparse tensor dialect than the linalg dialect. Also moved some rewriting from sparsification into this new "pre-rewriting" file.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D129910

#c66303c2 | 14-Jul-2022 | Matthias Springer <[email protected]>
[mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.
Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.
Differential Revision: https://reviews.llvm.org/D129356
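
A hedged sketch of the "out" tensor calling convention mentioned in the first bullet above. The kernel, names, and shapes are invented (not code from the patch), and the string-based `iterator_types` syntax is the one of that era: the result buffer is passed in and returned, so One-Shot Bufferize can place the allocation, and hence the deallocation, at the call site.

```mlir
#map = affine_map<(d0) -> (d0)>

// Hypothetical kernel: instead of allocating its own result, it writes into
// the %out argument and returns it, so the caller owns the allocation.
func.func @scale(%arg: tensor<8xf64>, %out: tensor<8xf64>) -> tensor<8xf64> {
  %c2 = arith.constant 2.0 : f64
  %0 = linalg.generic
         {indexing_maps = [#map, #map], iterator_types = ["parallel"]}
         ins(%arg : tensor<8xf64>) outs(%out : tensor<8xf64>) {
  ^bb0(%a: f64, %b: f64):
    %m = arith.mulf %a, %c2 : f64
    linalg.yield %m : f64
  } -> tensor<8xf64>
  return %0 : tensor<8xf64>
}
```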

#c27d8152 | 14-Jul-2022 | Kazu Hirata <[email protected]>
[mlir] Use value instead of getValue (NFC)

#491d2701 | 13-Jul-2022 | Kazu Hirata <[email protected]>
[mlir] Use has_value instead of hasValue (NFC)

#faa00c13 | 09-Jul-2022 | Aart Bik <[email protected]>
[mlir][sparse] implement sparse2sparse reshaping (expand/collapse)
A previous revision implemented expand/collapse reshaping between dense and sparse tensors for sparse2dense and dense2sparse since those could use the "cheap" view reshape on the already materialized dense tensor (at either the input or output side), and do some reshuffling from or to sparse. The dense2dense case, as always, is handled with a "cheap" view change.
This revision implements the sparse2sparse cases. Lacking any "view" support on sparse tensors this operation necessarily has to perform data reshuffling on both ends.
Tracker for improving this: https://github.com/llvm/llvm-project/issues/56477
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D129416
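
For illustration only (encodings, shapes, and the function are invented, and both the encoding syntax and the `tensor.expand_shape` assembly format have changed in later releases), a sparse-to-sparse expansion of a 1-D tensor into a 2-D tensor could look like:

```mlir
#SparseVector = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>
#SparseMatrix = #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>

// Hypothetical sparse2sparse expand: both operand and result carry a sparse
// encoding, so the lowering must reshuffle the stored values on both ends.
func.func @expand(%v: tensor<100xf64, #SparseVector>) -> tensor<10x10xf64, #SparseMatrix> {
  %0 = tensor.expand_shape %v [[0, 1]]
      : tensor<100xf64, #SparseVector> into tensor<10x10xf64, #SparseMatrix>
  return %0 : tensor<10x10xf64, #SparseMatrix>
}
```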

#136d746e | 11-Jul-2022 | Jacques Pienaar <[email protected]>
[mlir] Flip accessors to prefixed form (NFC)
Another mechanical sweep to keep the diff small for the flip to _Prefixed.

#6d8e2f1e | 01-Jul-2022 | Aart Bik <[email protected]>
[mlir][sparse] implement simple reshaping (expand/collapse)
The revision makes a start with implementing expand/collapse reshaping for sparse tensors. When either the source or the destination is sparse, but the other is dense, the "cheap" dense reshape can be used prior to converting from or to a sparse tensor.
Note 1: sparse-to-sparse reshaping is still TBD.
Note 2: in the long run, we may want to implement a "view" into a sparse tensor so that the operation remains cheap and does not require data shuffling.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D129031

#875ee0ed | 01-Jul-2022 | wren romano <[email protected]>
[mlir][sparse] Reducing computational complexity
This is a followup to D128847. The `AffineMap::getPermutedPosition` method performs a linear scan of the map, thus the previous implementation had asymptotic complexity of `O(|topSort| * |m|)`. This change reduces that to `O(|topSort| + |m|)`.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D129011

#e057f25d | 29-Jun-2022 | Aart Bik <[email protected]>
[mlir][sparse] auto-insertion of conversion to resolve cycles
When the iteration graph is cyclic (even after several attempts using fewer and fewer constraints), the current sparse compiler bails out, and no rewriting happens. However, this revision adds some new logic where the sparse compiler tries to find a single input sparse tensor that breaks the cycle, and then adds a proper sparse conversion operation. This way, more incoming kernels can be handled!
Note, the resulting code is not optimal (although it keeps more or less proper "sparse" complexity), and more improvements should be added (especially when the kernel directly yields without computation, such as the transpose example). However, handling is better than not handling ;-)
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D128847
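
The inserted operation is an ordinary `sparse_tensor.convert` to a different encoding of the same tensor. A rough sketch of one plausible instance, with invented encodings and the `dimOrdering` form used at the time: converting a row-wise layout to a column-wise one, e.g. ahead of a transpose-like kernel.

```mlir
#CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>
#CSC = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  dimOrdering  = affine_map<(i, j) -> (j, i)>
}>

// Hypothetical conversion: materializing the operand in a different dimension
// ordering can break a cyclic iteration graph, at the cost of the copy.
func.func @to_csc(%m: tensor<8x8xf64, #CSR>) -> tensor<8x8xf64, #CSC> {
  %0 = sparse_tensor.convert %m : tensor<8x8xf64, #CSR> to tensor<8x8xf64, #CSC>
  return %0 : tensor<8x8xf64, #CSC>
}
```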

#eca6f916 | 27-Jun-2022 | Aart Bik <[email protected]>
[mlir][sparse][bufferization] refine bufferization assumption enforcement
Enforce the assumption made on tensor buffers explicitly. When in-place, reuse the buffer, but fill with all zeroes for the non-update case, since the kernel assumes all elements are written to. When not in-place, zero out the new buffer when materializing or when no-updates occur. Copy the original tensor value when updates occur. This prepares migrating to the new bufferization strategy, where these assumptions must be made explicit.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D128691

#3b7c3a65 | 25-Jun-2022 | Kazu Hirata <[email protected]>
Revert "Don't use Optional::hasValue (NFC)"
This reverts commit aa8feeefd3ac6c78ee8f67bf033976fc7d68bc6d.

#aa8feeef | 25-Jun-2022 | Kazu Hirata <[email protected]>
Don't use Optional::hasValue (NFC)

Revision tags: llvmorg-14.0.6

#8b68da2c | 17-Jun-2022 | Alex Zinenko <[email protected]>
[mlir] move SCF headers to SCF/{IR,Transforms} respectively
This aligns the SCF dialect file layout with the majority of the dialects.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D128049

#aef20f59 | 17-Jun-2022 | Aart Bik <[email protected]>
[mlir][sparse] move from by-value to by-reference for data types
This fixes all sorts of ABI issues due to passing by value (switching to passing by reference, with memrefs exclusively).
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D128018

#2a288616 | 16-Jun-2022 | Aart Bik <[email protected]>
[mlir][sparse] improved testing and codegen for semi-ring operations
The semi-ring blocks were simply "inlined" by the sparse compiler but without any filtering or patching. This revision improves the analysis (rejecting blocks that use non-invariant computations from outside their blocks, except for linalg.index) and also improves the codegen by properly patching up index computations (previous version crashed).
With a regression test. Also updated the documentation now that the example code is properly working.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D128000

Revision tags: llvmorg-14.0.5

#6232a8f3 | 01-Jun-2022 | Matthias Springer <[email protected]>
[mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.
Differential Revision: https://reviews.llvm.org/D126180

Revision tags: llvmorg-14.0.4

#5799f843 | 24-May-2022 | Aart Bik <[email protected]>
[mlir][sparse] add new complex ops to reduction recognition
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D126318

#e9fa5590 | 13-May-2022 | Matthias Springer <[email protected]>
[mlir][sparse][NFC] Use RewriterBase/OpBuilder when possible
Most functions do not need a PatternRewriter or ConversionPatternRewriter.
Differential Revision: https://reviews.llvm.org/D125466

#2617f2f7 | 03-May-2022 | Aart Bik <[email protected]>
[mlir][sparse] fix build issue with unused local under opt builds
Reviewed By: rdzhabarov
Differential Revision: https://reviews.llvm.org/D124883

#2c332660 | 03-May-2022 | Jim Kitchen <[email protected]>
[mlir][sparse] Add lowering for unary and binary ops
Adding lowering for Unary and Binary required several changes due to their unique nature of containing custom code for different "regions" of the sparse structure being operated on. Along with a Kind, a pointer to the Operation is passed along to be merged once the lattice structure is figured out.
The original operation is maintained, as it is required for subsequent lattice decisions. However, sparse_tensor.binary has some branches that are considered fully handled and are therefore marked as kBinaryBranch to distinguish them.
A unique aspect of the custom code is that sometimes the desired result is no result at all -- i.e. a user wants overlapping sparse entries to become empty in the output. The solution to this is to return an uninitialized Value(), which is checked and handled elsewhere in the code and results in nothing being written to the output tensor for that case.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D123057
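
A hedged sketch of the "no result at all" case mentioned above: leaving a region of `sparse_tensor.binary` empty means no value is produced, so nothing is written to the output for that case. The kernel, encodings, and names are invented, and the syntax follows the sparse_tensor documentation of that time (string `iterator_types`, `dimLevelType` encodings):

```mlir
#SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>
#trait = {
  indexing_maps = [
    affine_map<(i) -> (i)>,  // a
    affine_map<(i) -> (i)>,  // b
    affine_map<(i) -> (i)>   // x (out)
  ],
  iterator_types = ["parallel"]
}

// Intersection-like kernel: values are added only where both sparse vectors
// have a stored entry; the empty left/right regions mean "produce nothing",
// so no output entry is created for the disjoint parts.
func.func @vector_intersect(%a: tensor<?xf64, #SV>,
                            %b: tensor<?xf64, #SV>,
                            %x: tensor<?xf64, #SV>) -> tensor<?xf64, #SV> {
  %0 = linalg.generic #trait
      ins(%a, %b : tensor<?xf64, #SV>, tensor<?xf64, #SV>)
      outs(%x : tensor<?xf64, #SV>) {
  ^bb0(%va: f64, %vb: f64, %vx: f64):
    %r = sparse_tensor.binary %va, %vb : f64, f64 to f64
      overlap = {
        ^bb0(%p: f64, %q: f64):
          %sum = arith.addf %p, %q : f64
          sparse_tensor.yield %sum : f64
      }
      left = {}
      right = {}
    linalg.yield %r : f64
  } -> tensor<?xf64, #SV>
  return %0 : tensor<?xf64, #SV>
}
```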

Revision tags: llvmorg-14.0.3

#63015742 | 26-Apr-2022 | Javier Setoain <[email protected]>
[mlir][SparseTensor] Enable VLA ops in index value generation
Current index value generation uses fixed-length vector ops; this patch adds an alternative codegen path compatible with scalable vectors by using `LLVM::StepVectorOp`.
Differential Revision: https://reviews.llvm.org/D124454

Revision tags: llvmorg-14.0.2

#58ceae95 | 18-Apr-2022 | River Riddle <[email protected]>
[mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been moved to the `func` namespace for a little over a month; the using directive can be dropped now.

Revision tags: llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2

#7783a178 | 02-Dec-2021 | Javier Setoain <[email protected]>
[mlir][Sparse] Add option for VLA sparsification
Use "enable-vla-vectorization=vla" to generate a vector length agnostic loops during vectorization. This option works for vectorization strategy 2.
Differential Revision: https://reviews.llvm.org/D118379

#69a7759b | 18-Mar-2022 | Aart Bik <[email protected]>
[mlir][sparse] implement loop index value vectorization
with CHECK and integration test
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D122040