Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init

# d2c0572b | 19-Jul-2022 | Jacques Pienaar <[email protected]>

[mlir] Flip LinAlg dialect to _Both

This one required more changes than ideal due to a generated name overlapping across different return types. Renamed getIndexingMaps to getIndexingMapsArray to move it out of the way and to highlight that it returns (more expensively) a SmallVector; the prefixed name is kept for the Attribute accessor.

Differential Revision: https://reviews.llvm.org/D129919

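For context, the renamed accessors read the `indexing_maps` attribute carried by structured ops. A minimal sketch of where that attribute appears (illustrative shapes, 2022-era syntax; not taken from the commit itself):

```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @double(%arg0: tensor<4x8xf32>, %init: tensor<4x8xf32>) -> tensor<4x8xf32> {
  // getIndexingMaps returns the indexing_maps ArrayAttr below, while
  // getIndexingMapsArray materializes it as a SmallVector<AffineMap>.
  %0 = linalg.generic
      {indexing_maps = [#map, #map],
       iterator_types = ["parallel", "parallel"]}
      ins(%arg0 : tensor<4x8xf32>) outs(%init : tensor<4x8xf32>) {
  ^bb0(%in: f32, %out: f32):
    %sum = arith.addf %in, %in : f32
    linalg.yield %sum : f32
  } -> tensor<4x8xf32>
  return %0 : tensor<4x8xf32>
}
```
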
# 28ebb0b6 | 15-Jul-2022 | Aart Bik <[email protected]>

[mlir][sparse] migrate sparse rewriting to sparse transformations pass

The rules in the linalg file were very specific to sparse tensors, so they find a better home under the sparse tensor dialect than the linalg dialect. Also moved some rewriting from sparsification into this new "pre-rewriting" file.

Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D129910

# 04235d07 | 28-Jun-2022 | Jacques Pienaar <[email protected]>

[mlir] Update flipped accessors (NFC)

Follow-up with memref flipped, flipping any intermediate changes made.

Revision tags: llvmorg-14.0.6

# 6d5fc1e3 | 21-Jun-2022 | Kazu Hirata <[email protected]>

[mlir] Don't use Optional::getValue (NFC)

Revision tags: llvmorg-14.0.5

# 1ad9b266 | 25-May-2022 | lorenzo chelini <[email protected]>

[MLIR][Linalg] Adjust documentation (NFC)

Adjust docs for tensor.pad, tensor.collapse_shape and tensor.expand_shape.

Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126370

Revision tags: llvmorg-14.0.4

# 919e459f | 03-May-2022 | Hanhan Wang <[email protected]>

[Linalg] Remove Optional from getStaticLoopRanges interface method.

It is very wrong if the ranges can't be inferred. This is also checked in verifyStructuredOpInterface, so we don't need the Optional return type.

Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D124596

Revision tags: llvmorg-14.0.3, llvmorg-14.0.2

# 0c090dcc | 21-Apr-2022 | Mahesh Ravishankar <[email protected]>

[mlir][Linalg] Deprecate legacy reshape + generic op folding patterns.

These patterns have been superseded by the fusion-by-collapsing patterns.

Differential Revision: https://reviews.llvm.org/D124145

# 6120bd47 | 16-Apr-2022 | Mehdi Amini <[email protected]>

Apply clang-tidy fixes for performance-for-range-copy in ElementwiseOpFusion.cpp (NFC)

# b40e9013 | 12-Apr-2022 | Mahesh Ravishankar <[email protected]>

[mlir][Linalg] Allow collapsing subset of the reassociations when fusing by collapsing.

This change generalizes the fusion of `tensor.expand_shape` -> `linalg.generic` op by collapsing to handle cases where only a subset of the reassociations specified in the `tensor.expand_shape` are valid to be collapsed. The method that does the collapsing is refactored to allow it to be a generic utility when required.

Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D123153

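A hedged sketch, with illustrative shapes and 2022-era syntax, of the kind of producer this generalization targets: the `tensor.expand_shape` below splits dimension 0, and fusion by collapsing only needs that `[0, 1]` reassociation group to be collapsible inside the consuming generic.

```mlir
func.func @subset(%a: tensor<8x32xf32>) -> tensor<2x4x32xf32> {
  // Splits dim 0 (8 -> 2x4) and keeps dim 1. Fusing by collapsing may
  // legally undo the [0, 1] group in the consumer even when other groups
  // (in larger examples) must stay expanded.
  %e = tensor.expand_shape %a [[0, 1], [2]]
      : tensor<8x32xf32> into tensor<2x4x32xf32>
  return %e : tensor<2x4x32xf32>
}
```
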
Revision tags: llvmorg-14.0.1

# 2291705d | 11-Apr-2022 | Mahesh Ravishankar <[email protected]>

[mlir][Linalg] Split `populateElementwiseOpsFusionPatterns`.

The method to add elementwise op fusion patterns pulls in many other patterns by default. Which patterns to pull in along with elementwise op fusion should be up to the caller. Split the method to pull in just the elementwise op fusion patterns. Other cleanup changes include:
- Move the pattern for constant folding of generic ops (currently only constant-folds transpose) into a separate file, because it is not related to fusion.
- Drop the uber LinalgElementwiseFusionOptions. With populateElementwiseOpsFusionPatterns being split, it has no utility now.
- Drop defaults for the control function.
- Fusion of splat constants with generic ops doesn't need a control function; it is always good to do.

Differential Revision: https://reviews.llvm.org/D123236

# 01055ed1 | 31-Mar-2022 | Nirvedh <[email protected]>

[mlir][linalg] Move linalg.fill folding into linalg.generic pattern from canonicalization to elementwise fusion

Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D122847

# b4d08dfd | 18-Mar-2022 | Thomas Raoux <[email protected]>

[mlir] Remove incorrect builders for ExpandShapeOp

The ExpandShapeOp builder cannot infer the result type, since it doesn't know how each dimension needs to be split. Remove this builder so that it doesn't get used accidentally. Also remove one potential path using it in generic fusion.

Differential Revision: https://reviews.llvm.org/D122019

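A sketch of why the removed builder could not infer a result type: the same source type admits several expansions, so the operand alone does not determine the result (shapes illustrative, 2022-era syntax).

```mlir
func.func @ambiguous(%src: tensor<16xf32>) -> (tensor<2x8xf32>, tensor<4x4xf32>) {
  // Both are valid expansions of tensor<16xf32>; nothing about the source
  // determines whether dim 0 splits as 2x8 or 4x4.
  %0 = tensor.expand_shape %src [[0, 1]] : tensor<16xf32> into tensor<2x8xf32>
  %1 = tensor.expand_shape %src [[0, 1]] : tensor<16xf32> into tensor<4x4xf32>
  return %0, %1 : tensor<2x8xf32>, tensor<4x4xf32>
}
```
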
Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2

# a91ade0b | 28-Feb-2022 | Adrian Kuegel <[email protected]>

[mlir] Apply ClangTidy performance fixes (NFC)

# 652b39b4 | 24-Feb-2022 | Aart Bik <[email protected]>

[mlir][sparse][linalg] add linalg rewriting specific to sparse tensors

Now that sparse tensor types are first-class citizens and the sparse compiler is taking shape, it is time to make sure other compiler optimizations compose well with sparse tensors. Mostly, this should be completely transparent (i.e., dense and sparse take the same path). However, in some cases, optimizations only make sense in the context of sparse tensors. This is a first example of such an optimization, where fusing a sampled elementwise multiplication only makes sense when the resulting kernel has a potentially lower asymptotic complexity due to the sparsity.

As an extreme example, running SDDMM with 1024x1024 matrices and a sparse sampling matrix with only two elements runs in 463.55ms in the unfused case but just 0.032ms in the fused case, a speedup of 14485x that is only possible in the exciting world of sparse computations!

Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D120429

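For reference, SDDMM (sampled dense-dense matrix multiplication) computes the following, where S is the sparse sampling matrix; informally, the fused kernel only evaluates dot products at the nonzero positions of S, which is what makes the asymptotic win described above possible:

```latex
% C inherits the sparsity of S: only positions with S_{ij} != 0 are computed,
% so the fused cost scales with nnz(S) rather than with the full matmul.
C_{ij} = S_{ij} \sum_{k} A_{ik} B_{kj}
```
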
# 515c6170 | 16-Feb-2022 | Aart Bik <[email protected]>

[mlir][linalg][sparse] add linalg optimization passes "upstream"

It is time to compose Linalg-related optimizations with SparseTensor-related optimizations. This is a careful first start, adding some general Linalg optimizations "upstream" of the sparse compiler in the full sparse compiler pipeline. Some minor changes were needed to make those optimizations aware of sparsity.

Note that after this, we will add a sparse-specific fusion rule, just to demonstrate the power of the new composition.

Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119971

Revision tags: llvmorg-14.0.0-rc1

# 2c58cde0 | 07-Feb-2022 | Mahesh Ravishankar <[email protected]>

[mlir][Linalg] Add pattern for folding reshape by collapsing.

Fusion of `linalg.generic` with `tensor.expand_shape/tensor.collapse_shape` currently handles fusion with reshape by expanding the dimensionality of the `linalg.generic` operation. This helps fuse elementwise operations better, since they are fused at the highest dimensionality while keeping all indexing maps involved projected permutations. The intent of these patterns is to push the reshape to the boundaries of functions.

The presence of named ops (or other ops across which the reshape cannot be propagated) stops the propagation to the edges of the function. At this stage, converse patterns that fold the reshapes with generic ops by collapsing the dimensions of the generic op can push the reshape towards the edges. In particular, this helps the case where reshapes exist between named ops and generic ops (see the sketch after this entry):

`linalg.named_op` -> `tensor.expand_shape` -> `linalg.generic`

Pushing the reshape down helps fusion of `linalg.named_op` -> `linalg.generic` using tile + fuse transformations.

This pattern is intended to replace the following patterns:
1) FoldReshapeByLinearization: These patterns create indexing maps that are not projected permutations, which affects future transformations. They are only useful for folding unit dimensions.
2) PushReshapeByExpansion: This pattern has the same functionality but with some restrictions: a) it tries to avoid creating new reshapes, which limits its applicability (the pattern added here can achieve the same functionality through the `controlFn`, which gives clients of the pattern the freedom to make this decision); b) it does not work for ops with indexing semantics.

These patterns will be deprecated in a future patch.

Differential Revision: https://reviews.llvm.org/D119365

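A hedged sketch of the motivating chain above (illustrative shapes and names, not taken from the patch): collapsing the generic back to the matmul's rank turns the chain into `linalg.matmul` -> `linalg.generic`, which tile + fuse can handle.

```mlir
#map = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
func.func @chain(%a: tensor<8x16xf32>, %b: tensor<16x32xf32>,
                 %acc: tensor<8x32xf32>, %init: tensor<2x4x32xf32>)
    -> tensor<2x4x32xf32> {
  // linalg.named_op -> tensor.expand_shape -> linalg.generic
  %m = linalg.matmul ins(%a, %b : tensor<8x16xf32>, tensor<16x32xf32>)
                     outs(%acc : tensor<8x32xf32>) -> tensor<8x32xf32>
  %e = tensor.expand_shape %m [[0, 1], [2]]
      : tensor<8x32xf32> into tensor<2x4x32xf32>
  %g = linalg.generic
      {indexing_maps = [#map, #map],
       iterator_types = ["parallel", "parallel", "parallel"]}
      ins(%e : tensor<2x4x32xf32>) outs(%init : tensor<2x4x32xf32>) {
  ^bb0(%in: f32, %out: f32):
    %r = arith.negf %in : f32
    linalg.yield %r : f32
  } -> tensor<2x4x32xf32>
  return %g : tensor<2x4x32xf32>
}
```
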
# 32288d37 | 03-Feb-2022 | Mahesh Ravishankar <[email protected]>

[mlir][Linalg] NFC: Refactor methods in `ElementwiseOpFusion`.

Reorder the methods and patterns to move related patterns/methods closer (textually).

Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D118870

Revision tags: llvmorg-15-init

# 8e123ca6 | 31-Jan-2022 | River Riddle <[email protected]>

[mlir:Standard] Remove support for creating a `unit` ConstantOp

This is completely unused upstream and does not really have well-defined semantics, either in what it is supposed to do or in how it fits into the ecosystem. Given that, as part of splitting up the standard dialect, it's best to just remove this behavior instead of trying to awkwardly fit it somewhere upstream. Downstream users are encouraged to define their own operations that can clearly define its semantics.

This also uncovered several lingering uses of ConstantOp that weren't updated to use arith::ConstantOp, and that worked during conversions only because the constant was removed/converted into something else before verification.

See https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for more discussion.

Differential Revision: https://reviews.llvm.org/D118654

# d10d49dc | 26-Jan-2022 | River Riddle <[email protected]>

[mlir][NFC] Add a using for llvm::BitVector to LLVM.h

BitVector is becoming widespread enough that we should add a proper using.

Differential Revision: https://reviews.llvm.org/D118290

# ea1ac183 | 25-Jan-2022 | MaheshRavishankar <[email protected]>

[mlir][Linalg] Fix incorrect fusion with reshape ops by linearization.

Fusion of reshape ops by linearization incorrectly inverted the indexing map before linearizing dimensions. This led to incorrect indexing maps being used in the fused operation.

Differential Revision: https://reviews.llvm.org/D117908

# e5a315f5 | 25-Jan-2022 | MaheshRavishankar <[email protected]>

[mlir][Linalg] Disallow ops with index semantics in `PushExpandingReshape`.

This pattern is not written to handle operations with `linalg.index` operations in their body, i.e. operations that have index semantics.

Differential Revision: https://reviews.llvm.org/D117856

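For context, "index semantics" means the body observes the iteration indices through `linalg.index`, so reshaping the iteration space would change what those indices mean. A minimal illustrative sketch (shapes and names are not from the patch):

```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @indexed(%init: tensor<4x8xf32>) -> tensor<4x8xf32> {
  %0 = linalg.generic
      {indexing_maps = [#map],
       iterator_types = ["parallel", "parallel"]}
      outs(%init : tensor<4x8xf32>) {
  ^bb0(%out: f32):
    // The body observes the loop indices, so expanding or collapsing the
    // iteration space is not a transparent transformation here.
    %i = linalg.index 0 : index
    %ii = arith.index_cast %i : index to i32
    %f = arith.sitofp %ii : i32 to f32
    linalg.yield %f : f32
  } -> tensor<4x8xf32>
  return %0 : tensor<4x8xf32>
}
```
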
# a99e06aa | 21-Jan-2022 | MaheshRavishankar <[email protected]>

[mlir][Linalg] Avoid generating illegal operations during elementwise fusion.

In some cases, fusion can produce illegal operations if, after fusion, the ranges of some of the loops cannot be computed from the shapes of the operands. Check for this case and abort the fusion if it happens.

Differential Revision: https://reviews.llvm.org/D117602

Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3

# e084679f | 19-Jan-2022 | River Riddle <[email protected]>

[mlir] Make locations required when adding/creating block arguments

BlockArguments gained the ability to have locations attached a while ago, but they have always been optional. This goes against the core tenet of MLIR that location information is a requirement, so this commit updates the API to require locations.

Fixes #53279

Differential Revision: https://reviews.llvm.org/D117633

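With locations required, textual IR printed with `--mlir-print-debuginfo` shows a location on every block argument, roughly like this (file name and positions illustrative):

```mlir
// Each entry-block argument now carries a loc(...) rather than an
// optional, possibly-missing location.
func.func @f(%arg0: f32 loc("example.mlir":3:12)) -> f32 {
  return %arg0 : f32
}
```
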
Revision tags: llvmorg-13.0.1-rc2

# ff5de8a9 | 04-Jan-2022 | Benjamin Kramer <[email protected]>

[linalg][fusion] Disallow fusion when it would create an invalid expand_shape

The input type of a linalg.generic can be less dynamic than its output type. If this is the case, moving a reshape across the generic op would create invalid IR, as expand_shape cannot expand arbitrary dynamic dimensions.

Check that the reshape is actually valid before creating the expand_shape. This exposes the existing verification logic in the reshape utils and removes the incomplete custom implementation in fusion.

Differential Revision: https://reviews.llvm.org/D116600

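A sketch of the constraint being enforced (per the `tensor.expand_shape` verifier at the time; 2022-era syntax): a reassociation group may introduce at most one dynamic extent, because splitting one unknown extent into several is unrecoverable.

```mlir
func.func @split(%t: tensor<?xf32>) -> tensor<?x4xf32> {
  // OK: only one dynamic extent in the [0, 1] group.
  %ok = tensor.expand_shape %t [[0, 1]] : tensor<?xf32> into tensor<?x4xf32>
  // Rejected by the verifier: one unknown extent cannot be split into two
  // unknown extents.
  // %bad = tensor.expand_shape %t [[0, 1]] : tensor<?xf32> into tensor<?x?xf32>
  return %ok : tensor<?x4xf32>
}
```
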
# ba19fa57 | 08-Jan-2022 | Mehdi Amini <[email protected]>

Apply clang-tidy fixes for performance-for-range-copy in ElementwiseOpFusion.cpp (NFC)