|
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init |
|
| #
c9fb3c6e |
| 01-Jul-2022 |
Nicolas Vasilache <[email protected]> |
[mlir][Tensor] Update ParallelInsertSliceOp semantics to match that of InsertSliceOp
This revision updates the op semantics to also allow rank-reducing behavior, and it updates the implementation to reuse code between the sequential and the parallel versions of the op.
Depends on D128920
Differential Revision: https://reviews.llvm.org/D128985
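A minimal sketch of the rank-reducing form this change allows, with invented value names and shapes; both ops are shown as fragments, and the parallel variant would normally sit in the terminator region of the enclosing parallel loop:
```mlir
// Rank-reducing sequential insertion (the unit dimension of the slice is
// dropped from the source type), as tensor.insert_slice already allowed:
%s = tensor.insert_slice %src into %dst[%i, 0] [1, 16] [1, 1]
    : tensor<16xf32> into tensor<4x16xf32>

// The same rank-reducing form is now accepted by the parallel variant:
tensor.parallel_insert_slice %src into %dst[%i, 0] [1, 16] [1, 1]
    : tensor<16xf32> into tensor<4x16xf32>
```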
|
| #
7fbf55c9 |
| 30-Jun-2022 |
Nicolas Vasilache <[email protected]> |
[mlir][Tensor] Move ParallelInsertSlice to the tensor dialect
This is mostly NFC and will allow tensor.parallel_insert_slice to gain rank-reducing semantics by reusing the vast majority of the tensor.insert_slice implementation.
Depends on D128857
Differential Revision: https://reviews.llvm.org/D128920
|
| #
741f8f2b |
| 29-Jun-2022 |
Nicolas Vasilache <[email protected]> |
[mlir][Tensor][NFC] Better document rank-reducing behavior of ExtractSliceOp and cleanup
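A short sketch of the rank-reducing behavior being documented, with invented shapes: a size-1 slice dimension may be dropped from the result type.
```mlir
// Rank-reducing extract: the slice has shape 1x16, and the unit dimension is
// dropped in the result type.
%0 = tensor.extract_slice %t[2, 0] [1, 16] [1, 1]
    : tensor<8x16xf32> to tensor<16xf32>
```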
|
| #
2d70eff8 |
| 27-Jun-2022 |
Jacques Pienaar <[email protected]> |
[mlir] Flip more uses to prefixed accessor form (NFC).
This tries to keep the final flip small. MemRef needs to be flipped as well, since there are many templated cases involving it and Tensor.
|
|
Revision tags: llvmorg-14.0.6 |
|
| #
e7d3ba10 |
| 21-Jun-2022 |
Aart Bik <[email protected]> |
[mlir][sparse] accept sparse reshape (expand/collapse)
This revision makes sure we accept sparse tensors as arguments of the expand/collapse reshaping operations in the tensor dialect. Note that the actual lowering to runnable IR is still TBD.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D128311
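A rough sketch of the kind of IR that now verifies, assuming the sparse encoding syntax of that time; the encodings, names, and shapes here are illustrative only:
```mlir
#SparseVector = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>
#SparseMatrix = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>

func.func @sparse_expand(%arg0: tensor<100xf64, #SparseVector>) -> tensor<10x10xf64, #SparseMatrix> {
  // The reshape now accepts sparse operands; lowering to runnable IR is still TBD.
  %0 = tensor.expand_shape %arg0 [[0, 1]]
      : tensor<100xf64, #SparseVector> into tensor<10x10xf64, #SparseMatrix>
  return %0 : tensor<10x10xf64, #SparseMatrix>
}
```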
|
| #
6d5fc1e3 |
| 21-Jun-2022 |
Kazu Hirata <[email protected]> |
[mlir] Don't use Optional::getValue (NFC)
|
| #
037f0995 |
| 20-Jun-2022 |
Kazu Hirata <[email protected]> |
[mlir] Don't use Optional::hasValue (NFC)
|
| #
eca86cb2 |
| 18-Jun-2022 |
Jacques Pienaar <[email protected]> |
[mlir] Start migrating more dialects to prefixed form
Marked all dialects that could be (reasonably) easily flipped to the _Both prefix. Updating the accessors to prefixed form will happen in a follow-up; this change was to flush out conflicts and to mark all dialects explicitly, as I plan to flip the OpBase default to _Prefixed to avoid needing to migrate new dialects.
Except for the Standalone example, which got flipped to _Prefixed.
Differential Revision: https://reviews.llvm.org/D128027
|
|
Revision tags: llvmorg-14.0.5, llvmorg-14.0.4 |
|
| #
f2676b15 |
| 19-May-2022 |
Thomas Raoux <[email protected]> |
[mlir][tensor] Add canonicalization for tensor.cast from extract_slice
Propagate static size information into extract_slice producer if possible.
Differential Revision: https://reviews.llvm.org/D125972
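An illustrative sketch of the new canonicalization, with invented shapes: static sizes from the cast result type are propagated into the extract_slice producer.
```mlir
%0 = tensor.extract_slice %t[0, 0] [%sz, 16] [1, 1]
    : tensor<64x64xf32> to tensor<?x16xf32>
%1 = tensor.cast %0 : tensor<?x16xf32> to tensor<8x16xf32>

// canonicalizes (roughly) to:
%2 = tensor.extract_slice %t[0, 0] [8, 16] [1, 1]
    : tensor<64x64xf32> to tensor<8x16xf32>
```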
|
| #
f21896f2 |
| 12-May-2022 |
Chris Lattner <[email protected]> |
[DenseElementAttr] Simplify the public API for creating these.
Instead of requiring the client to compute the "isSplat" bit, compute it internally. This makes the logic more consistent and defines away a lot of "elements.size()==1" in the clients.
This addresses Issue #55185
Differential Revision: https://reviews.llvm.org/D125447
|
|
Revision tags: llvmorg-14.0.3, llvmorg-14.0.2 |
|
| #
eda6f907 |
| 22-Apr-2022 |
River Riddle <[email protected]> |
[mlir][NFC] Shift a bunch of dialect includes from the .h to the .cpp
Now that dialect constructors are generated in the .cpp file, we can drop all of the dependent dialect includes from the .h file.
Differential Revision: https://reviews.llvm.org/D124298
|
| #
8544523d |
| 20-Apr-2022 |
Matthias Springer <[email protected]> |
[mlir][tensor] Promote extract(from_elements(...)) to folding pattern
Differential Revision: https://reviews.llvm.org/D123617
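A minimal sketch of the promoted fold, with invented value names: extracting a statically known position from tensor.from_elements yields the corresponding scalar directly.
```mlir
%t = tensor.from_elements %a, %b : tensor<2xf32>
%c1 = arith.constant 1 : index
%e = tensor.extract %t[%c1] : tensor<2xf32>   // folds to %b
```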
|
|
Revision tags: llvmorg-14.0.1 |
|
| #
973dbe20 |
| 11-Apr-2022 |
gysit <[email protected]> |
[mlir][tensor] Add pattern to fold ExtractSliceOp, PadOp chains.
The pattern folds chains of tensor::ExtractSliceOp, tensor::PadOp pairs if they pad different dimensions. Repeated tiling and padding of the tiled dimensions may introduce such chains. This canonicalization pattern folds these chains to a single tensor::ExtractSliceOp, tensor::PadOp pair that pads all dimensions at once, which simplifies vectorization and bufferization.
Example:
```mlir
%0 = tensor.extract_slice %input[16, 0] [%sz0, 64] [1, 1]
    : tensor<64x64xf32> to tensor<?x64xf32>
%1 = tensor.pad %0 low[0, 0] high[%pw0, 0] { ... }
    : tensor<?x64xf32> to tensor<8x64xf32>
%2 = tensor.extract_slice %1[0, 4] [8, %sz1] [1, 1]
    : tensor<8x64xf32> to tensor<8x?xf32>
%res = tensor.pad %2 nofold low[0, 0] high[0, %pw1] { ... }
    : tensor<8x?xf32> to tensor<8x4xf32>
```
folds into:
```mlir
%0 = tensor.extract_slice %input[16, 4] [%sz0, %sz1] [1, 1]
    : tensor<64x64xf32> to tensor<?x?xf32>
%res = tensor.pad %0 nofold low[0, 0] high[%pw0, %pw1] { ... }
    : tensor<?x?xf32> to tensor<8x4xf32>
```
Reviewed By: nicolasvasilache, hanchung
Differential Revision: https://reviews.llvm.org/D122722
|
| #
747b10be |
| 06-Apr-2022 |
Alexander Belyaev <[email protected]> |
Revert "Revert "[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse).""
This reverts commit 96e9b6c9dc60946f08399def879a19395bc98107.
|
| #
96e9b6c9 |
| 05-Apr-2022 |
Hanhan Wang <[email protected]> |
Revert "[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse)."
This reverts commit 64f659bee67b5a024defeb3cd2ecf65e1ad8c0a7.
An invalid tensor.expand_shape op is generated with the commit. To repro:
$ mlir-opt -canonicalize a.mlir
```
func @foo(%0: tensor<1x1xf32>, %1: tensor<1x1xf32>, %2: tensor<1x1xf32>) -> tensor<1x1xf32> {
  %cst = arith.constant 0.000000e+00 : f32
  %3 = linalg.init_tensor [8, 1] : tensor<8x1xf32>
  %4 = linalg.fill ins(%cst : f32) outs(%3 : tensor<8x1xf32>) -> tensor<8x1xf32>
  %5 = tensor.collapse_shape %0 [] : tensor<1x1xf32> into tensor<f32>
  %6 = tensor.insert_slice %5 into %4[0, 0] [1, 1] [1, 1] : tensor<f32> into tensor<8x1xf32>
  %7 = linalg.init_tensor [8, 1] : tensor<8x1xf32>
  %8 = linalg.fill ins(%cst : f32) outs(%7 : tensor<8x1xf32>) -> tensor<8x1xf32>
  %9 = tensor.collapse_shape %2 [] : tensor<1x1xf32> into tensor<f32>
  %10 = tensor.insert_slice %9 into %8[0, 0] [1, 1] [1, 1] : tensor<f32> into tensor<8x1xf32>
  %11 = tensor.collapse_shape %6 [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %12 = linalg.init_tensor [8] : tensor<8xf32>
  %13 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%11 : tensor<8xf32>) outs(%12 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %14 = tensor.expand_shape %13 [[0, 1, 2, 3]] : tensor<8xf32> into tensor<1x1x8x1xf32>
  %15 = tensor.collapse_shape %1 [] : tensor<1x1xf32> into tensor<f32>
  %16 = linalg.init_tensor [] : tensor<f32>
  %17 = linalg.generic {indexing_maps = [affine_map<() -> ()>, affine_map<() -> ()>], iterator_types = []} ins(%15 : tensor<f32>) outs(%16 : tensor<f32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<f32>
  %18 = tensor.expand_shape %17 [] : tensor<f32> into tensor<1x1x1x1xf32>
  %19 = tensor.collapse_shape %10 [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %20 = linalg.init_tensor [8] : tensor<8xf32>
  %21 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%19 : tensor<8xf32>) outs(%20 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %22 = tensor.expand_shape %21 [[0, 1, 2, 3]] : tensor<8xf32> into tensor<1x1x8x1xf32>
  %23 = linalg.mmt4d {comment = "f32*f32->f32, aarch64, matrix*vector"} ins(%14, %18 : tensor<1x1x8x1xf32>, tensor<1x1x1x1xf32>) outs(%22 : tensor<1x1x8x1xf32>) -> tensor<1x1x8x1xf32>
  %24 = tensor.collapse_shape %23 [[0, 1, 2, 3]] : tensor<1x1x8x1xf32> into tensor<8xf32>
  %25 = linalg.init_tensor [8] : tensor<8xf32>
  %26 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%24 : tensor<8xf32>) outs(%25 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %27 = tensor.expand_shape %26 [[0, 1]] : tensor<8xf32> into tensor<8x1xf32>
  %28 = tensor.extract_slice %27[0, 0] [1, 1] [1, 1] : tensor<8x1xf32> to tensor<f32>
  %29 = tensor.expand_shape %28 [] : tensor<f32> into tensor<1x1xf32>
  return %29 : tensor<1x1xf32>
}
```
Differential Revision: https://reviews.llvm.org/D123161
|
| #
64f659be |
| 01-Apr-2022 |
Alexander Belyaev <[email protected]> |
[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse).
Differential Revision: https://reviews.llvm.org/D122666
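A minimal sketch (invented shapes) of the simplest composition this rewrite targets: an expand followed by a collapse over the same reassociation folds back to the source value.
```mlir
%e = tensor.expand_shape %t [[0, 1]] : tensor<8xf32> into tensor<2x4xf32>
%c = tensor.collapse_shape %e [[0, 1]] : tensor<2x4xf32> into tensor<8xf32>
// canonicalizes to just %t
```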
|
| #
e13d23bc |
| 21-Mar-2022 |
Markus Böck <[email protected]> |
[mlir] Rename `OpAsmParser::OperandType` to `OpAsmParser::UnresolvedOperand`
I am not sure about the meaning of Type in the name (was it meant to be interpreted as Kind?), and given the importance and meaning of Type in the context of MLIR, it's probably better to rename it. Given the comment in the source code, the suggestion in the GitHub issue, and the final discussions in the review, this patch renames OperandType to UnresolvedOperand.
Fixes https://github.com/llvm/llvm-project/issues/54446
Differential Revision: https://reviews.llvm.org/D122142
|
| #
b4d08dfd |
| 18-Mar-2022 |
Thomas Raoux <[email protected]> |
[mlir] Remove incorrect builders for ExpandShapeOp
The ExpandShapeOp builder cannot infer the result type, since it doesn't know how each dimension needs to be split. Remove this builder so that it doesn't get used accidentally. Also remove one potential path using it in generic fusion.
Differential Revision: https://reviews.llvm.org/D122019
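A short sketch (invented shapes) of why the result type cannot be inferred: the same source and reassociation admit several valid result types, so callers must spell the type out explicitly.
```mlir
%a = tensor.expand_shape %t [[0, 1]] : tensor<8xf32> into tensor<2x4xf32>
%b = tensor.expand_shape %t [[0, 1]] : tensor<8xf32> into tensor<4x2xf32>
```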
|
| #
fdb41a22 |
| 16-Mar-2022 |
Matthias Springer <[email protected]> |
[mlir][tensor] Implement ReifyRankedShapedTypeOpInterface on GenerateOp
Differential Revision: https://reviews.llvm.org/D121520
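For reference, a hypothetical tensor.generate op (invented shapes); the interface lets patterns reify its result extents (%d and the static 4) without materializing the tensor.
```mlir
%g = tensor.generate %d {
^bb0(%i: index, %j: index):
  %v = arith.constant 1.0 : f32
  tensor.yield %v : f32
} : tensor<?x4xf32>
```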
|
|
Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4 |
|
| #
ed645f63 |
| 10-Mar-2022 |
Chia-hung Duan <[email protected]> |
[mlir] Support verification order (3/3)
In this CL, update the verifier function names according to their behavior: if a verifier needs to access the region, it is renamed to `verifyRegions`.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D120373
|
|
Revision tags: llvmorg-14.0.0-rc3 |
|
| #
589eac65 |
| 08-Mar-2022 |
Mahesh Ravishankar <[email protected]> |
[mlir] Add canonicalizations for op -> tensor.cast folding.
A `tensor.cast` consumer can be folded with its producer. This is beneficial only if the result of the tensor cast is more static than the source. This patch adds a utility function to check that this is the case, and adds a couple of canonicalization patterns that fold an operation with `tensor.cast` consumers.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D120950
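A sketch of the "more static" condition the new utility checks, with invented types: folding into the producer is only worthwhile when the cast refines the type.
```mlir
%0 = tensor.cast %a : tensor<?x16xf32> to tensor<8x16xf32>   // adds static information: foldable into the producer of %a
%1 = tensor.cast %b : tensor<8x16xf32> to tensor<?x16xf32>   // loses information: left alone
```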
|
|
Revision tags: llvmorg-14.0.0-rc2 |
|
| #
4c901bf4 |
| 22-Feb-2022 |
Okwan Kwon <[email protected]> |
[mlir] Match Arithmetic::ConstantOp and Tensor::ExtractSliceOp.
Add a pattern matcher for ExtractSliceOp when its source is a constant.
The matching heuristics can be governed by the control function since generating a new constant is not always beneficial.
Differential Revision: https://reviews.llvm.org/D119605
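A hypothetical sketch of the pattern's effect (values invented): an extract_slice whose source is a constant may be rewritten as a smaller constant, subject to the control function.
```mlir
%cst = arith.constant dense<[[0, 1, 2, 3], [4, 5, 6, 7]]> : tensor<2x4xi32>
%0 = tensor.extract_slice %cst[0, 1] [1, 2] [1, 1] : tensor<2x4xi32> to tensor<1x2xi32>

// may be rewritten to:
%1 = arith.constant dense<[[1, 2]]> : tensor<1x2xi32>
```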
|
| #
4f5eb53e |
| 28-Feb-2022 |
Okwan Kwon <[email protected]> |
Revert "[mlir] Fold Arithmetic::ConstantOp and Tensor::ExtractSliceOp."
This reverts commit 3104994104f0c2f274acf5e01eb6cc82e9cca06b.
|
| #
31049941 |
| 22-Feb-2022 |
Okwan Kwon <[email protected]> |
[mlir] Fold Arithmetic::ConstantOp and Tensor::ExtractSliceOp.
Fold ExtractSliceOp when the source is a constant.
|
| #
f79f430d |
| 18-Feb-2022 |
Okwan Kwon <[email protected]> |
Fold Tensor.extract_slice into a constant splat.
Fold tensor.extract_slice into arith.constant when the source is a constant splat and the result type is statically shaped.
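A minimal sketch of this fold (values invented): a slice of a splat constant becomes a smaller splat constant.
```mlir
%cst = arith.constant dense<4.2> : tensor<16x16xf32>
%0 = tensor.extract_slice %cst[0, 0] [4, 4] [1, 1] : tensor<16x16xf32> to tensor<4x4xf32>
// folds to:
%1 = arith.constant dense<4.2> : tensor<4x4xf32>
```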
|