|
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init |
|
| #
c27d8152 |
| 14-Jul-2022 |
Kazu Hirata <[email protected]> |
[mlir] Use value instead of getValue (NFC)
|
| #
3b7c3a65 |
| 25-Jun-2022 |
Kazu Hirata <[email protected]> |
Revert "Don't use Optional::hasValue (NFC)"
This reverts commit aa8feeefd3ac6c78ee8f67bf033976fc7d68bc6d.
|
| #
aa8feeef |
| 25-Jun-2022 |
Kazu Hirata <[email protected]> |
Don't use Optional::hasValue (NFC)
|
|
Revision tags: llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1 |
|
| #
747b10be |
| 06-Apr-2022 |
Alexander Belyaev <[email protected]> |
Revert "Revert "[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse).""
This reverts commit 96e9b6c9dc60946f08399def879a19395bc98107.
|
| #
96e9b6c9 |
| 05-Apr-2022 |
Hanhan Wang <[email protected]> |
Revert "[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse)."
This reverts commit 64f659bee67b5a024defeb3cd2ecf65e1ad8c0a7.
An invalid tensor.expand_shape op is generated with the commit. To repro:
$ mlir-opt -canonicalize a.mlir
```
func @foo(%0: tensor<1x1xf32>, %1: tensor<1x1xf32>, %2: tensor<1x1xf32>) -> tensor<1x1xf32> {
  %cst = arith.constant 0.000000e+00 : f32
  %3 = linalg.init_tensor [8, 1] : tensor<8x1xf32>
  %4 = linalg.fill ins(%cst : f32) outs(%3 : tensor<8x1xf32>) -> tensor<8x1xf32>
  %5 = tensor.collapse_shape %0 [] : tensor<1x1xf32> into tensor<f32>
  %6 = tensor.insert_slice %5 into %4[0, 0] [1, 1] [1, 1] : tensor<f32> into tensor<8x1xf32>
  %7 = linalg.init_tensor [8, 1] : tensor<8x1xf32>
  %8 = linalg.fill ins(%cst : f32) outs(%7 : tensor<8x1xf32>) -> tensor<8x1xf32>
  %9 = tensor.collapse_shape %2 [] : tensor<1x1xf32> into tensor<f32>
  %10 = tensor.insert_slice %9 into %8[0, 0] [1, 1] [1, 1] : tensor<f32> into tensor<8x1xf32>
  %11 = tensor.collapse_shape %6 [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %12 = linalg.init_tensor [8] : tensor<8xf32>
  %13 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>],
                        iterator_types = ["parallel"]}
      ins(%11 : tensor<8xf32>) outs(%12 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %14 = tensor.expand_shape %13 [[0, 1, 2, 3]] : tensor<8xf32> into tensor<1x1x8x1xf32>
  %15 = tensor.collapse_shape %1 [] : tensor<1x1xf32> into tensor<f32>
  %16 = linalg.init_tensor [] : tensor<f32>
  %17 = linalg.generic {indexing_maps = [affine_map<() -> ()>, affine_map<() -> ()>],
                        iterator_types = []}
      ins(%15 : tensor<f32>) outs(%16 : tensor<f32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<f32>
  %18 = tensor.expand_shape %17 [] : tensor<f32> into tensor<1x1x1x1xf32>
  %19 = tensor.collapse_shape %10 [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %20 = linalg.init_tensor [8] : tensor<8xf32>
  %21 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>],
                        iterator_types = ["parallel"]}
      ins(%19 : tensor<8xf32>) outs(%20 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %22 = tensor.expand_shape %21 [[0, 1, 2, 3]] : tensor<8xf32> into tensor<1x1x8x1xf32>
  %23 = linalg.mmt4d {comment = "f32*f32->f32, aarch64, matrix*vector"}
      ins(%14, %18 : tensor<1x1x8x1xf32>, tensor<1x1x1x1xf32>)
      outs(%22 : tensor<1x1x8x1xf32>) -> tensor<1x1x8x1xf32>
  %24 = tensor.collapse_shape %23 [[0, 1, 2, 3]] : tensor<1x1x8x1xf32> into tensor<8xf32>
  %25 = linalg.init_tensor [8] : tensor<8xf32>
  %26 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>],
                        iterator_types = ["parallel"]}
      ins(%24 : tensor<8xf32>) outs(%25 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %27 = tensor.expand_shape %26 [[0, 1]] : tensor<8xf32> into tensor<8x1xf32>
  %28 = tensor.extract_slice %27[0, 0] [1, 1] [1, 1] : tensor<8x1xf32> to tensor<f32>
  %29 = tensor.expand_shape %28 [] : tensor<f32> into tensor<1x1xf32>
  return %29 : tensor<1x1xf32>
}
```
Differential Revision: https://reviews.llvm.org/D123161
|
| #
64f659be |
| 01-Apr-2022 |
Alexander Belyaev <[email protected]> |
[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse).
Differential Revision: https://reviews.llvm.org/D122666
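As a rough illustration of the pattern this canonicalization targets (hypothetical values and shapes, not taken from the patch), an expand_shape that is immediately undone by a collapse_shape with the same reassociation can fold away entirely:
```
%e = tensor.expand_shape %t [[0, 1]] : tensor<8xf32> into tensor<8x1xf32>
%c = tensor.collapse_shape %e [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
// -canonicalize can replace all uses of %c with %t
```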
|
| #
89d8035e |
| 18-Mar-2022 |
Benjamin Kramer <[email protected]> |
Use llvm::append_range where applicable
It knows the size, so no need to call reserve beforehand. NFCI.
|
|
Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2 |
|
| #
a43f7d6d |
| 14-Feb-2022 |
Stephan Herhut <[email protected]> |
[mlir][tensor] Extend reshape utils.
This change modifies the handling of trailing dimensions with unknown extent. Users of the getReassociationIndicesForReshape helper should see benefits when transforming reshape-like operations into expand/collapse pairs if the higher-rank type has trailing unknown dimensions.
The motivating example is a reshape from tensor<16x1x?xi32> to tensor<16xi32> that can be modeled as collapsing the three dimensions.
Differential Revision: https://reviews.llvm.org/D119730
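A sketch of the reassociation such a helper could produce for that case (illustrative types only; the collapsed dimension is kept dynamic here because the trailing extent is unknown):
```
// all three source dimensions are grouped into a single result dimension
%0 = tensor.collapse_shape %t [[0, 1, 2]] : tensor<16x1x?xi32> into tensor<?xi32>
```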
|
| #
1af15de6 |
| 17-Feb-2022 |
Benjamin Kramer <[email protected]> |
[mlir] Switch {collapse,expand}_shape ops to the declarative assembly format
Same functionality, a lot less code.
|
|
Revision tags: llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |
|
| #
ff5de8a9 |
| 04-Jan-2022 |
Benjamin Kramer <[email protected]> |
[linalg][fusion] Disallow fusion when it would create an invalid expand_shape
The input type of a linalg.generic can be less dynamic than its output type. If this is the case, moving a reshape across the generic op would create invalid IR, as expand_shape cannot expand arbitrary dynamic dimensions.
Check that the reshape is actually valid before creating the expand_shape. This exposes the existing verification logic in reshape utils and removes the incomplete custom implementation in fusion.
Differential Revision: https://reviews.llvm.org/D116600
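For context, a sketch of the kind of op the new check prevents (hypothetical types, not from the patch): splitting one dynamic extent into several dynamic extents leaves the individual sizes unrecoverable.
```
// invalid: neither result size can be inferred from the single '?' input
%0 = tensor.expand_shape %t [[0, 1]] : tensor<?xf32> into tensor<?x?xf32>
```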
|
| #
1fc096af |
| 02-Jan-2022 |
Mehdi Amini <[email protected]> |
Apply clang-tidy fixes for performance-unnecessary-value-param to MLIR (NFC)
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D116250
|
| #
89de9cc8 |
| 23-Dec-2021 |
Mehdi Amini <[email protected]> |
Apply clang-tidy fixes for performance-for-range-copy to MLIR (NFC)
Differential Revision: https://reviews.llvm.org/D116248
|
| #
8d474f1d |
| 29-Nov-2021 |
Benjamin Kramer <[email protected]> |
[mlir] Handle an edge case when folding reshapes with multiple trailing 1 dimensions
We would exit early and miss this case.
Differential Revision: https://reviews.llvm.org/D114711
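By way of illustration only (hypothetical shapes, not taken from the patch), a fold over a type with multiple trailing 1 dimensions looks like:
```
// the trailing 1x1 dims join the preceding dimension in one reassociation group
%0 = tensor.collapse_shape %t [[0], [1, 2, 3]] : tensor<?x4x1x1xf32> into tensor<?x4xf32>
```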
|
|
Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1 |
|
| #
8ed66cb8 |
| 28-Jul-2021 |
Yi Zhang <[email protected]> |
[mlir][memref] Fix collapsed shape ops memref.cast folding with changed type
`memref.collapse_shape` has verification logic to make sure the result dim is static if all the collapsing src dims are static. Cast folding might add more static information for the src operand of `memref.collapse_shape`, which might change a valid collapsing operation into an invalid one. Add `CollapseShapeOpMemRefCastFolder` pattern to fix this.
Minor changes to `convertReassociationIndicesToExprs` to use `context` instead of `builder` to avoid extra steps to construct temporary builders.
Reviewed By: nicolasvasilache, mravishankar
Differential Revision: https://reviews.llvm.org/D106670
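A sketch of the hazard (hypothetical shapes, not from the patch): naively folding the cast into the collapse makes every source dimension static while the result dimension stays dynamic, which the verifier rejects, so the folding pattern has to adjust the result type as well.
```
%c = memref.cast %m : memref<4x2xf32> to memref<?x?xf32>
%0 = memref.collapse_shape %c [[0, 1]] : memref<?x?xf32> into memref<?xf32>
// folding the cast away naively would produce
//   memref.collapse_shape %m [[0, 1]] : memref<4x2xf32> into memref<?xf32>
// which is invalid: all source dims are static but the result dim is not
```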
|
|
Revision tags: llvmorg-14-init |
|
| #
46ef86b5 |
| 16-Jul-2021 |
Alexander Belyaev <[email protected]> |
[mlir] Move linalg::Expand/CollapseShapeOp to memref dialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310
Differential Revision: https://reviews.llvm.org/D106141
|
| #
d6595278 |
| 07-Jul-2021 |
Alexander Belyaev <[email protected]> |
[mlir] Use indices instead of affine maps when composing 2 reshape ops.
https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310
Differential Revision: https://reviews.llvm.org/D105550
|
| #
6412a135 |
| 07-Jul-2021 |
Alexander Belyaev <[email protected]> |
[mlir] Move common reshapeops-related code to ReshapeOpsUtils.h.
This is a first step to move (Tensor)Expand/CollapseShapeOp to tensor/memref dialects.
Differential Revision: https://reviews.llvm.org/D105547
|