Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3

# 36550692 | 08-Mar-2022 | River Riddle <[email protected]>
[mlir] Move the Builtin FuncOp to the Func dialect
This commit moves FuncOp out of the builtin dialect and into the Func dialect. This move has been planned in some capacity from the moment we made FuncOp an operation (years ago). This commit handles the functional aspects of the move, but various aspects are left untouched to ease migration: func::FuncOp is re-exported into the mlir namespace to reduce the actual API churn, and the assembly format still accepts the unqualified `func`. These temporary measures will remain for a little while to simplify migration before being removed.
Differential Revision: https://reviews.llvm.org/D121266

# 7294be2b | 14-Mar-2022 | gysit <[email protected]>
[mlir][linalg] Replace linalg.fill by OpDSL variant.
The revision removes the linalg.fill operation and renames the OpDSL-generated linalg.fill_tensor operation to replace it. After the change, all named structured operations are defined via OpDSL and there are no handwritten operations left.
A side effect of the change is that the pretty-printed form changes from:

```
%1 = linalg.fill(%cst, %0) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
```

to:

```
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x?xf32>) -> tensor<?x?xf32>
```

Additionally, the builder signature now takes input and output value ranges, as is the case for all other OpDSL operations:

```
rewriter.create<linalg::FillOp>(loc, val, output)
```

changes to:

```
rewriter.create<linalg::FillOp>(loc, ValueRange{val}, ValueRange{output})
```

All other changes remain minimal. In particular, the canonicalization patterns are the same, and the `value()`, `output()`, and `result()` methods are now implemented by the FillOpInterface.
Depends On D120726
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D120728

Revision tags: llvmorg-14.0.0-rc2

# 23aa5a74 | 26-Feb-2022 | River Riddle <[email protected]>
[mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around FuncOp/function-related constructs. This patch simply handles the initial renaming (which by itself is already huge), but there are a large number of cleanups unlocked/necessary afterwards:
* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect
See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061
Differential Revision: https://reviews.llvm.org/D120624

# e9085d0d | 01-Mar-2022 | gysit <[email protected]>
[mlir][OpDSL] Rename function to make signedness explicit (NFC).
The revision renames the following OpDSL functions:

```
TypeFn.cast  -> TypeFn.cast_signed
BinaryFn.min -> BinaryFn.min_signed
BinaryFn.max -> BinaryFn.max_signed
```

The corresponding enum values on the C++ side are renamed accordingly:

```
#linalg.type_fn<cast>   -> #linalg.type_fn<cast_signed>
#linalg.binary_fn<min>  -> #linalg.binary_fn<min_signed>
#linalg.binary_fn<max>  -> #linalg.binary_fn<max_signed>
```
Depends On D120110
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D120562

# 24357fec | 01-Mar-2022 | gysit <[email protected]>
[mlir][OpDSL] Add arithmetic function attributes.
The revision extends OpDSL with unary and binary function attributes. A function attribute makes the operations used in the body of a structured operation configurable. For example, a pooling operation may take an aggregation function attribute that specifies whether the op shall implement min or max pooling. The goal of this revision is to define fewer and more flexible operations.
We may thus, for example, define an elementwise op:

```
linalg.elem(lhs, rhs, outs=[out], op=BinaryFn.mul)
```

If the `op` argument is not set, the default operation is used.
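To illustrate the definition side, here is a hypothetical OpDSL sketch of such a configurable elementwise operation. The `elem` name, the shapes, and the `BinaryFnAttrDef` spelling (assumed by analogy with `TypeFnAttrDef` from D119718) are illustrative assumptions, not part of this commit:

```
# Hypothetical sketch: an elementwise op whose body operation is selected
# through a binary function attribute. Names and shapes are illustrative only.
from mlir.dialects.linalg.opdsl.lang import *

@linalg_structured_op
def elem(lhs=TensorDef(T1, S.M, S.N),
         rhs=TensorDef(T2, S.M, S.N),
         out=TensorDef(U, S.M, S.N, output=True),
         op=BinaryFnAttrDef(default=BinaryFn.add)):
  # Instantiating the op with op=BinaryFn.mul turns the default addition
  # into a multiplication; the cast matches the output element type.
  out[D.m, D.n] = op(TypeFn.cast(U, lhs[D.m, D.n]), TypeFn.cast(U, rhs[D.m, D.n]))
```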
Depends On D120109
Reviewed By: nicolasvasilache, aartbik
Differential Revision: https://reviews.llvm.org/D120110

# 51fdd802 | 25-Feb-2022 | gysit <[email protected]>
[mlir][OpDSL] Add type function attributes.
Previously, OpDSL operations used hardcoded type conversion operations (cast or cast_unsigned). Supporting signed and unsigned casts thus meant implementing two different operations. Type function attributes allow us to define a single operation that has a cast type function attribute which, at operation instantiation time, may be set to cast or cast_unsigned. We may, for example, define a matmul operation with a cast argument:
```
@linalg_structured_op
def matmul(A=TensorDef(T1, S.M, S.K),
           B=TensorDef(T2, S.K, S.N),
           C=TensorDef(U, S.M, S.N, output=True),
           cast=TypeFnAttrDef(default=TypeFn.cast)):
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```
When instantiating the operation, the attribute may be set to the desired cast function:
```
linalg.matmul(lhs, rhs, outs=[out], cast=TypeFn.cast_unsigned)
```
The revision introduces an enum in the Linalg dialect that maps one-to-one to the type functions defined by OpDSL.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D119718

# a3655de2 | 11-Feb-2022 | gysit <[email protected]>
[mlir][OpDSL] Add support for basic rank polymorphism.
Previously, OpDSL did not support rank polymorphism, which required a separate implementation of linalg.fill. This revision extends OpDSL to support rank polymorphism for a limited class of operations that access only scalars and tensors of rank zero. At operation instantiation time, it scales these scalar computations to multi-dimensional pointwise computations by replacing the empty indexing maps with identity indexing maps. The revision does not change the DSL itself; instead, it adapts the Python emitter and the YAML generator to generate different indexing maps and iterators depending on the rank of the first output.
Additionally, the revision introduces a `linalg.fill_tensor` operation that in a future revision shall replace the current handwritten `linalg.fill` operation. `linalg.fill_tensor` is thus only temporarily available and will be renamed to `linalg.fill`.
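As a rough illustration of what such a rank-polymorphic definition can look like, here is a hypothetical OpDSL sketch of a fill-style op; the exact spelling of the in-tree `linalg.fill_tensor` definition may differ:

```
# Hypothetical sketch of a rank-polymorphic fill: the body only touches a
# scalar input and a rank-zero access of the output, and OpDSL scales it to
# the actual rank of `O` at instantiation time via identity indexing maps.
from mlir.dialects.linalg.opdsl.lang import *

@linalg_structured_op
def fill_tensor(value=ScalarDef(T1),
                O=TensorDef(U, output=True)):
  O[None] = TypeFn.cast(U, value)
```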
Reviewed By: nicolasvasilache, stellaraccident
Differential Revision: https://reviews.llvm.org/D119003

Revision tags: llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2

# ace1d0ad | 28-Nov-2021 | Stella Laurenzo <[email protected]>
[mlir][python] Normalize asm-printing IR behavior.
While working on an integration, I found a lot of inconsistencies in IR printing and verification. It turns out that:

* We were only doing "soft fail" verification on IR printing of an Operation, not of a Module.
* Failed verification was interacting badly with binary=True IR printing (causing a TypeError when trying to pass an `str` to a `bytes`-based handle).
* For systematic integrations, it is often desirable to control verification yourself so that you can explicitly handle errors.
This patch:

* Trues up the "soft fail" semantics by having `Module.__str__` delegate to `Operation.__str__` vs having a shortcut implementation.
* Fixes soft fail in the presence of binary=True (and adds an additional happy path test case to make sure the binary functionality works).
* Adds an `assume_verified` boolean flag to the `print`/`get_asm` methods which disables internal verification, presupposing that the caller has taken care of it.
It turns out that we had a number of tests which were generating illegal IR but it wasn't being caught because they were doing a print on the `Module` vs operation. All except two were trivially fixed:

* linalg/ops.py : Had two tests for direct constructing a Matmul incorrectly. Fixing them made them just like the next two tests so just deleted (no need to test the verifier only at this level).
* linalg/opdsl/emit_structured_generic.py : Hand coded conv and pooling tests appear to be using illegal shaped inputs/outputs, causing a verification failure. I just used the `assume_verified=` flag to restore the original behavior and left a TODO. Will get someone who owns that to fix it properly in a followup (would also be nice to break this file up into multiple test modules as it is hard to tell exactly what is failing).
Notes to downstreams:

* If, like some of our tests, you get verification failures after this patch, it is likely that your IR was always invalid and you will need to fix the root cause. To temporarily revert to prior (broken) behavior, replace calls like `print(module)` with `print(module.operation.get_asm(assume_verified=True))`.
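A minimal usage sketch of the new flag (the parsed module body here is just a placeholder):

```
# Minimal sketch: default printing verifies and soft-fails, while
# assume_verified=True skips the internal verification entirely.
from mlir.ir import Context, Module

with Context():
    module = Module.parse("module {}")
    print(module)  # verifies; falls back to a generic form on failure
    print(module.operation.get_asm(assume_verified=True))  # caller has verified
```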
Differential Revision: https://reviews.llvm.org/D114680

Revision tags: llvmorg-13.0.1-rc1

# 54c99842 | 18-Nov-2021 | Michal Terepeta <[email protected]>
[mlir][Python] Fix generation of accessors for Optional
Previously, in case there was only one `Optional` operand/result within the list, we would always return `None` from the accessor, e.g., for a single optional result we would generate:
```
return self.operation.results[0] if len(self.operation.results) > 1 else None
```
But what we really want is to return `None` only if the length of `results` is smaller than the total number of element groups (i.e., the optional operand/result is in fact missing).
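The fixed condition can be illustrated with a small hand-written sketch (this is not the actual generated code; `n_result_groups` is a hypothetical stand-in for the total number of element groups):

```
# Hypothetical illustration of the corrected logic for a single optional
# result in the first group: it is absent only when a whole group is missing.
def optional_result(op, n_result_groups):
    if len(op.results) < n_result_groups:
        return None
    return op.results[0]
```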
This commit also renames a few local variables in the generator to make the distinction between `isVariadic()` and `isVariableLength()` a bit more clear.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D113855

# a54f4eae | 12-Oct-2021 | Mogball <[email protected]>
[MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the `arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797

# b164f23c | 07-Oct-2021 | Alex Zinenko <[email protected]>
[mlir][python] support taking ops instead of values in op constructors
Introduce support for accepting ops instead of values when constructing ops. A single-result op can be used instead of a value, including in lists of values, and any op can be used instead of a list of values. This is similar to, but more powerful than, the C++ API that allows implicitly casting an OpType to Value if it is statically known to have a single result: the cast in Python is based on the op dynamically having a single result, and it also handles the multi-result case. This allows building IR in a more concise way:
```
op = dialect.produce_multiple_results()
other = dialect.produce_single_result()
dialect.consume_multiple_results(other, op)
```
instead of having to access the results manually
```
op = dialect.produce_multiple_results()
other = dialect.produce_single_result()
dialect.consume_multiple_results(other.result, op.operation.results)
```
The dispatch is implemented directly in Python and is triggered automatically for autogenerated OpView subclasses. Extension OpView classes should use the functions provided in ods_common.py if they want to implement this behavior. An alternative could be to implement the dispatch in the C++ bindings code, but it would require forwarding opaque types through all Python functions down to a binding call, which makes it hard to inspect them in Python, e.g., to obtain the types of values.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111306

Revision tags: llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3

# a21a6f51 | 23-Jun-2021 | Tobias Gysi <[email protected]>
[mlir][linalg] Change the pretty printed FillOp operand order.
The patch changes the pretty-printed FillOp operand order from `output, value` to `value, output`. The change is a follow-up to https://reviews.llvm.org/D104121 that passes the fill value using a scalar input instead of the former capture semantics.
Differential Revision: https://reviews.llvm.org/D104356

Revision tags: llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1

# 1f109f9d | 06-May-2021 | Denys Shabalin <[email protected]>
Fix array attribute in bindings for linalg.init_tensor
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D101998

# 9f3f6d7b | 28-Apr-2021 | Stella Laurenzo <[email protected]>
Move MLIR python sources to mlir/python.
* NFC but has some fixes for CMake glitches discovered along the way (things not cleaning properly, co-mingled depends).
* Includes previously unsubmitted fix in D98681 and a TODO to fix it more appropriately in a smaller followup.
Differential Revision: https://reviews.llvm.org/D101493