|
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init |
|
| #
a1ec0d8b |
| 21-Jul-2022 |
Jacques Pienaar <[email protected]> |
[mlir] Flip dialects to _Prefixed
At least two weeks have passed since these dialects were flipped to _Both. Made some additional NFC changes in .td files that were not converted earlier.
|
| #
2b8a4d9c |
| 15-Jul-2022 |
Jim Kitchen <[email protected]> |
[mlir][sparse] Introduce new reduce op
A new sparse_tensor operation allows for custom reduction code to be injected during linalg.generic lowering for sparse tensors. An identity value is provided to indicate the starting value of the reduction. A single block region is required to contain the custom reduce computation.
Reviewed by: aartbik
Differential Revision: https://reviews.llvm.org/D128004
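A minimal sketch of what using the new reduce op might look like, assuming f64 values and an %identity starting value; the exact textual syntax is an assumption based on the description above and may differ from the final op definition:

    // Custom product reduction; %identity seeds the reduction.
    %r = sparse_tensor.reduce %x, %y, %identity : f64 {
      ^bb0(%a: f64, %b: f64):
        %p = arith.mulf %a, %b : f64
        sparse_tensor.yield %p : f64
    }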
|
| #
04235d07 |
| 28-Jun-2022 |
Jacques Pienaar <[email protected]> |
[mlir] Update flipped accessors (NFC)
Follow-up now that memref is flipped, also flipping any intermediate changes that were made.
|
|
Revision tags: llvmorg-14.0.6, llvmorg-14.0.5 |
|
| #
3cf03f1c |
| 03-Jun-2022 |
wren romano <[email protected]> |
[mlir][sparse] Adding IsSparseTensorPred and updating ops to use it
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126994
|
| #
6232a8f3 |
| 01-Jun-2022 |
Matthias Springer <[email protected]> |
[mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.
Differential Revision: https://reviews.llvm.org/D126180
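A minimal sketch of the replacement pattern, assuming dynamic sizes %d0/%d1 and a #CSR sparse encoding defined elsewhere; the destination that was previously created with the sparse InitOp is now materialized with bufferization.alloc_tensor:

    // Materialize an empty ?x? sparse tensor to use as a linalg output.
    %c = bufferization.alloc_tensor(%d0, %d1) : tensor<?x?xf64, #CSR>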
|
|
Revision tags: llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1 |
|
| #
414ed019 |
| 17-Mar-2022 |
Jim Kitchen <[email protected]> |
[mlir][sparse] Introduce new binary and unary op
When the sparse_tensor dialect lowers linalg.generic, it makes inferences about how the operations should affect the looping logic. For example, multiplication is an intersection while addition is a union of two sparse tensors.
The new binary and unary ops separate the looping logic from the computation by nesting the computation code inside a block which is merged at the appropriate level in the lowered looping code.
The binary op can have custom computation code for the overlap, left, and right sparse overlap regions. The unary op can have custom computation code for the present and absent values.
Reviewed by: aartbik
Differential Revision: https://reviews.llvm.org/D121018
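A minimal sketch of how the binary op might express a sparse addition, with custom code only for the overlap region and identity pass-through for the disjoint parts; the exact keyword syntax (overlap/left/right, identity) is an assumption based on the description above:

    %sum = sparse_tensor.binary %a, %b : f64, f64 to f64
      overlap = {
        ^bb0(%x: f64, %y: f64):
          %0 = arith.addf %x, %y : f64
          sparse_tensor.yield %0 : f64
      }
      left = identity
      right = identity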
|
|
Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2 |
|
| #
5517208d |
| 12-Feb-2022 |
Aart Bik <[email protected]> |
[mlir][sparse] minor cleanup of include placement
Rationale: added an empty line after the main include for this file; moved an include that actually defines code into the right section.
Note that this revision started as breaking up ops/attrs even more (for bug https://github.com/llvm/llvm-project/issues/52748), but due to the connection in Dialect.initialize(), this cannot be split further. All the heavy-lifting refactoring was already done by River in a previous cleanup.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119617
|
|
Revision tags: llvmorg-14.0.0-rc1 |
|
| #
b98dc035 |
| 02-Feb-2022 |
River Riddle <[email protected]> |
[mlir][NFC] Update MemRef/Tensor operations to use `hasVerifier` instead of `verifier`
The verifier field is deprecated, and slated for removal.
Differential Revision: https://reviews.llvm.org/D118821
|
|
Revision tags: llvmorg-15-init |
|
| #
65e7cd13 |
| 24-Jan-2022 |
River Riddle <[email protected]> |
[mlir] Remove a bunch of unnecessary dialect dependencies
A lot of dialects have dependencies that are unnecessary, either because of copy/paste of files when creating things or some other means. This commit cleans up a bunch of the simple ones:
* Copy/Paste or missed during refactoring: Most of the dependencies cleaned up here look like copy/paste errors when creating new dialects/transformations, or because the dependency wasn't removed during a refactoring (e.g. when splitting the standard dialect).
* Unnecessary hard coding of constant operations in matchers: There are a few instances where a dialect had a dependency because it was hardcoding checks for constant operations instead of using the better m_Constant approach.
Differential Revision: https://reviews.llvm.org/D118062
|
| #
efa15f41 |
| 21-Jan-2022 |
Aart Bik <[email protected]> |
[mlir][sparse] add ability for sparse tensor output
Rationale: Although file I/O is a bit alien to MLIR itself, we provide two convenient ways for sparse tensor I/O. The input part was already there (behind the swiss army knife sparse_tensor.new). Now we have a sparse_tensor.out to write out data. As before, the ops are kept vague and may change in the future. For now this allows us to compare TACO vs MLIR very easily.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D117850
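A minimal sketch of the new output op, assuming a #CSR-encoded tensor %t and a destination operand %dest; the destination type (shown here as an !llvm.ptr<i8> opaque pointer) is an assumption:

    // Write the sparse tensor contents out via the runtime support library.
    sparse_tensor.out %t, %dest : tensor<1024x1024xf64, #CSR>, !llvm.ptr<i8>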
|
|
Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3 |
|
| #
c8e047f5 |
| 18-Jan-2022 |
Mehdi Amini <[email protected]> |
Enable useDefault{Type/Attribute}PrinterParser by default in ODS Dialect definition
The majority of dialects reimplement the same boilerplate over and over; switching the default makes for better discoverability and makes it simpler to implement new dialects.
Differential Revision: https://reviews.llvm.org/D117524
|
|
Revision tags: llvmorg-13.0.1-rc2 |
|
| #
e5639b3f |
| 22-Dec-2021 |
Mehdi Amini <[email protected]> |
Fix more clang-tidy cleanups in mlir/ (NFC)
|
| #
4f2ec7f9 |
| 04-Dec-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] finalize sparse output in the presence of reductions
This revision implements sparse outputs (from scratch) in all cases where the loops can be reordered with all but one parallel loop outer. If the inner parallel loop appears inside one or more reduction loops, then an access pattern expansion is required (aka workspaces in TACO speak).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D115091
|
|
Revision tags: llvmorg-13.0.1-rc1 |
|
| #
0c7890c8 |
| 18-Nov-2021 |
River Riddle <[email protected]> |
[mlir] Convert NamedAttribute to be a class
NamedAttribute is currently represented as an std::pair, but this creates an extremely clunky .first/.second API. This commit converts it to a class, with better accessors (getName/getValue) and also opens the door for more convenient API in the future.
Differential Revision: https://reviews.llvm.org/D113956
|
| #
f66e5769 |
| 11-Nov-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] first version of "truly" dynamic sparse tensors as outputs of kernels
This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will be later generalized to allow reductions too. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library. This will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D113705
|
| #
f97e72aa |
| 11-Nov-2021 |
Mehdi Amini <[email protected]> |
Use base class AsmParser/AsmPrinter in Types and Attribute print/parse method (NFC)
This decouples the printing/parsing from the "context" in which the parsing occurs. This will allow invoking these methods directly using an OpAsmParser/OpAsmPrinter.
Differential Revision: https://reviews.llvm.org/D113637
|
| #
f30a8a6f |
| 10-Nov-2021 |
Mehdi Amini <[email protected]> |
Change the contract with the type/attribute parsing to let the dispatch handle the mnemonic
This breaking change requires removing the printing of the mnemonic in the print() method on Type/Attribute classes. This makes it consistent with the parsing code, which already handles the mnemonic outside of the parsing method.
This likely won't break the build for anyone, but tests will start failing for dialects downstream. The fix is trivial and looks like going from:
void emitc::OpaqueType::print(DialectAsmPrinter &printer) const { printer << "opaque<\"";
to:
void emitc::OpaqueAttr::print(DialectAsmPrinter &printer) const { printer << "<\"";
Reviewed By: rriddle, aartbik
Differential Revision: https://reviews.llvm.org/D113334
|
| #
4aa9b398 |
| 04-Nov-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] reject sparsity annotation in "scalar" tensors
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D113152
|
| #
1e6ef0cf |
| 26-Oct-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] refine trait of sparse_tensor.convert
Rationale: The currently used trait was demanding that all types are the same which is not true (since the sparse part may change and the dim sizes may be relaxed). This revision uses the correct trait and makes the rank match test explicit in the verify method.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D112576
|
| #
cfb72fd3 |
| 25-Oct-2021 |
Jacques Pienaar <[email protected]> |
[mlir] Switch arith, llvm, std & shape dialects to accessors prefixed both form.
Following https://llvm.discourse.group/t/psa-ods-generated-accessors-will-change-to-have-a-get-prefix-update-you-apis/4476, this flips these dialects to the _Both prefixed form. This changes the accessors to have a prefix. This was mostly possible without breaking changes if the existing convenience methods were used.
(https://github.com/jpienaar/llvm-project/blob/main/clang-tools-extra/clang-tidy/misc/AddGetterCheck.cpp was used to migrate the callers post flipping, using the output from Operator.cpp)
Differential Revision: https://reviews.llvm.org/D112383
|
| #
9d1db3d4 |
| 15-Oct-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] generalize sparse_tensor.convert on static/dynamic dimension sizes
This revision lifts the artificial restriction on having exact matches between source and destination type shapes. A static size may become dynamic. We still reject changing a dynamic size into a static size to avoid the need for a runtime "assert" on the conversion. This revision also refactors some of the conversion code to share same-content buffers.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111915
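A minimal sketch of a conversion that the lifted restriction now allows, assuming a #CSR encoding defined elsewhere: a statically sized source may convert to a dynamically sized destination, while the reverse (dynamic to static) remains rejected:

    // 32x32 static source, ?x? dynamic destination: now accepted.
    %1 = sparse_tensor.convert %0 : tensor<32x32xf64> to tensor<?x?xf64, #CSR>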
|
| #
a652e5b5 |
| 13-Oct-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] emergency fix after constant -> arith.constant change
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D111743
|
| #
35517a25 |
| 12-Oct-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] add init sparse tensor operation
This is the first step towards supporting general sparse tensors as output of operations. The init sparse tensor is used to materialize an empty sparse tensor of given shape and sparsity into a subsequent computation (similar to the dense tensor init operation counterpart).
Example:
  %c = sparse_tensor.init %d1, %d2 : tensor<?x?xf32, #SparseMatrix>
  %0 = linalg.matmul ins(%a, %b: tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%c: tensor<?x?xf32, #SparseMatrix>) -> tensor<?x?xf32, #SparseMatrix>
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111684
|
| #
a54f4eae |
| 12-Oct-2021 |
Mogball <[email protected]> |
[MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the `arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
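An illustrative before/after of the renaming, using a constant and an integer addition as examples:

    // Before (standard dialect):
    %c1 = constant 1 : i32
    %s  = addi %a, %b : i32
    // After (arith dialect):
    %c1 = arith.constant 1 : i32
    %s  = arith.addi %a, %b : i32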
|
| #
16b8f4dd |
| 04-Oct-2021 |
Aart Bik <[email protected]> |
[mlir][sparse] add a "release" operation to sparse tensor dialect
We have several ways to materialize sparse tensors (new and convert) but no explicit operation to release the underlying sparse storage scheme at runtime (other than making an explicit delSparseTensor() library call). To simplify memory management, a sparse_tensor.release operation has been introduced that lowers to the runtime library call while keeping tensors, opaque pointers, and memrefs transparent in the initial IR.
*Note* There is obviously some tension between the concept of immutable tensors and memory management methods. This tension is addressed by simply stating that after the "release" call, no further memref related operations are allowed on the tensor value. We expect the design to evolve over time, however, and arrive at a more satisfactory view of tensors and buffers eventually.
Bug: http://llvm.org/pr52046
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111099
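A minimal sketch of the new release op, assuming a sparse tensor value %t with a #CSR encoding defined elsewhere; after this point no further memref-related operations may use %t:

    // Free the underlying sparse storage scheme at runtime.
    sparse_tensor.release %t : tensor<1024x1024xf64, #CSR>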
|