[mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect

This op used to belong to the sparse dialect, but there are use cases for dense
bufferization as well. (E.g., when a tensor alloc is returned from a function
and should be deallocated at the call site.) This change moves the op to the
bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.

Differential Revision: https://reviews.llvm.org/D129985
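As a sketch of the resulting pairing (op names from the commit above; the tensor shape here is illustrative):

```mlir
// Allocate a (dense) tensor through the bufferization dialect ...
%t = bufferization.alloc_tensor() : tensor<16xf32>
// ... and, once the tensor is no longer needed (e.g., at the call site
// after a function returned it), release it explicitly.
bufferization.dealloc_tensor %t : tensor<16xf32>
```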
[NFC] Remove obsolete all_passes_registration from integration tests.

After https://reviews.llvm.org/D128593 this is not needed (and not available).
Was missed in the original landing because integration tests do not run on
pre-merge.
[mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp

Now that we have an AllocTensorOp (previously InitTensorOp) in the
bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
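Illustratively, a sparse tensor that was previously created with the sparse dialect's InitOp can now be allocated through the shared bufferization op (the `#CSR` encoding below is a hypothetical example attribute, not taken from the commit):

```mlir
#CSR = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>

// Allocate an annotated (sparse) tensor via the bufferization dialect.
%t = bufferization.alloc_tensor() : tensor<8x8xf64, #CSR>
```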
[mlir][sparse] Enhancing sparse=>sparse conversion.

Fixes: https://github.com/llvm/llvm-project/issues/51652

Depends On D122060

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122061
Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options."

This reverts commit d59cf901cbae7991f7847eb038d825efff1221ad.

Build fails on NVIDIA Sparse tests:
https://lab.llvm.org/buildbot/#/builders/61/builds/25447
[mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.

The SparseTensor passes currently use opaque numbers for the CLI, despite
using an enum internally. This patch exposes the enums instead of numbered
items that are matched back to the enum.

Fixes GitHub issue #53389

Reviewed By: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D123876
[mlir][NFC] Update textual references of `func` to `func.func` in examples+python scripts

The special case parsing of `func` operations is being removed.
[mlir][sparse] refactored python setup of sparse compiler

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D123419
[mlir] Move the Builtin FuncOp to the Func dialect

This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment we made
FuncOp an operation (years ago). This commit handles the functional aspects
of the move, but various aspects are left untouched to ease migration:
func::FuncOp is re-exported into mlir to reduce the actual API churn, and the
assembly format still accepts the unqualified `func`. These temporary
measures will remain for a little while to simplify migration before being
removed.

Differential Revision: https://reviews.llvm.org/D121266
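After this change a function is spelled with the qualified opcode (the unqualified form remains accepted for now, per the note in the commit); a minimal sketch:

```mlir
// Qualified form after the move to the Func dialect:
func.func @add(%a: f32, %b: f32) -> f32 {
  %sum = arith.addf %a, %b : f32
  return %sum : f32
}
```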
[mlir] Rename the Standard dialect to the Func dialect

The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number of
cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
[mlir][sparse] refactor sparse compiler pipeline to single place

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120347
[mlir][sparse] provide more types for external to/from MLIR routines

These routines will need to be specialized a lot more based on value types,
index types, pointer types, and permutation/dimension ordering. This is a
careful first step, providing some functionality needed in PyTACO bridge.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120154
[mlir][linalg][sparse] add linalg optimization passes "upstream"

It is time to compose Linalg related optimizations with SparseTensor related
optimizations. This is a careful first start by adding some general Linalg
optimizations "upstream" of the sparse compiler in the full sparse compiler
pipeline. Some minor changes were needed to make those optimizations aware of
sparsity.

Note that after this, we will add a sparse specific fusion rule, just to
demonstrate the power of the new composition.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D119971
[NFC] Correct typo `interger` to `integer`
[mlir][sparse] Updating sparse-compiler pipeline for python usage

Explicitly nests passes for FuncOp, adds more options to the sparse-compiler
pipeline, and updates python integration tests. This should be sufficient to
close https://github.com/llvm/llvm-project/issues/51751

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D118658
[mlir] Move SelectOp from Standard to Arithmetic

This is part of splitting up the standard dialect. See
https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for
discussion.

Differential Revision: https://reviews.llvm.org/D118648
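For illustration, the op is now spelled with the `arith.` prefix (the surrounding compare producing `%cond` is just an example):

```mlir
// std.select %cond, %a, %b  becomes:
%cond = arith.cmpf olt, %a, %b : f32
%min = arith.select %cond, %a, %b : f32
```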
[mlir][bufferize] Merge tensor-constant-bufferize into arith-bufferize

The bufferization of arith.constant ops is also switched over to
BufferizableOpInterface-based bufferization. The old implementation is
deleted. Both implementations utilize GlobalCreator, now renamed to just
`getGlobalFor`.

GlobalCreator no longer maintains a set of all created allocations to avoid
duplicate allocations of the same constant. Instead, `getGlobalFor` scans the
module to see if there is already a global allocation with the same constant
value.

For compatibility reasons, it is still possible to create a pass that
bufferizes only `arith.constant`. This pass (createConstantBufferizePass)
could be deleted once all users were switched over to One-Shot bufferization.

Differential Revision: https://reviews.llvm.org/D118483
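A rough sketch of the effect of constant bufferization (the global's symbol name here is illustrative):

```mlir
// Before bufferization:
%cst = arith.constant dense<[1.0, 2.0]> : tensor<2xf32>

// After bufferization, `getGlobalFor` creates (or reuses) a module-level
// global holding the constant, and the use becomes a lookup:
memref.global "private" constant @__constant_2xf32 : memref<2xf32> = dense<[1.0, 2.0]>
%0 = memref.get_global @__constant_2xf32 : memref<2xf32>
```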
[mlir][sparse] integration test for sparse output operation

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D118079
[mlir][sparse] python driven test for SDDMM

Explores various sparsity combinations of the SDDMM kernel and verifies that
the computed result is the same for all cases.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D115476
Support sparse tensor output.

Add convertFromMLIRSparseTensor to the supporting C shared library to convert
SparseTensorStorage to COO-flavor format.

Add Python routine sparse_tensor_to_coo_tensor to convert sparse tensor
storage pointer to numpy values for COO-flavor format tensor.

Add a Python test for sparse tensor output.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D115557
Avoid unnecessary output buffer allocation and initialization.

The sparse tensor code generator allocates memory for the output tensor. As
such, we only need to allocate a MemRefDescriptor to receive the output
tensor and do not need to allocate and initialize the storage for the tensor.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D115292
[mlir][sparse] Adding a stress test

Addresses https://bugs.llvm.org/show_bug.cgi?id=52410

Depends on D114192

Reviewed By: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D114118
[mlir][sparse] Moving integration tests that merely use the Python API

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D114192