Revision    Date    Author    Comments
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init
# 27a431f5 19-Jul-2022 Matthias Springer <[email protected]>

[mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect

This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.

Differential Revision: https://reviews.llvm.org/D129985
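
For context, a minimal sketch of the resulting op pair in MLIR textual IR (illustrative only; the op names come from the commit, but the surrounding syntax is assumed from that era of the bufferization dialect and may differ in later releases):

```mlir
// Allocate a dense tensor; since this commit, the matching explicit
// release is also a bufferization op rather than a sparse_tensor op.
%t = bufferization.alloc_tensor() : tensor<16xf32>
// ... use %t ...
bufferization.dealloc_tensor %t : tensor<16xf32>
```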



# c66303c2 14-Jul-2022 Matthias Springer <[email protected]>

[mlir][sparse] Switch to One-Shot Bufferize

This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.

Differential Revision: https://reviews.llvm.org/D129356
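
A hedged sketch of the "out" tensor pattern described above (hypothetical function name and shapes; returning the tensor lets the allocation and deallocation happen at the call site instead of leaking from the callee):

```mlir
// The caller provides the output tensor and receives it back, so no
// allocation has to escape this function.
func.func @fill_ones(%out: tensor<8xf64>) -> tensor<8xf64> {
  %c1 = arith.constant 1.0 : f64
  %0 = linalg.fill ins(%c1 : f64) outs(%out : tensor<8xf64>) -> tensor<8xf64>
  return %0 : tensor<8xf64>
}
```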



# 4db52450 27-Jun-2022 Aart Bik <[email protected]>

[mlir][sparse] remove redundant whitespace

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D128673


# 15d1cb45 27-Jun-2022 Peiming Liu <[email protected]>

[mlir][sparse] more integration test cases for sparse_tensor.BinaryOp

Adding more test cases for sparse_tensor.BinaryOp, covering the different cases in which the overlap/left/right regions are implemented, empty, or the identity.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D128383
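
As an illustration of one tested configuration, a sketch of sparse_tensor.binary with the overlap region implemented, left set to identity, and right left empty (syntax approximated from the op documentation of that period; treat as a sketch, not a verbatim test case):

```mlir
%r = sparse_tensor.binary %a, %b : f64, f64 to f64
  overlap = {
    ^bb0(%x: f64, %y: f64):
      %sum = arith.addf %x, %y : f64
      sparse_tensor.yield %sum : f64
  }
  left = identity
  right = {}   // right-only entries produce no output
```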



Revision tags: llvmorg-14.0.6, llvmorg-14.0.5
# 6232a8f3 01-Jun-2022 Matthias Springer <[email protected]>

[mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp

Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
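
A rough before/after sketch (the sparse encoding attribute and the old init syntax are reconstructed from that MLIR version and may not match exactly):

```mlir
#SV = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>

// Previously: %t = sparse_tensor.init [%d] : tensor<?xf64, #SV>
// Now the bufferization op covers sparse outputs as well:
%t = bufferization.alloc_tensor(%d) : tensor<?xf64, #SV>
```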



Revision tags: llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2
# a8308020 21-Apr-2022 River Riddle <[email protected]>

[mlir] Remove special case parsing/printing of `func` operations

This was leftover from when the standard dialect was destroyed, and
when FuncOp moved to the func dialect. Now that these transitions
have settled a bit we can drop these.

Most updates were handled using a simple regex: replace `^( *)func` with `$1func.func`

Differential Revision: https://reviews.llvm.org/D124146
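
The mechanical effect of that regex on IR files, sketched on a made-up function:

```mlir
// Before: bare `func` relied on the special-case parsing being removed.
// func @add(%a: i32, %b: i32) -> i32 { ... }

// After: the fully qualified form is spelled out.
func.func @add(%a: i32, %b: i32) -> i32 {
  %0 = arith.addi %a, %b : i32
  return %0 : i32
}
```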



# 2c332660 03-May-2022 Jim Kitchen <[email protected]>

[mlir][sparse] Add lowering for unary and binary ops

Adding lowering for Unary and Binary required several changes due to
their unique nature of containing custom code for different "regions"
of the sparse structure being operated on. Along with a Kind, a pointer
to the Operation is passed along to be merged once the lattice
structure is figured out.

The original operation is maintained, as it is required for subsequent
lattice decisions. However, some branches of sparse_tensor.binary are
considered fully handled and are therefore marked as kBinaryBranch to
distinguish them.

A unique aspect of the custom code is that sometimes the desired result
is no result at all -- i.e. a user wants overlapping sparse entries to
become empty in the output. The solution to this is to return an
uninitialized Value(), which is checked and handled elsewhere in the
code and results in nothing being written to the output tensor for that
case.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D123057
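
To illustrate the "no result at all" case on the unary side, a hypothetical example (the empty absent region is the situation the lowering maps to an uninitialized Value(); syntax approximate):

```mlir
%r = sparse_tensor.unary %a : f64 to f64
  present = {
    ^bb0(%x: f64):
      %neg = arith.negf %x : f64
      sparse_tensor.yield %neg : f64
  }
  absent = {}   // missing entries write nothing to the output
```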
