History log of /llvm-project-15.0.7/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp (Results 1 – 25 of 48)
Revision    Date    Author    Comments
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init
# 2789c4f5 26-Jul-2022 Kazu Hirata <[email protected]>

[mlir] Use value_or (NFC)


# a8601f11 25-Jul-2022 Michele Scuttari <[email protected]>

[MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions

When converted to the LLVM dialect, the memref.alloc and memref.free operations were generating calls to hardcoded 'malloc' and 'free' functions. This didn't leave any freedom to users to provide their custom implementation. Those operations now convert into calls to '_mlir_alloc' and '_mlir_free' functions, which have also been implemented into the runtime support library as wrappers to 'malloc' and 'free'. The same has been done for the 'aligned_alloc' function.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D128791
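
For illustration, a minimal sketch of what such runtime-library wrappers could look like in C++. This is an assumption based on the description above: the '_mlir_alloc' and '_mlir_free' symbols are taken from the commit message, while '_mlir_aligned_alloc' and the exact signatures are guesses, not spelled out in this log.

    #include <cstdint>
    #include <cstdlib>

    // Hypothetical wrappers that the lowered '_mlir_alloc'/'_mlir_free' calls
    // could resolve to; the runtime library forwards them to malloc/free.
    extern "C" void *_mlir_alloc(uint64_t size) {
      return std::malloc(size);
    }

    // Assumed counterpart for 'aligned_alloc'; note that std::aligned_alloc
    // requires 'size' to be a multiple of 'alignment'.
    extern "C" void *_mlir_aligned_alloc(uint64_t alignment, uint64_t size) {
      return std::aligned_alloc(alignment, size);
    }

    extern "C" void _mlir_free(void *ptr) {
      std::free(ptr);
    }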



# 380a1b20 23-Jul-2022 Kazu Hirata <[email protected]>

Use callables directly in any_of, count_if, etc (NFC)


# d4217e6c 17-Jul-2022 Ivan Butygin <[email protected]>

[mlir][memref] Missing type conversion in memref.reshape llvm lowering

The shape can be a memref of index type, so the memref::LoadOp result needs to be converted to an LLVM type.

Differential Revision: https://reviews.llvm.org/D129965



# d04c2b2f 18-Jul-2022 Mehdi Amini <[email protected]>

Revert "[MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions"

This reverts commit 3e21fb616d9a1b29bf9d1a1ba484add633d6d5b3.

A lot of integration tests are failing on the bot.


# 3e21fb61 18-Jul-2022 Michele Scuttari <[email protected]>

[MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions

When converted to the LLVM dialect, the memref.alloc and memref.free operations were generating calls to hardcoded 'malloc' and 'free' functions. This didn't leave any freedom to users to provide their custom implementation. Those operations now convert into calls to '_mlir_alloc' and '_mlir_free' functions, which have also been implemented into the runtime support library as wrappers to 'malloc' and 'free'. The same has been done for the 'aligned_alloc' function.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D128791



# 136d746e 11-Jul-2022 Jacques Pienaar <[email protected]>

[mlir] Flip accessors to prefixed form (NFC)

Another mechanical sweep to keep the diff small for the flip to _Prefixed.


Revision tags: llvmorg-14.0.6
# 6d5fc1e3 21-Jun-2022 Kazu Hirata <[email protected]>

[mlir] Don't use Optional::getValue (NFC)


# 30c67587 19-Jun-2022 Kazu Hirata <[email protected]>

Use value_or instead of getValueOr (NFC)


Revision tags: llvmorg-14.0.5
# 5fee1799 29-May-2022 Ashay Rane <[email protected]>

[mlir] translate memref.reshape with static shapes but dynamic dims

Prior to this patch, the lowering of memref.reshape operations to the
LLVM dialect failed if the shape argument had a static shape with
dynamic dimensions. This patch adds the necessary support so that when
the shape argument has dynamic values, the lowering probes the dimension
at runtime to set the size in the `MemRefDescriptor` type. This patch
also computes the stride for dynamic dimensions by deriving it from the
sizes of the inner dimensions.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D126604
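
As a plain C++ sketch of the stride derivation described above (illustrative only, not the lowering code itself): each stride is the product of the sizes of the dimensions inside it, computed innermost-first.

    #include <cstdint>
    #include <vector>

    // Illustrative helper: derive row-major strides from (possibly
    // runtime-probed) sizes, walking from the innermost dimension outwards.
    std::vector<int64_t> stridesFromSizes(const std::vector<int64_t> &sizes) {
      std::vector<int64_t> strides(sizes.size());
      int64_t running = 1;
      for (size_t i = sizes.size(); i-- > 0;) {
        strides[i] = running; // stride of dim i = product of all inner sizes
        running *= sizes[i];
      }
      return strides;
    }

    // For example, sizes {4, 5, 6} give strides {30, 6, 1}.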



Revision tags: llvmorg-14.0.4
# 705f048c 20-May-2022 Eugene Zhulenev <[email protected]>

[mlir] MemRefToLLVM: convert memref.view operations for empty memrefs

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D126094


# 5380e30e 05-May-2022 Ashay Rane <[email protected]>

[mlir] translate memref.reshape ops that have static shapes

This patch references code for translating memref.reinterpret_cast ops
to add translation rules for memref.reshape ops that have a static shape
argument. Since reshape ops don't have offsets, sizes, or strides, this
patch simply sets the allocated and aligned pointers of the MemRef
descriptor.

Reviewed By: ftynse, cathyzhyi

Differential Revision: https://reviews.llvm.org/D125039
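
For context, a rough C++ analogue of the ranked memref descriptor these lowerings populate; the field names are illustrative, and the comment about reshape reflects only what the entry above states.

    #include <cstdint>

    // Illustrative layout of a ranked memref descriptor with element type T and
    // rank N. Per the entry above, the reshape lowering sets the allocated and
    // aligned pointers, since the op itself carries no offsets, sizes, or strides.
    template <typename T, int64_t N>
    struct MemRefDescriptor {
      T *allocated;       // pointer returned by the allocator
      T *aligned;         // aligned pointer actually used for accesses
      int64_t offset;     // offset into the aligned buffer
      int64_t sizes[N];   // extent of each dimension
      int64_t strides[N]; // stride of each dimension
    };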



Revision tags: llvmorg-14.0.3, llvmorg-14.0.2
# e1318078 19-Apr-2022 Yi Zhang <[email protected]>

Support non identity layout map for reshape ops in MemRefToLLVM lowering

This change borrows the ideas from `computeExpanded/CollapsedLayoutMap`
and computes the dynamic strides at runtime for the memref descriptors.

Differential Revision: https://reviews.llvm.org/D124001



# eda6f907 22-Apr-2022 River Riddle <[email protected]>

[mlir][NFC] Shift a bunch of dialect includes from the .h to the .cpp

Now that dialect constructors are generated in the .cpp file, we can
drop all of the dependent dialect includes from the .h file.

Differential Revision: https://reviews.llvm.org/D124298



# 58ceae95 18-Apr-2022 River Riddle <[email protected]>

[mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace

FuncOp was moved to the `func` namespace a little over a month ago, so the
using directive can be dropped now.


Revision tags: llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3
# 36550692 08-Mar-2022 River Riddle <[email protected]>

[mlir] Move the Builtin FuncOp to the Func dialect

This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment
we made FuncOp an operation (years ago). This commit handles the
functional aspects of the move, but various aspects are left untouched
to ease migration: func::FuncOp is re-exported into mlir to reduce
the actual API churn, and the assembly format still accepts the unqualified
`func`. These temporary measures will remain for a little while to
simplify migration before being removed.

Differential Revision: https://reviews.llvm.org/D121266



Revision tags: llvmorg-14.0.0-rc2
# 23aa5a74 26-Feb-2022 River Riddle <[email protected]>

[mlir] Rename the Standard dialect to the Func dialect

The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624



# 27cd2a62 16-Feb-2022 Benjamin Kramer <[email protected]>

[mlir][MemRef] Lower memref.copy with an offset to memcpy

memcpy can handle them as long as they're contiguous.

Differential Revision: https://reviews.llvm.org/D119938


# 2219f9f5 11-Feb-2022 Adrian Kuegel <[email protected]>

[mlir][MemRef] Fix MemRefCopyOpLowering to use correct number of bytes

When lowering to a memrefCopy call, the size for the i1 type was calculated as 0.
Instead of using getTypeSizeInBits() and dividing by 8, we should just use getTypeSize().

Differential Revision: https://reviews.llvm.org/D119540
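
As a plain arithmetic illustration of the bug described above (not the actual MLIR code): dividing a bit width by 8 with integer division truncates sub-byte types such as i1 to zero bytes, whereas a byte-based query rounds up to a whole byte.

    #include <cstdint>
    #include <iostream>

    int main() {
      uint64_t bitWidth = 1;                   // an i1 element
      uint64_t truncated = bitWidth / 8;       // 0 -- the copy size came out empty
      uint64_t rounded = (bitWidth + 7) / 8;   // 1 -- whole-byte size, matching a
                                               // byte-level query like getTypeSize()
      std::cout << truncated << " vs " << rounded << "\n";
      return 0;
    }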



# 5b02a480 11-Feb-2022 Adrian Kuegel <[email protected]>

[mlir][MemRef] Fix MemRefCastOpLowering for 32 bit index type.

The lowering creates llvm.insertvalue with the rank value, so it needs to use
the index type instead of a 64-bit integer type. Otherwise, we get an error:

'llvm.insertvalue' op Type mismatch: cannot insert 'i64' into '!llvm.struct<(i32, ptr<i8>)>'

Differential Revision: https://reviews.llvm.org/D119534
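
For context, a hedged C++ analogue (an assumption drawn from the error message above, not code from this file) of why the rank field must track the configured index bit-width: with a 32-bit index, the unranked descriptor's first field is i32, so inserting an i64 rank constant is a type mismatch.

    #include <cstdint>

    // Illustrative counterpart of '!llvm.struct<(i32, ptr<i8>)>' from the error
    // message: the rank field uses the index type (i32 here), and the second
    // field points at the underlying ranked descriptor.
    struct UnrankedMemRefDescriptor32 {
      int32_t rank;     // must be built with the index type, not a hardcoded i64
      void *descriptor; // opaque pointer to the ranked descriptor
    };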



Revision tags: llvmorg-14.0.0-rc1
# 6635c12a 06-Feb-2022 Benjamin Kramer <[email protected]>

[mlir] Use SmallBitVector instead of SmallDenseSet for AffineMap::compressSymbols

This is both more efficient and more ergonomic to use, as inverting a
bit vector is trivial while inverting a set is annoying.

Sadly this leaks into a bunch of APIs downstream, so adapt them as well.

This would be NFC, but there is an ordering dependency in MemRefOps's
computeMemRefRankReductionMask. This is now deterministic; previously it
depended on SmallDenseSet's unspecified iteration order.

Differential Revision: https://reviews.llvm.org/D119076
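
A small sketch of the ergonomic point above, assuming the usual llvm::SmallBitVector interface: inverting membership is a single flip() call, whereas a set would have to be rebuilt element by element.

    #include "llvm/ADT/SmallBitVector.h"

    // Illustrative helper: given a mask of positions to keep, produce the
    // complementary mask of positions to drop by flipping every bit.
    llvm::SmallBitVector invertMask(const llvm::SmallBitVector &keep) {
      llvm::SmallBitVector drop = keep;
      drop.flip(); // toggles all bits in place
      return drop;
    }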



# ace01605 04-Feb-2022 River Riddle <[email protected]>

[mlir] Split out a new ControlFlow dialect from Standard

This dialect is intended to model lower-level, branch-based control-flow constructs. The initial set
of operations is AssertOp, BranchOp, CondBranchOp, and SwitchOp, all split out from the current
standard dialect.

See https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D118966



Revision tags: llvmorg-15-init
# ebc81537 01-Feb-2022 Alexander Belyaev <[email protected]>

Revert "Revert "[mlir] Purge `linalg.copy` and use `memref.copy` instead.""

This reverts commit 25bf6a2a9bc6ecb3792199490c70c4ce50a94aea.


# 632a4f88 26-Jan-2022 River Riddle <[email protected]>

[mlir] Move std.generic_atomic_rmw to the memref dialect

This is part of splitting up the standard dialect. The move makes sense anyway,
given that the memref dialect already holds memref.atomic_rmw, which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even clearer given
they have nearly the same description, modulo how they represent the inner computation).

Differential Revision: https://reviews.llvm.org/D118209



Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3
# e084679f 19-Jan-2022 River Riddle <[email protected]>

[mlir] Make locations required when adding/creating block arguments

BlockArguments gained the ability to have locations attached a while ago, but they
have always been optional. This goes against the core tenet of MLIR that location
information is a requirement, so this commit updates the API to require locations.

Fixes #53279

Differential Revision: https://reviews.llvm.org/D117633


