|
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init |
|
e188aae4 | 31-Jan-2022 | serge-sans-paille <[email protected]>
Cleanup header dependencies in LLVMCore
Based on the output of include-what-you-use.
This is a big chunk of changes. It is very likely to break downstream code unless they took a lot of care in avoiding hidden header dependencies, something the LLVM codebase doesn't do that well :-/
I've tried to summarize the biggest changes below:
- llvm/include/llvm-c/Core.h: no longer includes llvm-c/ErrorHandling.h
- llvm/IR/DIBuilder.h no longer includes llvm/IR/DebugInfo.h
- llvm/IR/IRBuilder.h no longer includes llvm/IR/IntrinsicInst.h
- llvm/IR/LLVMRemarkStreamer.h no longer includes llvm/Support/ToolOutputFile.h
- llvm/IR/LegacyPassManager.h no longer includes llvm/Pass.h
- llvm/IR/Type.h no longer includes llvm/ADT/SmallPtrSet.h
- llvm/IR/PassManager.h no longer includes llvm/Pass.h nor llvm/Support/Debug.h
And the usual count of preprocessed lines:
$ clang++ -E -Iinclude -I../llvm/include ../llvm/lib/IR/*.cpp -std=c++14 -fno-rtti -fno-exceptions | wc -l
before: 6400831
after:  6189948
200k fewer lines to process is not that bad ;-)
Discourse thread on the topic: https://llvm.discourse.group/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D118652
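For downstream code that relied on these transitive includes, the usual fix is simply to spell out the missing include. A minimal illustrative sketch (the file and helper below are hypothetical, not part of the patch):

```cpp
// Hypothetical downstream helper that previously compiled only because
// llvm/IR/IRBuilder.h used to pull in IntrinsicInst.h transitively.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicInst.h" // now needed explicitly for MemCpyInst
#include "llvm/Support/Casting.h"  // isa<>

using namespace llvm;

static bool isMemCpy(const Value *V) {
  return isa<MemCpyInst>(V); // MemCpyInst is declared in IntrinsicInst.h
}
```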
|
|
Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |
|
1d1e29ba | 10-Dec-2021 | Nikita Popov <[email protected]>
[IR] Extract method to get single GEP index from offset (NFC)
This exposes the core logic of getGEPIndicesForOffset() as a getGEPIndexForOffset() method that only returns a single offset, instead of following the whole chain.
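A usage sketch of the single-index variant (signature assumed from the description: ElemTy and Offset are updated in place, and the return type follows current trees):

```cpp
#include <optional>

#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"

using namespace llvm;

// Peel one GEP index off a byte offset; repeating this is what
// getGEPIndicesForOffset() does internally to build the whole chain.
void peelOneIndex(const DataLayout &DL, Type *&ElemTy, APInt &Offset) {
  if (std::optional<APInt> Idx = DL.getGEPIndexForOffset(ElemTy, Offset)) {
    // *Idx indexes into the previous ElemTy; ElemTy and Offset now describe
    // what remains to be resolved.
    (void)*Idx;
  }
}
```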
|
|
Revision tags: llvmorg-13.0.1-rc1 |
|
0fcb16ee | 19-Nov-2021 | Stephen Neuendorffer <[email protected]>
Allow DataLayout to support arbitrary pointer sizes
Currently, it is impossible to specify a DataLayout with a pointer size and index size that are not a whole number of bytes. This patch modifies the DataLayout class to accept arbitrary pointer sizes and to store the size as a number of bits, rather than as a number of bytes. Generally speaking, the external interface of the class as used by in-tree architectures remains the same and shouldn't affect the behavior of architectures with pointer sizes equal to a whole number of bytes.
Note the interface of setPointerAlignment has changed and takes a pointer and index size that is a number of bits, rather than a number of bytes.
Patch originally by Ajit Kumar Agarwal
Differential Revision: https://reviews.llvm.org/D114141
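Illustratively, the pointer spec in a datalayout string is p[n]:&lt;size&gt;:&lt;abi&gt;[:&lt;pref&gt;[:&lt;idx&gt;]] with the fields in bits, so a non-byte-multiple pointer could be written as below. The layout string is made up for this sketch and only parses with this patch applied:

```cpp
#include "llvm/IR/DataLayout.h"

using namespace llvm;

void example() {
  // 20-bit pointers in address space 0, 32-bit ABI/preferred alignment,
  // 20-bit index width -- not a whole number of bytes.
  DataLayout DL("e-p:20:32:32:20");
  unsigned PtrBits = DL.getPointerSizeInBits(0); // 20 with this patch
  unsigned IdxBits = DL.getIndexSizeInBits(0);   // 20
  (void)PtrBits;
  (void)IdxBits;
}
```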
|
a8c318b5 | 23-Oct-2021 | Nikita Popov <[email protected]>
[BasicAA] Use index size instead of pointer size
When accumulating the GEP offset in BasicAA, we should use the pointer index size rather than the pointer size.
Differential Revision: https://reviews.llvm.org/D112370
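A minimal sketch of the distinction (the helper is illustrative; the DataLayout accessors are the standard ones):

```cpp
#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Offsets accumulated while walking GEPs should use the index width of the
// pointer's address space, which may be narrower than the pointer itself.
APInt makeOffsetAccumulator(const DataLayout &DL, const Value *Ptr) {
  unsigned AS = Ptr->getType()->getPointerAddressSpace();
  unsigned IndexBits = DL.getIndexSizeInBits(AS); // not getPointerSizeInBits
  return APInt(IndexBits, 0);
}
```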
|
66e06cc8 | 27-Sep-2021 | Michał Górny <[email protected]>
[llvm] [ADT] Update llvm::Split() per Pavel Labath's suggestions
Optimize the iterator comparison logic to compare Current.data() pointers. Use std::tie for assignments from std::pair. Replace the custom class with a function returning iterator_range.
Differential Revision: https://reviews.llvm.org/D110535
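A usage sketch of the resulting interface (the free function is assumed to live in llvm/ADT/StringExtras.h, as in current trees):

```cpp
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"

using namespace llvm;

// Iterate the comma-separated fields of a line without materializing a vector.
unsigned countNonEmptyFields(StringRef Line) {
  unsigned N = 0;
  for (StringRef Field : split(Line, ','))
    if (!Field.empty())
      ++N;
  return N;
}
```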
|
05392466 | 24-Sep-2021 | Arthur Eubanks <[email protected]>
Reland [IR] Increase max alignment to 4GB
Currently the max alignment representable is 1GB, see D108661. Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.
This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits. We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.
The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.
Updating clang's max allowed alignment will come in a future patch.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D110451
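The 1 GB and 4 GB caps fall out of the usual log2-plus-one encoding of alignments; a minimal sketch of that arithmetic (illustrative, not the exact upstream bitfield layout):

```cpp
#include <cstdint>

// An N-bit field stores Log2(Align) + 1, with 0 meaning "no alignment".
// A 5-bit field therefore tops out at 1 << 30 (1 GiB); one more bit allows
// encoded values up to 33 (4 GiB) and, in principle, up to 2^62.
constexpr uint64_t decodeAlign(unsigned Encoded) {
  return Encoded == 0 ? 0 : uint64_t(1) << (Encoded - 1);
}

static_assert(decodeAlign(31) == (uint64_t(1) << 30), "5-bit cap: 1 GiB");
static_assert(decodeAlign(33) == (uint64_t(1) << 32), "6 bits reach 4 GiB");
```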
|
569346f2 | 06-Oct-2021 | Arthur Eubanks <[email protected]>
Revert "Reland [IR] Increase max alignment to 4GB"
This reverts commit 8d64314ffea55f2ad94c1b489586daa8ce30f451.
|
8d64314f | 24-Sep-2021 | Arthur Eubanks <[email protected]>
Reland [IR] Increase max alignment to 4GB
Currently the max alignment representable is 1GB, see D108661. Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.
This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits. We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.
The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.
Updating clang's max allowed alignment will come in a future patch.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D110451
|
72cf8b60 | 06-Oct-2021 | Arthur Eubanks <[email protected]>
Revert "[IR] Increase max alignment to 4GB"
This reverts commit df84c1fe78130a86445d57563dea742e1b85156a.
Breaks some bots
|
df84c1fe | 24-Sep-2021 | Arthur Eubanks <[email protected]>
[IR] Increase max alignment to 4GB
Currently the max alignment representable is 1GB, see D108661. Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.
This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits. We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.
The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.
Updating clang's max allowed alignment will come in a future patch.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D110451
|
5969e574 | 24-Sep-2021 | Nikita Popov <[email protected]>
[IR] Handle large element size when calculating GEP indices
This is a fix for the issue reported at https://reviews.llvm.org/D110043#3019942: The ElementSize is a uint64_t and as such may be larger than the index space, or be negative in the index space. This is UB, but shouldn't cause assertion failures.
We address this by detecting whether the size is too large and use a zero index in that case (which is always conservatively correct).
Differential Revision: https://reviews.llvm.org/D110437
|
e09a1dc4 | 24-Sep-2021 | Anirudh Prasad <[email protected]>
[SystemZ][z/OS] Add GOFF Support to the DataLayout
- This patch adds in the GOFF mangling support to the LLVM data layout string. A corresponding additional line has been added into the data layout section in the language reference documentation.
- Furthermore, this patch also sets the right data layout string for the z/OS target in the SystemZ backend.
Reviewed By: uweigand, Kai, abhina.sreeskantharajan, MaskRay
Differential Revision: https://reviews.llvm.org/D109362
|
|
Revision tags: llvmorg-13.0.0, llvmorg-13.0.0-rc4 |
|
dd022656 | 19-Sep-2021 | Nikita Popov <[email protected]>
[IR] Add helper to convert offset to GEP indices
We implement logic to convert a byte offset into a sequence of GEP indices for that offset in a number of places. This patch adds a DataLayout::getGEPIndicesForOffset() method, which implements the core logic. I've updated SROA, ConstantFolding and InstCombine to use it, and there are a few more places where it looks relevant.
Differential Revision: https://reviews.llvm.org/D110043
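A usage sketch of the new helper (signature assumed from the description: ElemTy and Offset are updated in place, and any remainder that cannot be expressed as indices is left in Offset):

```cpp
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"

using namespace llvm;

SmallVector<APInt> indicesForOffset(const DataLayout &DL, Type *&ElemTy,
                                    APInt &Offset) {
  // Illustrative: for { i32, [4 x i64] } and Offset == 12 this would yield
  // {0, 1, 0} with Offset left at 4 (4 bytes into the first i64).
  return DL.getGEPIndicesForOffset(ElemTy, Offset);
}
```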
|
|
Revision tags: llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1 |
|
bdd55b2f | 31-Jul-2021 | Eli Friedman <[email protected]>
Fix the default alignment of i1 vectors.
Currently, the default alignment is much larger than the actual size of the vector in memory. Fix this to use a sane default.
For SVE, temporarily remove lowering of load/store operations for predicates with less than 16 elements. The layout the backend was assuming for SVE predicates with less than 16 elements doesn't agree with the frontend. More work probably needs to be done here.
This change is, strictly speaking, not backwards-compatible at the bitcode level. But probably nobody is actually depending on that; i1 vectors in memory are rare, and the code that does use them probably ends up forcing the alignment to something sane anyway. If we think this is a concern, I can restrict this to scalable vectors for now (where it's actually causing issues for me at the moment).
Differential Revision: https://reviews.llvm.org/D88994
|
|
Revision tags: llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4 |
|
f59ba084 | 30-Mar-2021 | Craig Topper <[email protected]>
[StructLayout] Use TrailingObjects to allocate space for MemberOffsets.
MemberOffsets are stored at the end of StructLayout. The class contains a single entry array to mark the start of the member offsets. getStructLayout calculates the additional space needed for additional elements before allocating memory.
This patch converts this to use TrailingObjects. This simplifies the size computation in getStructLayout and gets rid of the single entry array.
This is prep work for using TypeSize instead of uint64_t in D98169. The single entry array doesn't work with TypeSize because TypeSize doesn't have a default constructor. We thought this change was an improvement by itself so we've separated it out.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D99608
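Callers are unaffected by the allocation change; they keep querying member offsets through the same interface, e.g. (a minimal sketch):

```cpp
#include <cstdint>

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"

using namespace llvm;

uint64_t offsetOfField(const DataLayout &DL, StructType *STy, unsigned Idx) {
  const StructLayout *SL = DL.getStructLayout(STy); // allocation detail hidden
  return SL->getElementOffset(Idx);
}
```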
|
|
Revision tags: llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2 |
|
cfec6cd5 | 18-Jan-2021 | Craig Topper <[email protected]>
[IR] Allow scalable vectors in structs to support intrinsics returning multiple values.
RISC-V would like to use a struct of scalable vectors to return multiple values from intrinsics. This would also be needed for target-independent intrinsics like llvm.sadd.with.overflow.
This patch removes the existing restriction for this. I've modified StructType::isSized to consider a struct containing scalable vectors as unsized so the verifier won't allow loads/stores/allocas of these structs.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D94142
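A sketch of the resulting behavior (standard IR-library APIs; the return-value comment reflects the isSized() change described above):

```cpp
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"

using namespace llvm;

bool structOfScalableVectorsIsSized(LLVMContext &Ctx) {
  auto *VTy = ScalableVectorType::get(Type::getInt32Ty(Ctx), /*MinNumElts=*/4);
  auto *STy = StructType::get(Ctx, {VTy, VTy});
  // The struct type is now legal to create, but reports itself as unsized so
  // the verifier still rejects loads/stores/allocas of it.
  return STy->isSized(); // false after this change
}
```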
|
|
Revision tags: llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1 |
|
981a0bd8 | 20-Nov-2020 | Luo, Yuanke <[email protected]>
[X86] Add x86_amx type for intel AMX.
The x86_amx type is used for AMX intrinsics. <256 x i32> is bitcast to x86_amx when it is used by AMX intrinsics, and x86_amx is bitcast to <256 x i32> when it is used by load/store instructions. So AMX intrinsics only operate on type x86_amx. This helps to separate AMX intrinsics from LLVM IR instructions (+-*/). Thanks to Craig for the idea. This patch depends on https://reviews.llvm.org/D87981.
Differential Revision: https://reviews.llvm.org/D91927
|
b5f23189 | 30-Nov-2020 | Nikita Popov <[email protected]>
[DL] Inline getAlignmentInfo() implementation (NFC)
Apart from getting the entry in the table (which is already a separate function), the remaining logic is different for all alignment types and is better combined with getAlignment().
This is a minor efficiency improvement, and should make further improvements like using separate storage for different alignment types simpler.
|
891170e8 | 29-Nov-2020 | Nikita Popov <[email protected]>
[DL] Optimize address space zero lookup (NFC)
Information for pointer size/alignment/etc is queried a lot, but the binary search based implementation makes this fairly slow.
Add an explicit check for address space zero and skip the search in that case -- we need to specially handle the zero address space anyway, as it serves as the fallback for all address spaces that were not explicitly defined.
I initially wanted to simply replace the binary search with a linear search, which would handle both address space zero and the general case efficiently, but I was not sure whether there are any degenerate targets that use more than a handful of declared address spaces (in-tree, even AMDGPU only declares six).
|
3bc41575 | 20-Nov-2020 | Alex Richardson <[email protected]>
Add a default address space for globals to DataLayout
This is similar to the existing alloca and program address spaces (D37052) and should be used when creating/accessing global variables. We need this in our CHERI fork of LLVM to place all globals in address space 200. This ensures that values are accessed using CHERI load/store instructions instead of the normal MIPS/RISC-V ones.
The problem this is trying to fix is that most of the time the type of globals is created using a simple PointerType::getUnqual() (or ::get() with the default address-space value of 0). This does not work for us and we get assertion/compilation/instruction selection failures whenever a new call is added that uses the default value of zero.
In our fork we have removed the default parameter value of zero for most address space arguments and use DL.getProgramAddressSpace() or DL.getGlobalsAddressSpace() whenever possible. If this change is accepted, I will upstream follow-up patches to use DL.getGlobalsAddressSpace() instead of relying on the default value of 0 for PointerType::get(), etc.
This patch and the follow-up changes will not have any functional changes for existing backends with the default globals address space of zero. A follow-up commit will change the default globals address space for AMDGPU to 1.
Reviewed By: dylanmckay
Differential Revision: https://reviews.llvm.org/D70947
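A sketch of the intended call sites. The commit message calls the accessor DL.getGlobalsAddressSpace(); the spelling assumed below, getDefaultGlobalsAddressSpace(), is the one used in later trees:

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"

using namespace llvm;

PointerType *typeForGlobal(const DataLayout &DL, Type *ValueTy) {
  // Instead of PointerType::getUnqual(ValueTy), which hard-codes AS 0.
  return PointerType::get(ValueTy, DL.getDefaultGlobalsAddressSpace());
}
```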
|
|
Revision tags: llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3, llvmorg-11.0.0-rc2 |
|
f4257c58 | 14-Aug-2020 | David Sherwood <[email protected]>
[SVE] Make ElementCount members private
This patch changes ElementCount so that the Min and Scalable members are now private and can only be accessed via the get functions getKnownMinValue() and isScalable(). In addition I've added some other member functions for more commonly used operations. Hopefully this makes the class more useful and will reduce the need for calling getKnownMinValue().
Differential Revision: https://reviews.llvm.org/D86065
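A sketch of the accessor-based interface (factory and query names as in llvm/Support/TypeSize.h):

```cpp
#include "llvm/Support/TypeSize.h"

using namespace llvm;

// <vscale x 4 x i32> -> getKnownMinValue() == 4, isScalable() == true;
// <4 x i32>          -> exactly 4 lanes,         isScalable() == false.
ElementCount doubleLanes(ElementCount EC) {
  unsigned Min = 2 * EC.getKnownMinValue();
  return EC.isScalable() ? ElementCount::getScalable(Min)
                         : ElementCount::getFixed(Min);
}
```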
|
874aef87 | 17-Aug-2020 | Alex Zinenko <[email protected]>
[llvm] support graceful failure of DataLayout parsing
Existing implementation always aborts on syntax errors in a DataLayout description. While this is meaningful for consuming textual IR modules, it is inconvenient for users that may need fine-grained control over the layout from, e.g., command-line options. Propagate errors through the parsing functions and only abort in the top-level parsing function instead.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D85650
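A usage sketch of the non-aborting path, with DataLayout::parse() returning Expected&lt;DataLayout&gt; as the entry point this patch adds:

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

bool validateLayoutString(StringRef Desc) {
  Expected<DataLayout> MaybeDL = DataLayout::parse(Desc);
  if (!MaybeDL) {
    // Report instead of aborting -- useful for layouts coming from
    // command-line options rather than textual IR.
    errs() << "invalid datalayout: " << toString(MaybeDL.takeError()) << "\n";
    return false;
  }
  return true;
}
```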
|
|
Revision tags: llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3 |
|
572dde55 | 02-Jul-2020 | jasonliu <[email protected]>
[XCOFF][AIX] Use 'L..' instead of '.L' for getPrivateGlobalPrefix in DataLayout
Summary: D80831 changed part of the prefix usage for AIX. But there are other places getting prefix from DataLayout. This patch intends to make prefix usage consistent on AIX.
Reviewed by: hubert.reinterpretcast, daltenty
Differential Revision: https://reviews.llvm.org/D81270
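A sketch of the consumer side. The getPrivateGlobalPrefix() accessor on DataLayout is assumed here; with this change it yields "L.." for XCOFF/AIX rather than ".L":

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/DataLayout.h"

using namespace llvm;

bool isPrivateLabel(const DataLayout &DL, StringRef Name) {
  // "L.." on AIX/XCOFF after this change, ".L" on ELF targets.
  return Name.starts_with(DL.getPrivateGlobalPrefix());
}
```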
|
d3085c25 | 01-Jul-2020 | Guillaume Chatelet <[email protected]>
[Alignment][NFC] Transition and simplify calls to DL::getABITypeAlignment
This patch is part of a series to introduce an Alignment type. See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Differential Revision: https://reviews.llvm.org/D82956
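The Align-returning spelling that callers move onto looks like this (a minimal sketch):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/Alignment.h"

using namespace llvm;

Align abiAlignOf(const DataLayout &DL, Type *Ty) {
  // Replaces the unsigned-returning getABITypeAlignment(Ty).
  return DL.getABITypeAlign(Ty);
}
```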
|
368a5e3a | 29-Jun-2020 | Guillaume Chatelet <[email protected]>
[Alignment][NFC] migrate DataLayout::getPreferredAlignment
This patch is part of a series to introduce an Alignment type. See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Differential Revision: https://reviews.llvm.org/D82752
|