# de36e804 | 11-Nov-2014 | Duncan P. N. Exon Smith <[email protected]>
Revert "IR: MDNode => Value"
Instead, we're going to separate metadata from the Value hierarchy. See PR21532.
This reverts commit r221375. This reverts commit r221373. This reverts commit r221359.
Revert "IR: MDNode => Value"
Instead, we're going to separate metadata from the Value hierarchy. See PR21532.
This reverts commit r221375. This reverts commit r221373. This reverts commit r221359. This reverts commit r221167. This reverts commit r221027. This reverts commit r221024. This reverts commit r221023. This reverts commit r220995. This reverts commit r220994.
llvm-svn: 221711
# 3d5a02f6 | 03-Nov-2014 | Duncan P. N. Exon Smith <[email protected]>
IR: MDNode => Value: Instruction::getAllMetadataOtherThanDebugLoc()
Change `Instruction::getAllMetadataOtherThanDebugLoc()` from a vector of `MDNode` to one of `Value`. Part of PR21433.
llvm-svn: 221167
# 60d87e72 | 21-Oct-2014 | Duncan P. N. Exon Smith <[email protected]>
IR: Remove dead code in metadata bitcode writing, NFC
No one cares how many uses each metadata value has, so don't bother counting.
llvm-svn: 220337
# c00017d1 | 15-Oct-2014 | Sanjay Patel <[email protected]>
correct const-ness with auto and dyn_cast
1. Use const with autos.
2. Don't bother with explicit const in cast ops because they do it automagically.
Thanks, David B. / Aaron B. / Reid K.
llvm-svn: 219817
# 473e7fdb | 15-Oct-2014 | Sanjay Patel <[email protected]>
Use 'auto' for easier reading; no functional change intended.
llvm-svn: 219804
Revision tags: llvmorg-3.5.0, llvmorg-3.5.0-rc4, llvmorg-3.5.0-rc3, llvmorg-3.5.0-rc2
# 1f66c856 | 28-Jul-2014 | Duncan P. N. Exon Smith <[email protected]>
Bitcode: Serialize (and recover) use-list order
Predict and serialize use-list order in bitcode. This makes the option `-preserve-bc-use-list-order` work *most* of the time, but this is still experimental.
- Builds a full value-table up front in the writer, sets up a list of use-list orders to write out, and discards the table. This is a simpler first step than determining the order from the various overlapping IDs of values on-the-fly.
- The shuffles stored in the use-list order list have an unnecessarily large memory footprint.
- `blockaddress` expressions cause functions to be materialized out-of-order. For now I've ignored this problem, so use-list orders will be wrong for constants used by functions that have block addresses taken. There are a couple of ways to fix this, but I don't have a concrete plan yet.
- When materializing functions lazily, the use-lists for constants will not be correct. This use case is out of scope: what should the use-list order be, if it's incomplete?
This is part of PR5680.
llvm-svn: 214125
# 6b6fdc99 | 25-Jul-2014 | Duncan P. N. Exon Smith <[email protected]>
IPO: Add use-list-order verifier
Add a -verify-use-list-order pass, which shuffles use-list order, writes to bitcode, reads back, and verifies that the (shuffled) order matches.
- The utility functions live in lib/IR/UseListOrder.cpp.
- Moved (and renamed) the command-line option to enable writing use-lists, so that this pass can return early if the use-list orders aren't being serialized.
It's not clear that this pass is the right direction long-term (perhaps a separate tool instead?), but short-term it's a great way to test the use-list order prototype. I've added an XFAIL-ed testcase that I'm hoping to get working pretty quickly.
This is part of PR5680.
llvm-svn: 213945
Revision tags: llvmorg-3.5.0-rc1
# b0407ba0 | 18-Jul-2014 | Hal Finkel <[email protected]>
Add a dereferenceable attribute
This attribute indicates that the parameter or return pointer is dereferenceable. Practically speaking, loads from such a pointer within the associated byte range are safe to speculatively execute. Such pointer parameters are common in source languages (C++ references, for example).
llvm-svn: 213385
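An illustrative sketch (not part of the commit): roughly how the attribute reads in the typed-pointer IR syntax of that era, with hypothetical names.

    ; %p is dereferenceable for at least 16 bytes, so a load from it within
    ; that range is safe to speculate.
    define i32 @read_first(i32* dereferenceable(16) %p) {
      %v = load i32* %p, align 4
      ret i32 %v
    }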
# e15442c8 | 18-Jul-2014 | Hal Finkel <[email protected]>
Rename AlignAttribute to IntAttribute
Currently the only kind of integer IR attributes that we have are alignment attributes, and so the attribute kind that takes an integer parameter is called AlignAttr, but that will change (we'll soon be adding a dereferenceable attribute that also takes an integer value). Accordingly, rename AlignAttribute to IntAttribute (class names, enums, etc.).
No functionality change intended.
llvm-svn: 213352
# 56b56ea1 | 16-Jul-2014 | Reid Kleckner <[email protected]>
Roundtrip the inalloca bit on allocas through bitcode
This was an oversight in the original support. As it is, I stuffed this bit into the alignment. The alignment is stored in log2 form, so it doesn't need more than 5 bits, given that Value::MaximumAlignment is 1 << 29.
Reviewers: nicholas
Differential Revision: http://reviews.llvm.org/D3943
llvm-svn: 213118
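For context, a minimal sketch (not from the patch) of the kind of alloca whose inalloca bit now survives the bitcode round trip; the name is hypothetical.

    ; Outgoing argument memory for an x86 stdcall-style call, marked inalloca.
    %args = alloca inalloca <{ i32 }>, align 4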
# dad0a645 | 27-Jun-2014 | David Majnemer <[email protected]>
IR: Add COMDATs to the IR
This new IR facility allows us to represent the object-file semantic of a COMDAT group.
COMDATs allow us to tie together sections and make the inclusion of one dependent on another. This is required to implement features like MS ABI VFTables and optimizing away certain kinds of initialization in C++.
This functionality is only representable in COFF and ELF; Mach-O has no similar mechanism.
Differential Revision: http://reviews.llvm.org/D4178
llvm-svn: 211920
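A rough sketch of the facility in textual IR (hypothetical names, written in the later parenthesized syntax; the exact assembly form has evolved since this commit):

    ; The global and the function belong to one COMDAT group keyed on $group;
    ; the linker keeps or discards the members together.
    $group = comdat any
    @key = global i32 1, comdat($group)
    define void @helper() comdat($group) {
      ret void
    }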
# 420a2168 | 13-Jun-2014 | Tim Northover <[email protected]>
IR: add "cmpxchg weak" variant to support permitted failure.
This commit adds a weak variant of the cmpxchg operation, as described in C++11. A cmpxchg instruction with this modifier is permitted to
IR: add "cmpxchg weak" variant to support permitted failure.
This commit adds a weak variant of the cmpxchg operation, as described in C++11. A cmpxchg instruction with this modifier is permitted to fail to store, even if the comparison indicated it should.
As a result, cmpxchg instructions must return a flag indicating success in addition to their original iN value loaded. Thus, for uniformity *all* cmpxchg instructions now return "{ iN, i1 }". The second flag is 1 when the store succeeded.
At the DAG level, a new ATOMIC_CMP_SWAP_WITH_SUCCESS node has been added as the natural representation for the new cmpxchg instructions. It is a strong cmpxchg.
By default this gets Expanded to the existing ATOMIC_CMP_SWAP during Legalization, so existing backends should see no change in behaviour. If they wish to deal with the enhanced node instead, they can call setOperationAction on it. Beware: as a node with 2 results, it cannot be selected from TableGen.
Currently, no use is made of the extra information provided in this patch. Test updates are almost entirely adapting the input IR to the new scheme.
Summary for out of tree users:
+ Legacy Bitcode files are upgraded during read.
+ Legacy assembly IR files will be invalid.
+ Front-ends must adapt to the different type for "cmpxchg".
+ Backends should be unaffected by default.
llvm-svn: 210903
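A minimal sketch (not from the commit) of the weak form and its { iN, i1 } result, using hypothetical value names and the typed-pointer syntax of the time:

    ; May spuriously fail to store even when the comparison succeeds.
    %pair    = cmpxchg weak i32* %addr, i32 %expected, i32 %new seq_cst seq_cst
    %loaded  = extractvalue { i32, i1 } %pair, 0   ; value read from memory
    %success = extractvalue { i32, i1 } %pair, 1   ; 1 iff the store happened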
# 42a4c9f9 | 06-Jun-2014 | Rafael Espindola <[email protected]>
Allow aliases to be unnamed_addr.
Aliases with unnamed_addr were in a strange state: the bit is stored in GlobalValue, and the language reference talks about "unnamed_addr aliases", but the verifier was rejecting them.
It seems natural to allow unnamed_addr in aliases:
* It is a property of how the object is accessed, not of the data itself.
* It is perfectly possible to write code that depends on the address of an alias.
This patch therefore makes unnamed_addr legal for aliases. One side effect is that the syntax changes for a corner case: in globals, unnamed_addr is now printed before the address space.
llvm-svn: 210302
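An illustrative sketch of what is now accepted (hypothetical names; the alias syntax and keyword placement have shifted in later releases):

    ; The alias's address is not significant, so equal-content folding is allowed.
    @data  = private unnamed_addr constant i32 7
    @alias = unnamed_addr alias i32* @data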
# 44cb65ff | 05-Jun-2014 | Tom Roeder <[email protected]>
Add a new attribute called 'jumptable' that creates jump-instruction tables for functions marked with this attribute. It includes a pass that rewrites all indirect calls to jumptable functions to pass through these tables.
This also adds backend support for generating the jump-instruction tables on ARM and X86. Note that since the jumptable attribute creates a second function pointer for a function, any function marked with jumptable must also be marked with unnamed_addr.
llvm-svn: 210280
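A small sketch (not from the commit) of a function carrying the new attribute; as noted above, jumptable functions must also be unnamed_addr. Names are hypothetical.

    ; Indirect calls to @target are rewritten to go through a jump-instruction
    ; table entry instead of the function's own address.
    define i32 @target(i32 %x) unnamed_addr jumptable {
      ret i32 %x
    }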
# 59f7eba2 | 28-May-2014 | Rafael Espindola <[email protected]>
[pr19844] Add thread local mode to aliases.
This matches gcc's behavior. It also seems natural given that aliases contain other properties that govern how they are accessed (linkage, visibility, dll storage).
Clang still has to be updated to expose this feature to C.
llvm-svn: 209759
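A minimal sketch (hypothetical names, era syntax) of an alias carrying a thread-local mode:

    ; The alias states the same thread-local mode as the variable it aliases.
    @tls_var   = thread_local(initialexec) global i32 0
    @tls_alias = thread_local(initialexec) alias i32* @tls_var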
# acef6c77 | 26-May-2014 | Rafael Espindola <[email protected]>
Convert a few loops to use ranges.
llvm-svn: 209628
# d52b1528 | 20-May-2014 | Nick Lewycky <[email protected]>
Add 'nonnull', a new parameter and return attribute which indicates that the pointer is not null. Instcombine will elide comparisons between these and null. Patch by Luqman Aden!
llvm-svn: 209185
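An illustrative sketch (not from the commit) showing nonnull on both a parameter and a return value; names are hypothetical.

    ; Instcombine can fold "icmp eq i8* %p, null" to false when %p is nonnull.
    define nonnull i8* @passthrough(i8* nonnull %p) {
      ret i8* %p
    }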
Revision tags: llvmorg-3.4.2, llvmorg-3.4.2-rc1
# 1f10c5ea | 01-May-2014 | Michael J. Spencer <[email protected]>
[IR] Make {extract,insert}element accept an index of any integer type.
Given the following C code, llvm currently generates suboptimal code for x86-64:
__m128 bss4( const __m128 *ptr, size_t i, size_t j )
{
    float f = ptr[i][j];
    return (__m128) { f, f, f, f };
}
=================================================
define <4 x float> @_Z4bss4PKDv4_fmm(<4 x float>* nocapture readonly %ptr, i64 %i, i64 %j) #0 {
  %a1 = getelementptr inbounds <4 x float>* %ptr, i64 %i
  %a2 = load <4 x float>* %a1, align 16, !tbaa !1
  %a3 = trunc i64 %j to i32
  %a4 = extractelement <4 x float> %a2, i32 %a3
  %a5 = insertelement <4 x float> undef, float %a4, i32 0
  %a6 = insertelement <4 x float> %a5, float %a4, i32 1
  %a7 = insertelement <4 x float> %a6, float %a4, i32 2
  %a8 = insertelement <4 x float> %a7, float %a4, i32 3
  ret <4 x float> %a8
}
=================================================
shlq    $4, %rsi
addq    %rdi, %rsi
movslq  %edx, %rax
vbroadcastss    (%rsi,%rax,4), %xmm0
retq
=================================================
The movslq is unneeded, but is present because of the trunc to i32 and then sext back to i64 that the backend adds for vbroadcastss.
We can't remove it because it changes the meaning. The IR that clang generates is already suboptimal. What clang really should emit is:
%a4 = extractelement <4 x float> %a2, i64 %j
This patch makes that legal. A separate patch will teach clang to do it.
Differential Revision: http://reviews.llvm.org/D3519
llvm-svn: 207801
Revision tags: llvmorg-3.4.1, llvmorg-3.4.1-rc2
# 5772b777 | 24-Apr-2014 | Reid Kleckner <[email protected]>
Add 'musttail' marker to call instructions
This is similar to the 'tail' marker, except that it guarantees that tail call optimization will occur. It also comes with conservative IR verification rules that ensure that tail call optimization is possible.
Reviewers: nicholas
Differential Revision: http://llvm-reviews.chandlerc.com/D3240
llvm-svn: 207143
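A minimal sketch (not from the commit) of the marker in use; the verifier additionally checks, among other things, that the caller and callee prototypes and calling conventions are compatible. Names are hypothetical.

    define i32 @wrapper(i32 %x) {
      %r = musttail call i32 @callee(i32 %x)
      ret i32 %r
    }
    declare i32 @callee(i32)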
# 2617dcce | 15-Apr-2014 | Craig Topper <[email protected]>
[C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206252
Revision tags: llvmorg-3.4.1-rc1
# 2fb5bc33 | 13-Mar-2014 | Rafael Espindola <[email protected]>
Remove the linker_private and linker_private_weak linkages.
These linkages were introduced some time ago, but it was never very clear what exactly their semantics were or what they should be used for. Some investigation found these uses:
* utf-16 strings in clang.
* non-unnamed_addr strings produced by the sanitizers.
It turns out they were just working around a more fundamental problem. For some sections a MachO linker needs a symbol in order to split the section into atoms, and llvm had no idea that was the case. I fixed that in r201700 and it is now safe to use the private linkage. When the object ends up in a section that requires symbols, llvm will use a 'l' prefix instead of a 'L' prefix and things just work.
With that, these linkages were already dead, but there was a potential future user in the objc metadata information. I am still looking at CGObjcMac.cpp, but at this point I am convinced that linker_private and linker_private_weak are not what they need.
The objc uses are currently split in
* Regular symbols (no '\01' prefix). LLVM already directly provides whatever semantics they need.
* Uses of a private name (start with "\01L" or "\01l") and private linkage. We can drop the "\01L" and "\01l" prefixes as soon as llvm agrees with clang on L being ok or not for a given section. I have two patches in code review for this.
* Uses of private name and weak linkage.
The last case is the one that one could think would fit one of these linkages. That is not the case. The semantics are
* the linker will merge these symbols by *name*.
* the linker will hide them in the final DSO.
Given that the merging is done by name, any of the private (or internal) linkages would be a bad match. They allow llvm to rename the symbols, and that is really not what we want. From the llvm point of view, these objects should really be (linkonce|weak)(_odr)?.
For now, just keeping the "\01l" prefix is probably the best for these symbols. If we one day want more direct support in llvm, IMHO what we should add is not a linkage but a hidden_symbol attribute. It would be applicable to multiple linkages. For example, on weak it would produce the current behavior we have for objc metadata. On internal, it would be equivalent to private (and we should then remove private).
llvm-svn: 203866
# e94a518a | 11-Mar-2014 | Tim Northover <[email protected]>
IR: add a second ordering operand to cmpxchg for failure
The syntax for "cmpxchg" should now look something like:
cmpxchg i32* %addr, i32 42, i32 3 acquire monotonic
where the second ordering argument gives the required semantics in the case that no exchange takes place. It should be no stronger than the first ordering constraint and cannot be either "release" or "acq_rel" (since no store will have taken place).
rdar://problem/15996804
llvm-svn: 203559
# cdf47884 | 09-Mar-2014 | Chandler Carruth <[email protected]>
[C++11] Add range based accessors for the Use-Def chain of a Value.
This requires a number of steps.
1) Move value_use_iterator into the Value class as an implementation detail.
2) Change it to actually be a *Use* iterator rather than a *User* iterator.
3) Add an adaptor which is a User iterator that always looks through the Use to the User.
4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
5) Add the range adaptors as Value::uses() and Value::users().
6) Update *all* of the callers to correctly distinguish between whether they wanted a use_iterator (and to explicitly dig out the User when needed), or a user_iterator which makes the Use itself totally opaque.
Because #6 requires churning essentially everything that walked the Use-Def chains, I went ahead and added all of the range adaptors and switched them to range-based loops where appropriate. Also because the renaming requires at least churning every line of code, it didn't make any sense to split these up into multiple commits -- all of which would touch all of the same lines of code.
The result is still not quite optimal. The Value::use_iterator is a nice regular iterator, but Value::user_iterator is an iterator over User*s rather than over the User objects themselves. As a consequence, it fits a bit awkwardly into the range-based world and it has the weird extra-dereferencing 'operator->' that so many of our iterators have. I think this could be fixed by providing something which transforms a range of T&s into a range of T*s, but that *can* be separated into another patch, and it isn't yet 100% clear whether this is the right move.
However, this change gets us most of the benefit and cleans up a substantial amount of code around Use and User. =]
llvm-svn: 203364
# f863ee29 | 25-Feb-2014 | Rafael Espindola <[email protected]>
Store a DataLayout in Module.
Now that DataLayout is not a pass, store one in Module.
Since the C API expects to be able to get a char* to the datalayout description, we have to keep a std::string somewhere. This patch keeps it in Module and also uses it to represent modules without a DataLayout.
Once DataLayout is mandatory, we should probably move the string to DataLayout itself since it won't be necessary anymore to represent the special case of a module without a DataLayout.
llvm-svn: 202190
# 7157bb76 | 14-Jan-2014 | Nico Rieck <[email protected]>
Decouple dllexport/dllimport from linkage
Representing dllexport/dllimport as distinct linkage types prevents using these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and weak ODR, the old import/export linkage types are replaced with a new separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage without dllexport. Imported globals and functions must be either declarations with external linkage, or definitions with AvailableExternallyLinkage.
llvm-svn: 199218