Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2

# abb82572 | 25-Nov-2021 | Dmitry Vyukov <[email protected]>
tsan: optimize __tsan_read/write16

These callbacks are used for SSE vector accesses. In some computational programs these accesses dominate. Currently we do 2 uninlined 8-byte accesses to handle them. Inline and optimize them similarly to unaligned accesses. This reduces the vector access benchmark time from 8 to 3 seconds.
Depends on D112603.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D114594
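
As a rough C++ sketch of the shape of this change (not the actual runtime code: CheckShadowFast and MemoryAccessSlow are hypothetical stand-ins for tsan's internal shadow helpers), the 16-byte callback now checks both halves inline and only calls out on the slow path:

```cpp
#include <cstdint>

using uptr = std::uintptr_t;

// Hypothetical helpers, stubbed so the sketch compiles: the fast check
// returns true when the access was fully handled inline.
inline bool CheckShadowFast(uptr addr, uptr size, bool is_write) {
  (void)addr; (void)size; (void)is_write;
  return true;
}
void MemoryAccessSlow(uptr pc, uptr addr, uptr size, bool is_write) {
  (void)pc; (void)addr; (void)size; (void)is_write;
}

// Instead of two uninlined 8-byte calls, handle both 8-byte halves of the
// SSE access inline and take the out-of-line path only when a check fails.
extern "C" void __tsan_read16_sketch(void *addr) {
  uptr a = (uptr)addr;
  if (__builtin_expect(!CheckShadowFast(a, 8, false) ||
                       !CheckShadowFast(a + 8, 8, false), 0))
    MemoryAccessSlow((uptr)__builtin_return_address(0), a, 16, false);
}
```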

Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4

# a0ed71ff | 24-Sep-2021 | Dmitry Vyukov <[email protected]>
tsan: make cur_thread_init return cur_thread

Whenever we call cur_thread_init, we call cur_thread on the next line. So make cur_thread_init return the current thread directly. Makes code a bit shorter, does not affect codegen.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D110384
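
A compilable sketch of the refactor, assuming a placeholder ThreadState (the real type and its lazy initialization live in the tsan runtime):

```cpp
struct ThreadState { bool inited = false; };  // placeholder, not tsan's

static thread_local ThreadState tls_thread;

ThreadState *cur_thread() { return &tls_thread; }

// Before: void cur_thread_init(); every caller invoked cur_thread() on the
// next line. After: initialization hands back the thread directly.
ThreadState *cur_thread_init() {
  ThreadState *thr = cur_thread();
  if (!thr->inited) thr->inited = true;  // stand-in for the real lazy init
  return thr;
}

// Call sites shrink from two statements to one; codegen is unchanged.
void interceptor_prologue() {
  ThreadState *thr = cur_thread_init();
  (void)thr;
}
```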

Revision tags: llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1

# d77b476c | 02-Aug-2021 | Dmitry Vyukov <[email protected]>
tsan: avoid extra call indirection in unaligned access functions

Currently unaligned access functions are defined in tsan_interface.cpp and do a real call to MemoryAccess. This means we have a real call and no read/write constant propagation.
Unaligned memory access can be quite hot for some programs (observed on some compression algorithms with ~90% of unaligned accesses).
Move them to tsan_interface_inl.h to avoid the additional call and enable constant propagation. Also reorder the actual store and memory access handling for __sanitizer_unaligned_store callbacks to enable tail calling in MemoryAccess.
Depends on D107282.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D107283
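
A hedged sketch of the reordering for one store callback (UnalignedMemoryAccess is a stand-in for the runtime's handler): doing the application's store first leaves the handler call in tail position.

```cpp
#include <cstdint>
#include <cstring>

using uptr = std::uintptr_t;
using u16 = std::uint16_t;

// Stand-in for the runtime's handler, stubbed so the sketch compiles.
void UnalignedMemoryAccess(uptr pc, uptr addr, uptr size, bool is_write) {
  (void)pc; (void)addr; (void)size; (void)is_write;
}

// With the store done first, the handler call is the last action, so the
// compiler can lower it to a tail call (a plain jump, no extra frame).
extern "C" void __sanitizer_unaligned_store16_sketch(void *p, u16 v) {
  std::memcpy(p, &v, sizeof(v));
  UnalignedMemoryAccess((uptr)__builtin_return_address(0), (uptr)p,
                        sizeof(v), /*is_write=*/true);
}
```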

# 831910c5 | 02-Aug-2021 | Dmitry Vyukov <[email protected]>
tsan: new MemoryAccess interface

Currently we have a MemoryAccess function that accepts "bool kAccessIsWrite, bool kIsAtomic" and 4 wrappers: MemoryRead/MemoryWrite/MemoryReadAtomic/MemoryWriteAtomic.
Such a scheme with bool flags is not particularly scalable/extendable. Because of that we did not have Read/Write wrappers for UnalignedMemoryAccess, and "true, false" or "false, true" at call sites is not very readable.
Moreover, the new tsan runtime will introduce more flags (e.g. move "freed" and "vptr access" to memory access flags). We can't have 16 wrappers, and each flag also takes a whole 64-bit register for non-inlined calls.
Introduce an AccessType enum that contains a bit mask of read/write, atomic/non-atomic, and later free/non-free, vptr/non-vptr. Such a scheme is more scalable, more readable, and more efficient (it does not consume multiple registers for these flags during calls), and it covers the unaligned and range variations of the memory access functions as well.
Also switch from size log to just size. The new tsan runtime won't have the limitation of supporting only 1/2/4/8 access sizes, so we don't need the logarithms.
Also add an inline thunk that converts the new interface to the old one. For inlined calls it should not add any overhead because all flags/sizes can be computed at compile time.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D107276
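
A sketch of the AccessType idea with illustrative flag values (the runtime's actual constants and signatures may differ):

```cpp
#include <cstdint>

using uptr = std::uintptr_t;

enum AccessType : std::uint32_t {
  kAccessWrite  = 0,       // the common case is "no bits set"
  kAccessRead   = 1 << 0,
  kAccessAtomic = 1 << 1,
  kAccessFree   = 1 << 2,  // later additions mentioned above
  kAccessVptr   = 1 << 3,
};

// One function taking a flags word and a plain size (not a size log)
// replaces the four bool-parameter wrappers.
void MemoryAccess(void *thr, uptr pc, uptr addr, uptr size, AccessType typ);

// Old-style wrapper kept as an inline thunk: once inlined, the flag
// computation constant-folds away, so the thunk costs nothing.
inline void MemoryReadAtomic(void *thr, uptr pc, uptr addr, uptr size) {
  MemoryAccess(thr, pc, addr, size, AccessType(kAccessRead | kAccessAtomic));
}
```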

Revision tags: llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4

# 4220531c | 15-Mar-2021 | Daniel Kiss <[email protected]>
[AArch64][compiler-rt] Strip PAC from the link register.

-mbranch-protection protects the LR on the stack with PAC. When the frames are walked, the LR needs to be cleared. This inline assembly will later be replaced with a new builtin.
Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard".
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D98008
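
A minimal sketch of the stripping itself, assuming AArch64 GNU inline assembly (the helper name is hypothetical, and as the message says, a builtin later replaces this):

```cpp
#include <cstdint>

// Strip the pointer-authentication code from a return-address value.
static inline std::uint64_t StripPacFromLr(std::uint64_t lr) {
#if defined(__aarch64__)
  // "hint 0x7" is the encoding of xpaclri, which strips the PAC from x30;
  // the hint form also assembles (as a NOP) on pre-v8.3 toolchains.
  register std::uint64_t x30 __asm__("x30") = lr;
  __asm__("hint 0x7" : "+r"(x30));
  return x30;
#else
  return lr;  // nothing to strip on other targets
#endif
}
```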

# c1940aac | 18-Mar-2021 | Daniel Kiss <[email protected]>
Revert "[AArch64][compiler-rt] Strip PAC from the link register."
This reverts commit ad40453fc425ee8e1fe43c7bb6e3c1c3afa9cc3b.

# ad40453f | 15-Mar-2021 | Daniel Kiss <[email protected]>
[AArch64][compiler-rt] Strip PAC from the link register.

-mbranch-protection protects the LR on the stack with PAC. When the frames are walked, the LR needs to be cleared. This inline assembly will later be replaced with a new builtin.
Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard".
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D98008

Revision tags: llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1, llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3

# 6760f7ee | 29-Aug-2020 | Marco Vanotti <[email protected]>
[compiler-rt][tsan] Remove unnecessary typedefs
These typedefs are not used anywhere else in this compilation unit.
Differential Revision: https://reviews.llvm.org/D86826

# e713b0ec | 25-Aug-2020 | Kuba Mracek <[email protected]>
[tsan] On arm64e, strip out ptrauth bits from incoming PCs
Differential Revision: https://reviews.llvm.org/D86378
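
A sketch of the idea using the <ptrauth.h> intrinsics available with ptrauth-enabled clang (the helper name here is hypothetical):

```cpp
#include <cstdint>

#ifndef __has_feature
#define __has_feature(x) 0  // non-clang compilers
#endif

#if __has_feature(ptrauth_calls)
#include <ptrauth.h>
#endif

// On arm64e an incoming pc may carry a signature in its high bits; strip
// it before using the value as a plain address.
static inline std::uintptr_t StripPtrauthBits(std::uintptr_t pc) {
#if __has_feature(ptrauth_calls)
  return (std::uintptr_t)ptrauth_strip((void *)pc, ptrauth_key_return_address);
#else
  return pc;
#endif
}
```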

Revision tags: llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2, llvmorg-10.0.1-rc1, llvmorg-10.0.0, llvmorg-10.0.0-rc6, llvmorg-10.0.0-rc5, llvmorg-10.0.0-rc4, llvmorg-10.0.0-rc3, llvmorg-10.0.0-rc2, llvmorg-10.0.0-rc1, llvmorg-11-init, llvmorg-9.0.1, llvmorg-9.0.1-rc3, llvmorg-9.0.1-rc2, llvmorg-9.0.1-rc1, llvmorg-9.0.0, llvmorg-9.0.0-rc6, llvmorg-9.0.0-rc5

# c0fa6322 | 11-Sep-2019 | Vitaly Buka <[email protected]>
Remove NOLINTs from compiler-rt
llvm-svn: 371687

Revision tags: llvmorg-9.0.0-rc4, llvmorg-9.0.0-rc3, llvmorg-9.0.0-rc2

# 5a3bb1a4 | 01-Aug-2019 | Nico Weber <[email protected]>
compiler-rt: Rename .cc file in lib/tsan/rtl to .cpp
Like r367463, but for tsan/rtl.
llvm-svn: 367564