While investigating another issue, I noticed that `MaybeReexec()` never
actually "re-executes via `execv()`" anymore. `DyldNeedsEnvVariable()`
only returned true on macOS 10.10 and below.
Usually, I try to avoid "unnecessary" cleanups (it's hard to be certain
that there truly is no fallout), but I decided to do this one because:
* I initially tricked myself into thinking that `MaybeReexec()` was
relevant to my original investigation (instead of being dead code).
* The deleted code itself is quite complicated.
* Over time a few other things were mushed into `MaybeReexec()`:
initializing `MonotonicNanoTime()`, verifying interceptors are
working, and stripping the `DYLD_INSERT_LIBRARIES` env var to avoid
problems when forking.
* This platform-specific thing leaked into `sanitizer_common.h`.
* The `ReexecDisabled()` config knob relies on the "strong overrides weak"
  pattern, which is now problematic and can be completely removed (see the
  sketch after this list).
* `ReexecDisabled()` actually hid another issue with interceptors not
working in unit tests. I added an explicit `verify_interceptors`
(defaults to `true`) option instead.
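A minimal sketch of the "strong overrides weak" pattern (illustrative, not the exact sanitizer sources):
  // runtime.cpp: the runtime provides a weak default.
  extern "C" __attribute__((weak)) bool ReexecDisabled() {
    return false;  // by default, re-exec is allowed
  }
  // test.cpp: a test binary flips the behavior simply by linking in a strong
  // definition of the same symbol; the strong definition wins at link time.
  extern "C" bool ReexecDisabled() { return true; }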
Differential Revision: https://reviews.llvm.org/D129157
Add a missing "#if !SANITIZER_GO" guard for a call to DumpProcessMap
in the Finalize hook (needed to build an updated Go race detector syso
image).
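The change is essentially the following (hedged sketch; the exact code around the DumpProcessMap call in tsan's Finalize may differ):
  #if !SANITIZER_GO
    // Not built/needed for the Go race detector runtime.
    if (common_flags()->print_module_map == 1)
      DumpProcessMap();
  #endif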
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D128641
This is a follow-up to [Sanitizers][Darwin] Rename Apple macro SANITIZER_MAC -> SANITIZER_APPLE (D125816).
Performed a global search-and-replace, as in the title, against the LLVM sources.
Differential Revision: https://reviews.llvm.org/D126263
[NFC] As part of using inclusive language within the llvm project, this
patch rewords comments to remove "sanity check" and "sanity test" from
compiler-rt/lib/tsan.
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D124390
This is a follow up to 4f3f4d6722
("sanitizer_common: fix __sanitizer_get_module_and_offset_for_pc signature mismatch")
which fixes a similar problem for msan build.
I am getting the following error when compiling a unit test for code that
uses sanitizer_common headers, while googletest transitively includes the
sanitizer interface headers:
In file included from third_party/gwp_sanitizers/singlestep_test.cpp:3:
In file included from sanitizer_common/sanitizer_common.h:19:
sanitizer_interface_internal.h:41:5: error: typedef redefinition with different types
('struct __sanitizer_sandbox_arguments' vs 'struct __sanitizer_sandbox_arguments')
} __sanitizer_sandbox_arguments;
common_interface_defs.h:39:3: note: previous definition is here
} __sanitizer_sandbox_arguments;
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D119546
Currently we use very common names for macros like ACQUIRE/RELEASE,
which cause conflicts with system headers.
Prefix all macros with SANITIZER_ to avoid conflicts.
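For example (a hedged sketch of the rename, not the literal contents of the sanitizer headers; these are clang's thread-safety annotation attributes):
  // Before: these clashed with ACQUIRE/RELEASE macros from some system headers.
  //   #define ACQUIRE(...) __attribute__((acquire_capability(__VA_ARGS__)))
  //   #define RELEASE(...) __attribute__((release_capability(__VA_ARGS__)))
  // After: prefixed, so collisions with system headers are impossible.
  #define SANITIZER_ACQUIRE(...) __attribute__((acquire_capability(__VA_ARGS__)))
  #define SANITIZER_RELEASE(...) __attribute__((release_capability(__VA_ARGS__)))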
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D116652
If there are multiple processes, it's hard to understand
what output comes from what process.
VReport prepends pid to the output. Use it.
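For example (hedged; child_pid and the message text are illustrative): with verbosity >= 1, VReport goes through Report(), which prefixes each line with ==<pid>==, so interleaved output stays attributable:
  VReport(1, "ThreadSanitizer: forked new process with pid %d\n", (int)child_pid);
  // parent: ==12345==ThreadSanitizer: forked new process with pid 12346
  // child:  ==12346==ThreadSanitizer: ...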
Depends on D113982.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113983
Update "now" after long operations so that we don't use
a stale value in subsequent computations.
Depends on D113981.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113982
Creating threads after a multi-threaded fork is semi-supported:
we don't give particular guarantees, but we try not to fail
on simple cases, and the die_after_fork=0 flag enables
not dying on creation of threads after a multi-threaded fork.
This flag is used in the wild:
23c052e3e3/SConstruct (L3599)
The fork_multithreaded.cpp test started hanging in debug mode
after the recent "tsan: fix deadlock during race reporting" commit,
which added a proactive ThreadRegistryLock check in SlotLock.
But the test broke earlier, after the "tsan: remove quadratic behavior in pthread_join"
commit, which made tracking of alive threads based on pthread_t stricter
(CHECK-fail on 2 threads with the same pthread_t, or joining a non-existent thread).
When we start a thread after a multi-threaded fork, the new pthread_t
can actually match one of the existing values (for threads that don't exist anymore).
Thread creation started CHECK-failing on this, but the test simply
ignored this CHECK failure in the child thread and "passed".
But after "tsan: fix deadlock during race reporting" the test started hanging dead,
because CHECK failures recursively lock thread registry.
Fix this purging all alive threads from thread registry on fork.
Also the thread registry mutex somehow lost the internal deadlock detector id
and was excluded from deadlock detection. If it would have the id, the CHECK
wouldn't hang because of the nested CHECK failure due to the deadlock.
But then again the test would have silently ignore this error as well
and the bugs wouldn't have been noticed.
Add the deadlock detector id to the thread registry mutex.
Also extend the test to check more cases and detect more bugs.
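Roughly the scenario the test exercises (a simplified, standalone sketch rather than the actual fork_multithreaded.cpp; run under tsan with die_after_fork=0):
  #include <pthread.h>
  #include <sys/wait.h>
  #include <unistd.h>
  static void *Busy(void *) { for (;;) sleep(1); return nullptr; }
  static void *Noop(void *) { return nullptr; }
  int main() {
    pthread_t busy;
    pthread_create(&busy, nullptr, Busy, nullptr);  // make the parent multi-threaded
    pid_t pid = fork();
    if (pid == 0) {
      // In the child the new pthread_t may collide with a stale value from the
      // parent, so the registry must have been purged of the parent's threads.
      pthread_t t;
      pthread_create(&t, nullptr, Noop, nullptr);
      pthread_join(t, nullptr);
      _exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
  }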
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D116091
There is a small chance that the slot is not queued in TraceSwitchPart.
This can happen if the slot has kEpochLast epoch and another thread
in FindSlotAndLock discovered that it's exhausted and removed it from
the slot queue. kEpochLast can happen in 2 cases: (1) if TraceSwitchPart
was called with the slot locked and epoch already at kEpochLast,
or (2) if we've acquired a new slot in SlotLock at the beginning
of the function and the slot was at kEpochLast - 1, so after the increment
in SlotAttachAndLock it becomes kEpochLast.
If this happens we crash on ctx->slot_queue.Remove(thr->slot).
Skip the requeueing if the slot is not queued.
The slot is exhausted, so it must not be in ctx->slot_queue.
The existing stress test triggers this with very small probability.
I am not sure how to make this condition more likely to be triggered;
it evaded lots of testing.
Depends on D116040.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D116041
SlotPairLocker calls SlotLock under ctx->multi_slot_mtx.
SlotLock can invoke global reset DoReset if we are out of slots/epochs.
But DoReset locks ctx->multi_slot_mtx as well, which leads to deadlock.
Resolve the deadlock by removing SlotPairLocker/multi_slot_mtx
and locking only the one slot for which we will do RestoreStack.
We need to lock that slot because RestoreStack accesses the slot journal.
But it's unclear why we need to lock the current slot.
Initially I did it just to be on the safe side (but at that time
we did not lock the second slot, so it was easy to just lock the current slot).
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D116040
This change switches tsan to the new runtime which features:
- 2x smaller shadow memory (2x of app memory)
- faster fully vectorized race detection
- small fixed-size vector clocks (512b)
- fast vectorized vector clock operations
- unlimited number of alive threads/goroutines
Depends on D112602.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D112603
We call UnmapShadow before the actual munmap; at that point we don't yet
know if the provided address/size are sane. We can't call UnmapShadow
after the actual munmap because at that point the memory range can
already be reused for something else, so we can't rely on the munmap
return value to understand if the values are sane.
While calling munmap with insane values (non-canonical address, negative
size, etc.) is an error, the kernel won't crash. We must also try not to
crash, as the failure mode is very confusing (a page fault inside the
runtime on some derived shadow address).
Such invalid arguments are observed on Chromium tests:
https://bugs.chromium.org/p/chromium/issues/detail?id=1275581
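For instance, a call of this kind must not crash inside the runtime even though the kernel simply rejects it (a hedged example of the class of calls, not the exact Chromium code):
  #include <cstddef>
  #include <sys/mman.h>
  int main() {
    // Non-canonical address and a bogus (effectively negative) length: the
    // kernel returns EINVAL, and UnmapShadow must not touch shadow memory
    // derived from these values.
    munmap(reinterpret_cast<void *>(0xdeadbeefdeadbeefULL), static_cast<size_t>(-1));
    return 0;
  }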
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D114944
This change switches tsan to the new runtime which features:
- 2x smaller shadow memory (2x of app memory)
- faster fully vectorized race detection
- small fixed-size vector clocks (512b)
- fast vectorized vector clock operations
- unlimited number of alive threads/goroutines
Depends on D112602.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D112603
Linux/fork_deadlock.cpp currently hangs in debug mode in the following stack.
Disable memory access handling in OnUserAlloc/Free around fork.
1 0x000000000042c54b in __sanitizer::internal_sched_yield () at sanitizer_linux.cpp:452
2 0x000000000042da15 in __sanitizer::StaticSpinMutex::LockSlow (this=0x57ef02 <__sanitizer::internal_allocator_cache_mu>) at sanitizer_mutex.cpp:24
3 0x0000000000423927 in __sanitizer::StaticSpinMutex::Lock (this=0x57ef02 <__sanitizer::internal_allocator_cache_mu>) at sanitizer_mutex.h:32
4 0x000000000042354c in __sanitizer::GenericScopedLock<__sanitizer::StaticSpinMutex>::GenericScopedLock (this=this@entry=0x7ffcabfca0b8, mu=0x1) at sanitizer_mutex.h:367
5 0x0000000000423653 in __sanitizer::RawInternalAlloc (size=size@entry=72, cache=cache@entry=0x0, alignment=8, alignment@entry=0) at sanitizer_allocator.cpp:52
6 0x00000000004235e9 in __sanitizer::InternalAlloc (size=size@entry=72, cache=0x1, cache@entry=0x0, alignment=4, alignment@entry=0) at sanitizer_allocator.cpp:86
7 0x000000000043aa15 in __sanitizer::SymbolizedStack::New (addr=4802655) at sanitizer_symbolizer.cpp:45
8 0x000000000043b353 in __sanitizer::Symbolizer::SymbolizePC (this=0x7f578b77a028, addr=4802655) at sanitizer_symbolizer_libcdep.cpp:90
9 0x0000000000439dbe in __sanitizer::(anonymous namespace)::StackTraceTextPrinter::ProcessAddressFrames (this=this@entry=0x7ffcabfca208, pc=4802655) at sanitizer_stacktrace_libcdep.cpp:36
10 0x0000000000439c89 in __sanitizer::StackTrace::PrintTo (this=this@entry=0x7ffcabfca2a0, output=output@entry=0x7ffcabfca260) at sanitizer_stacktrace_libcdep.cpp:109
11 0x0000000000439fe0 in __sanitizer::StackTrace::Print (this=0x18) at sanitizer_stacktrace_libcdep.cpp:132
12 0x0000000000495359 in __sanitizer::PrintMutexPC (pc=4802656) at tsan_rtl.cpp:774
13 0x000000000042e0e4 in __sanitizer::InternalDeadlockDetector::Lock (this=0x7f578b1ca740, type=type@entry=2, pc=pc@entry=4371612) at sanitizer_mutex.cpp:177
14 0x000000000042df65 in __sanitizer::CheckedMutex::LockImpl (this=<optimized out>, pc=4) at sanitizer_mutex.cpp:218
15 0x000000000042bc95 in __sanitizer::CheckedMutex::Lock (this=0x600001000000) at sanitizer_mutex.h:127
16 __sanitizer::Mutex::Lock (this=0x600001000000) at sanitizer_mutex.h:165
17 0x000000000042b49c in __sanitizer::GenericScopedLock<__sanitizer::Mutex>::GenericScopedLock (this=this@entry=0x7ffcabfca370, mu=0x1) at sanitizer_mutex.h:367
18 0x000000000049504f in __tsan::TraceSwitch (thr=0x7f578b1ca980) at tsan_rtl.cpp:656
19 0x000000000049523e in __tsan_trace_switch () at tsan_rtl.cpp:683
20 0x0000000000499862 in __tsan::TraceAddEvent (thr=0x7f578b1ca980, fs=..., typ=__tsan::EventTypeMop, addr=4499472) at tsan_rtl.h:624
21 __tsan::MemoryAccessRange (thr=0x7f578b1ca980, pc=4499472, addr=135257110102784, size=size@entry=16, is_write=true) at tsan_rtl_access.cpp:563
22 0x000000000049853a in __tsan::MemoryRangeFreed (thr=thr@entry=0x7f578b1ca980, pc=pc@entry=4499472, addr=addr@entry=135257110102784, size=16) at tsan_rtl_access.cpp:487
23 0x000000000048f6bf in __tsan::OnUserFree (thr=thr@entry=0x7f578b1ca980, pc=pc@entry=4499472, p=p@entry=135257110102784, write=true) at tsan_mman.cpp:260
24 0x000000000048f61f in __tsan::user_free (thr=thr@entry=0x7f578b1ca980, pc=4499472, p=p@entry=0x7b0400004300, signal=true) at tsan_mman.cpp:213
25 0x000000000044a820 in __interceptor_free (p=0x7b0400004300) at tsan_interceptors_posix.cpp:708
26 0x00000000004ad599 in alloc_free_blocks () at fork_deadlock.cpp:25
27 __tsan_test_only_on_fork () at fork_deadlock.cpp:32
28 0x0000000000494870 in __tsan::ForkBefore (thr=0x7f578b1ca980, pc=pc@entry=4904437) at tsan_rtl.cpp:510
29 0x000000000046fcb4 in syscall_pre_fork (pc=1) at tsan_interceptors_posix.cpp:2577
30 0x000000000046fc9b in __sanitizer_syscall_pre_impl_fork () at sanitizer_common_syscalls.inc:3094
31 0x00000000004ad5f5 in myfork () at syscall.h:9
32 main () at fork_deadlock.cpp:46
Depends on D114595.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D114597
Now that we lock the internal allocator around fork,
it's possible it will create additional deadlocks.
Add a fake mutex that substitutes the internal allocator
for the purposes of deadlock detection.
Depends on D114531.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D114532
There is a small chance that the internal allocator is locked
during fork and then the new process is created with locked
internal allocator and any attempts to use it will deadlock.
For example, if we detected a suppressed race in the parent during fork
and then another suppressed race after the fork.
This becomes much more likely with the new tsan runtime
as it uses the internal allocator for more things.
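The general shape of the fix (a hedged sketch using pthread_atfork and a plain pthread mutex as a stand-in; the runtime uses its own fork hooks and its own allocator mutex):
  #include <pthread.h>
  // Stands in for the internal allocator lock.
  static pthread_mutex_t g_alloc_mu = PTHREAD_MUTEX_INITIALIZER;
  static void BeforeFork() { pthread_mutex_lock(&g_alloc_mu); }        // hold the lock across fork
  static void AfterForkParent() { pthread_mutex_unlock(&g_alloc_mu); } // parent releases it
  static void AfterForkChild() { pthread_mutex_unlock(&g_alloc_mu); }  // child starts with it unlocked
  int main() {
    pthread_atfork(BeforeFork, AfterForkParent, AfterForkChild);
    return 0;
  }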
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D114531
This change switches tsan to the new runtime which features:
- 2x smaller shadow memory (2x of app memory)
- faster fully vectorized race detection
- small fixed-size vector clocks (512b)
- fast vectorized vector clock operations
- unlimited number of alive threads/goroutines
Differential Revision: https://reviews.llvm.org/D112603
This change switches tsan to the new runtime which features:
- 2x smaller shadow memory (2x of app memory)
- faster fully vectorized race detection
- small fixed-size vector clocks (512b)
- fast vectorized vector clock operations
- unlimited number of alive threads/goroutines
Depends on D112602.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D112603
Add a fork test that models what happens on Mac
where fork calls malloc/free inside of our atfork
callbacks.
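A rough standalone sketch of the scenario (not the actual test): the atfork callbacks themselves call malloc/free, so the fork handling must tolerate allocator re-entry:
  #include <pthread.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>
  static void AllocFree() { free(malloc(64)); }  // runs inside fork, in all three hooks
  int main() {
    pthread_atfork(AllocFree, AllocFree, AllocFree);
    pid_t pid = fork();
    if (pid == 0) _exit(0);
    int status = 0;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
  }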
Reviewed By: vitalybuka, yln
Differential Revision: https://reviews.llvm.org/D114250
We used to mmap C++ shadow stack as part of the trace region
before ed7f3f5bc9 ("tsan: move shadow stack into ThreadState"),
which moved the shadow stack into TLS. This started causing
timeouts and OOMs on some of our internal tests that repeatedly
create and destroy thousands of threads.
Allocate C++ shadow stack with mmap and small pages again.
This prevents the observed timeouts and OOMs.
But we now need to be more careful with interceptors that
run after thread finalization because FuncEntry/Exit and
TraceAddEvent all need the shadow stack.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113786
This change switches tsan to the new runtime which features:
- 2x smaller shadow memory (2x of app memory)
- faster fully vectorized race detection
- small fixed-size vector clocks (512b)
- fast vectorized vector clock operations
- unlimited number of alive threads/goroutines
Depends on D112602.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D112603
Start the background thread only after fork, but not after clone.
For fork we have always done this and it's known to work (or user code has adapted).
But if we do this for the new clone interceptor, some code (sandbox2) fails.
So keep the model we have used for years and don't start the background thread after clone.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113744
tsan_rtl.cpp is huge and does lots of things.
Move everything related to memory access and tracing
to a separate tsan_rtl_access.cpp file.
No functional changes, only code movement.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D112625
Whenever we call cur_thread_init, we call cur_thread on the next line.
So make cur_thread_init return the current thread directly.
Makes code a bit shorter, does not affect codegen.
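A hedged sketch of the signature change (simplified; the real declarations live in tsan_rtl.h):
  struct ThreadState;
  ThreadState *cur_thread();       // returns the calling thread's state
  ThreadState *cur_thread_init();  // initializes it if needed, then returns it
  // Before:                           After:
  //   cur_thread_init();                ThreadState *thr = cur_thread_init();
  //   ThreadState *thr = cur_thread();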
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D110384
Commit 354ded67b3 ("tsan: align ThreadState to cache line")
was incomplete: it marked ThreadState as cache-line-aligned,
but the thread-local ThreadState instance is declared
as an aligned char array with a hard-coded 64-byte alignment.
On PowerPC the cache line size is 128 bytes, so the hard-coded
64-byte alignment is not enough.
Use cache line alignment consistently.
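A hedged sketch of the consistent version (simplified; the runtime uses its own ALIGNED macro and the real ThreadState): the alignment comes from one cache-line constant on both the type and its backing storage.
  #ifndef SANITIZER_CACHE_LINE_SIZE        // assumed constant name; 128 on PowerPC, 64 elsewhere
  #  define SANITIZER_CACHE_LINE_SIZE 64
  #endif
  struct alignas(SANITIZER_CACHE_LINE_SIZE) ThreadState {
    unsigned fast_state;  // hot fields are meant to share the first cache line
  };
  // The raw thread-local storage repeats the same constant instead of a hard-coded 64.
  alignas(SANITIZER_CACHE_LINE_SIZE) static thread_local
      char cur_thread_placeholder[sizeof(ThreadState)];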
Differential Revision: https://reviews.llvm.org/D110629
There are 2 reasons to do this:
1. We place hot data in the first cache line of ThreadState;
this assumes that it's cache-line-aligned, but we never actually
enforced it (or it was lost at some point).
2. The new vector clock uses vector instructions and requires
data alignment. Later the new vector clock will be embedded in
ThreadState, then ensuring vector clock alignment will be
impossible w/o ThreadState alignment.
Depends on D110519.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D110520
Currently the shadow stack is located in the trace memory mapping.
The new tsan runtime will remove the trace memory mapping.
Move the shadow stack into ThreadState as a preparation step.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D110519
Remove nmissed_expected variable.
It's a leftover from removed "expected race" feature and is never incremented.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D110321
Write uptime in real time seconds for every mem profile record.
Uptime is useful to make more sense out of the profile,
compare random lines, etc.
Depends on D110153.
Reviewed By: melver, vitalybuka
Differential Revision: https://reviews.llvm.org/D110154
We do query it every 100ms now.
(GetRSS was fixed to not be dead slow IIRC)
Depends on D110152.
Reviewed By: melver, vitalybuka
Differential Revision: https://reviews.llvm.org/D110153
The BackgroundThread function is quite large;
move mem profile initialization into a separate function.
Depends on D110151.
Reviewed By: melver, vitalybuka
Differential Revision: https://reviews.llvm.org/D110152
We currently query number of threads before reading /proc/self/smaps.
But reading /proc/self/smaps can take lots of time for huge processes,
and it's retried several times with different buffer sizes.
Overall it can take tens of seconds. This can make the number of threads
significantly inconsistent with the rest of the stats.
So query it after reading /proc/self/smaps.
Depends on D110149.
Reviewed By: melver, vitalybuka
Differential Revision: https://reviews.llvm.org/D110150
dlsym calls into the dynamic linker, which calls malloc and other things.
It's problematic to do this during the actual exit, because
exit can happen from a signal handler or from within the runtime
after we reported the first bug, etc.
See https://github.com/google/sanitizers/issues/1440 for an example
(captured in the added test).
Initialize the callbacks during startup instead.
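The pattern is roughly the following (illustrative only; the hook name and storage are hypothetical, not the actual tsan callbacks):
  #include <dlfcn.h>
  using exit_hook_fn = void (*)();
  static exit_hook_fn exit_hook;  // resolved once, early
  // Called from the runtime's initialization path, where entering the dynamic
  // linker (and its malloc) is still safe.
  static void InitializeExitHooks() {
    exit_hook = reinterpret_cast<exit_hook_fn>(
        dlsym(RTLD_DEFAULT, "example_exit_hook"));  // hypothetical symbol name
  }
  // Called during exit (possibly from a signal handler or after a report):
  // it only reads the cached pointer and never calls dlsym.
  static void CallExitHooks() {
    if (exit_hook) exit_hook();
  }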
Depends on D110159.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D110166
Some of the DPrintf calls currently produce -Wformat warnings if enabled.
Fix these format strings.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D110131
Add structures for the new trace format,
functions that serialize and add events to the trace,
and the trace replaying logic.
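For flavor, one possible shape of such an event (a hedged sketch; field names and bit widths here are assumptions, not the exact tsan_trace.h definitions):
  #include <cstdint>
  // A memory-access event packed into a single 64-bit word, so appending to
  // the per-thread trace is a single store.
  struct EventAccess {
    uint64_t is_access : 1;  // distinguishes access events from other event kinds
    uint64_t is_read : 1;
    uint64_t is_atomic : 1;
    uint64_t size_log : 2;   // access size is 1 << size_log bytes
    uint64_t pc_delta : 15;  // PC stored as a delta from the previous event's PC
    uint64_t addr : 44;      // compressed application address
  };
  static_assert(sizeof(EventAccess) == 8, "trace events must stay one word");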
Differential Revision: https://reviews.llvm.org/D107911