Commit f9a52600 authored by Benoit Lize, committed by Commit Bot

[PartitionAlloc] Release the partition lock before OOM.

On some platforms, the OOM path calls free(), which then deadlocks, since
the partition lock is still held on this path.

Release it before OOM.

Bug: 998048
Change-Id: I6aec2b8aa7e4ce64f2b9754284ec8c42ee2bdc2c
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2410872
Reviewed-by: Kentaro Hara <haraken@chromium.org>
Commit-Queue: Benoit L <lizeb@chromium.org>
Cr-Commit-Position: refs/heads/master@{#807371}
parent 838f53a6
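
To make the failure mode concrete, here is a minimal, self-contained C++ sketch of the deadlock the message describes. Every name in it (g_partition_lock, PartitionFree, OnOutOfMemory, SlowPathAlloc as written here) is an illustrative stand-in, not actual PartitionAlloc code: the slow allocation path holds a non-recursive lock, the out-of-memory handler ends up calling free(), and free() tries to take the same lock again, so the thread waits on itself forever.

// Minimal sketch of the deadlock described above. All names are illustrative
// stand-ins, not actual PartitionAlloc symbols.
#include <cstddef>
#include <cstdlib>
#include <mutex>

std::mutex g_partition_lock;  // Stands in for the non-recursive partition lock.

void PartitionFree(void* /*ptr*/) {
  // free() re-enters the allocator and takes the partition lock.
  std::lock_guard<std::mutex> guard(g_partition_lock);
  // ... return the allocation to its bucket ...
}

[[noreturn]] void OnOutOfMemory() {
  // On some platforms, terminating the process releases buffers, i.e. calls
  // free(). The partition lock is not recursive, so if the caller below still
  // holds it, this thread blocks forever waiting on itself.
  PartitionFree(nullptr);
  std::abort();
}

void* SlowPathAlloc(std::size_t /*size*/) {
  std::lock_guard<std::mutex> guard(g_partition_lock);  // Held on the OOM path.
  void* ptr = nullptr;  // Pretend the allocation failed.
  if (!ptr)
    OnOutOfMemory();  // Deadlocks unless the lock is released first.
  return ptr;
}

The change below avoids this by dropping the lock, via a scoped guard, right before the never-returning OOM report.
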
@@ -577,7 +577,30 @@ void* PartitionBucket<thread_safe>::SlowPathAlloc(
    if (size > kMaxDirectMapped) {
      if (return_null)
        return nullptr;
      // The lock is here to protect PA from:
      // 1. Concurrent calls
      // 2. Reentrant calls
      //
      // This is fine here however, as:
      // 1. Concurrency: |PartitionRoot::OutOfMemory()| never returns, so the
      //    lock will not be re-acquired, which would lead to acting on
      //    inconsistent data that could have been modified in-between
      //    releasing and acquiring it.
      // 2. Reentrancy: This is why we release the lock. On some platforms,
      //    terminating the process may free() memory, or even possibly try
      //    to allocate some. Calling free() is fine, but will deadlock since
      //    |PartitionRoot::lock_| is not recursive.
      //
      // Supporting reentrant calls properly is hard, and not a requirement
      // for PA. However up to that point, we've only *read* data, not
      // *written* to any state. Reentrant calls are then fine, especially as
      // we don't continue on this path. The only downside is possibly
      // endless recursion if the OOM handler allocates and fails to use
      // UncheckedMalloc() or equivalent, but that's violating the contract
      // of base::OnNoMemoryInternal().
      ScopedUnlockGuard<thread_safe> unlock{root->lock_};
      PartitionExcessiveAllocationSize(size);
      IMMEDIATE_CRASH();  // Not required, kept as documentation.
    }
    new_page = PartitionDirectMap(root, flags, size);
    if (new_page)
@@ -636,7 +659,10 @@ void* PartitionBucket<thread_safe>::SlowPathAlloc(
              PartitionPage<thread_safe>::get_sentinel_page());
    if (return_null)
      return nullptr;
    // See comment above.
    ScopedUnlockGuard<thread_safe> unlock{root->lock_};
    root->OutOfMemory(size);
    IMMEDIATE_CRASH();  // Not required, kept as documentation.
  }
  PA_DCHECK(new_page_bucket != get_sentinel_bucket());
@@ -39,6 +39,20 @@ class SCOPED_LOCKABLE ScopedGuard {
  MaybeSpinLock<thread_safe>& lock_;
};

template <bool thread_safe>
class SCOPED_LOCKABLE ScopedUnlockGuard {
 public:
  explicit ScopedUnlockGuard(MaybeSpinLock<thread_safe>& lock)
      UNLOCK_FUNCTION(lock)
      : lock_(lock) {
    lock_.Unlock();
  }
  ~ScopedUnlockGuard() EXCLUSIVE_LOCK_FUNCTION() { lock_.Lock(); }

 private:
  MaybeSpinLock<thread_safe>& lock_;
};

#if !DCHECK_IS_ON()
// Spinlock. Do not use, to be removed. crbug.com/1061437.
class BASE_EXPORT SpinLock {
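
To show how the new guard is meant to be used, here is a hedged, self-contained sketch of the lock/unlock guard pair with simplified stand-in types: std::mutex instead of MaybeSpinLock<thread_safe>, the Clang thread-safety annotations omitted, and NeverReturningErrorReport() invented for the example. The real ScopedUnlockGuard additionally carries UNLOCK_FUNCTION / EXCLUSIVE_LOCK_FUNCTION so that -Wthread-safety knows the lock is released inside the guard's scope and re-held after it.

// Self-contained sketch of the lock/unlock guard pair. Stand-in types only;
// the real code uses MaybeSpinLock<thread_safe> and Clang thread-safety
// annotations.
#include <cstdlib>
#include <mutex>

class ScopedGuard {
 public:
  explicit ScopedGuard(std::mutex& lock) : lock_(lock) { lock_.lock(); }
  ~ScopedGuard() { lock_.unlock(); }

 private:
  std::mutex& lock_;
};

// Inverse of ScopedGuard: expects the lock to be held, releases it for the
// lifetime of the guard, and re-acquires it on destruction.
class ScopedUnlockGuard {
 public:
  explicit ScopedUnlockGuard(std::mutex& lock) : lock_(lock) { lock_.unlock(); }
  ~ScopedUnlockGuard() { lock_.lock(); }

 private:
  std::mutex& lock_;
};

[[noreturn]] void NeverReturningErrorReport() {
  // Hypothetical: on some platforms this may free() or allocate before
  // terminating the process.
  std::abort();
}

void SlowPath(std::mutex& lock, bool out_of_memory) {
  ScopedGuard guard(lock);  // The lock is held from here on.
  if (out_of_memory) {
    // Release the lock so that any free() done while terminating does not
    // deadlock trying to re-acquire it.
    ScopedUnlockGuard unlock(lock);
    NeverReturningErrorReport();
  }
  // Reached only when |out_of_memory| is false; the lock is still held here
  // and is released by ~ScopedGuard on return.
}

The design choice is symmetry: ScopedGuard acquires on entry and releases on exit, while ScopedUnlockGuard does the inverse, so the "lock is held" invariant is restored automatically at the end of the error-handling scope.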