An In-Depth Look at Go's Mutex
Basic usage
A critical section restricts a resource so that only one goroutine can hold it at a time. Common usage patterns:
- Call Lock/Unlock directly around the critical section in business code
- Embed Mutex in a struct and call Lock/Unlock through the struct's field
- Embed Mutex in the struct and lock/unlock inside the resource's own methods, so that callers never need to think about locking
When sharing resources it is easy to introduce a data race. The race detector (the -race flag) can catch these, and integrating it into continuous integration helps reduce such bugs.
A look at the implementation
The first version of Mutex
The first version used a lock-holding flag and a semaphore to implement mutual exclusion, with CAS instructions performing the atomic updates.
- Field key: a flag that identifies whether the lock is held by some goroutine. If key is greater than or equal to 1, the lock is held; key records both whether the lock is held and the number of goroutines holding or waiting to acquire it.
- Field sema: a semaphore variable, used to block (sleep) and wake up the waiting goroutines.
Unlock can be called by any goroutine to release the lock, even a goroutine that does not hold the Mutex. This is because Mutex itself records no information about which goroutine holds it, so Unlock performs no ownership check. This design has been kept to this day.
For the same reason, it is possible for one goroutine to release another goroutine's lock inside an if branch: the goroutine that releases the lock does not have to be the one holding it.
```go
func lockTest() {
	lock()
	var count bool
	// ...
	if count {
		unlock()
	}
	// If the branch above ran, this goroutine no longer holds the
	// lock, so this unlock may release a lock acquired by some
	// other goroutine.
	unlock()
}
```
Four common usage mistakes
Lock and Unlock do not appear in pairs (one is omitted, or accidentally deleted)
Copying a Mutex that is already in use
```go
type Counter struct {
	sync.Mutex
	Count int
}

func main() {
	var c Counter
	c.Lock()
	defer c.Unlock()
	c.Count++
	foo(c) // copies the lock
}

// The Counter parameter is passed by value here, copying the lock
func foo(c Counter) {
	c.Lock()
	defer c.Unlock()
	fmt.Println("in foo")
}
```
Why can't it be copied?
Because Mutex is a stateful object: its state field records the state of the lock. If you copy a Mutex variable that is already locked into a new variable, the "newly initialized" variable is actually already locked, which is obviously not what you expect.
Reentrancy
- What a reentrant lock is
When a thread acquires a lock and no other thread owns it, the acquisition succeeds. After that, requests for the lock from other threads block. But if the thread that already owns the lock requests it again, it is not blocked and succeeds immediately; hence the name reentrant lock.
Mutex is not reentrant
This is not surprising once you think about it: the implementation of Mutex does not record which goroutine owns the lock, and in theory any goroutine can call Lock or Unlock at will, so there is no way to detect the reentry condition.
```go
func foo(l sync.Locker) {
	fmt.Println("in foo")
	l.Lock()
	bar(l)
	l.Unlock()
}

// bar re-acquires the lock foo already holds; since Mutex is not
// reentrant, this deadlocks
func bar(l sync.Locker) {
	l.Lock()
	fmt.Println("in bar")
	l.Unlock()
}

func main() {
	l := &sync.Mutex{}
	foo(l)
}
```
Implementing a reentrant lock yourself
- Option 1: by goroutine id
```go
// RecursiveMutex wraps a Mutex to make it reentrant
// (goid.Get comes from github.com/petermattis/goid)
type RecursiveMutex struct {
	sync.Mutex
	owner     int64 // goroutine id of the current holder
	recursion int32 // how many times that goroutine has re-entered
}

func (m *RecursiveMutex) Lock() {
	gid := goid.Get()
	// If the goroutine holding the lock is the caller, this is a re-entry
	if atomic.LoadInt64(&m.owner) == gid {
		m.recursion++
		return
	}
	m.Mutex.Lock()
	// First acquisition by this goroutine: record its id, set the count to 1
	atomic.StoreInt64(&m.owner, gid)
	m.recursion = 1
}

func (m *RecursiveMutex) Unlock() {
	gid := goid.Get()
	// A goroutine that does not hold the lock tried to release it: misuse
	if atomic.LoadInt64(&m.owner) != gid {
		panic(fmt.Sprintf("wrong owner(%d): %d!", m.owner, gid))
	}
	// Decrement the re-entry count
	m.recursion--
	if m.recursion != 0 {
		// This goroutine has not fully released the lock yet
		return
	}
	// Last Unlock by this goroutine: actually release the lock
	atomic.StoreInt64(&m.owner, -1)
	m.Mutex.Unlock()
}
```
Note that although the owner can call Lock multiple times, it must call Unlock the same number of times in order to actually release the lock. This is a reasonable design: it guarantees a one-to-one correspondence between Lock and Unlock.
- Option 2: by token
This is similar to the goroutine id approach. Since goroutine ids are deliberately not exposed, the designers clearly do not want us to rely on them, and the id is only used as an identifier for the reentrant lock anyway. So we can use a custom identifier instead, supplied by the calling goroutine itself: pass a freshly generated token into both Lock and Unlock, and the rest of the logic is the same.
Deadlock
The four necessary conditions:
- Mutual exclusion: resources are held exclusively
- Circular wait: the waiting goroutines form a cycle
- Hold and wait: a goroutine holds one resource while waiting for others
- No preemption: a resource can only be released by the goroutine that holds it
Breaking any one of these conditions removes the deadlock.
Extending Mutex
- Implement TryLock
- Expose metrics such as the number of waiters
- Implement a thread-safe queue using Mutex
Implementation principles of the read-write lock, and how to avoid its pitfalls
RWMutex in the standard library is a reader/writer mutex. At any moment it can be held either by an arbitrary number of readers or by a single writer.
It is built on top of Mutex. If your scenario clearly distinguishes reader goroutines from writer goroutines, has many concurrent reads, few concurrent writes, and strong performance requirements, consider replacing Mutex with the read-write lock RWMutex.
Implementation of read-write lock
- Read-preferring: provides high concurrency, but may lead to write starvation under heavy contention. If there is a steady stream of readers, the writer can only acquire the lock after all readers have released it.
- Write-preferring: if a writer is already waiting for the lock, new readers requesting the lock are prevented from acquiring it, so the writer goes first. Of course, readers that already hold the lock are unaffected: a newly arriving writer still waits for the existing readers to release it. The priority rule thus applies only to new requests. This design mainly avoids writer starvation.
- Unspecified priority: the simplest design; readers and writers are not prioritized. In some scenarios this is more effective, because read-preferring can lead to write starvation and write-preferring can lead to read starvation, while giving everyone the same priority avoids starvation altogether.
The RWMutex in the Go standard library uses the write-preferring scheme: a blocked Lock call excludes new readers from acquiring the lock.
Three pitfalls of RWMutex
- It is not copyable
- Reentrancy leads to deadlock
- Releasing an RWMutex that is not locked
The reentrancy deadlock works like this. We know that while there are active readers, a writer waits. If a reader calls a write operation (the Lock method) in the middle of its read, the reader and the writer become interdependent and deadlock: the writer waits for the reader to finish and release the read lock, while the reader waits for its Lock call to return before it can continue. This is a common deadlock scenario with read-write locks.
The third deadlock scenario is more subtle. When a writer requests the lock while there are already active readers, it waits for those readers to finish before acquiring it. But if the active readers depend on new readers that arrive later, those new readers will in turn wait for the writer to release the lock before they can proceed, forming a circular dependency: the writer depends on the active readers -> the active readers depend on the new readers -> the new readers depend on the writer.