Go Standard Library Study Notes - Concurrency and Synchronization

Overview

The sync package provides basic concurrency and synchronization primitives. Concurrent programming is far too large a topic to cover in a single blog post, so this article focuses on how to use the sync package itself.

Before that, though, a brief bit of background on concurrency. In a single-threaded program only one thread accesses the data at any given time, access is always sequential, and no extra mechanism is needed. But when multiple threads may access the same piece of data at the same time, the way threads are scheduled can produce unpredictable results. Consider what the following code will print. The expected answer is 3, but the actual result may be 2 or 3, because Add1 does not run atomically. For example, thread 1 may be suspended partway through Add1 (it has read the data but not yet written it back), thread 2 then starts and reads the same value, and the two threads finish in turn. Since both threads read data=1, both write back 2 after their increment, and the final value is 2 instead of 3.

package main

import (
    "fmt"
    "time"
)

func Add1(data *int) {
    tmp := *data    // read
    *data = tmp + 1 // write back; the read and the write are not atomic together
}

func main() {
    data := 1
    go Add1(&data)
    go Add1(&data)
    time.Sleep(10 * time.Millisecond) // crude wait for both goroutines to finish
    fmt.Print(data)
}

The essence of concurrent programming is to carve small critical sections out of otherwise unordered code; within a critical section the program executes sequentially, which ensures that the code produces the expected result.

Mutex

sync.Mutex is the implementation of a mutual exclusion lock, the most classic synchronization model. Mutex has two methods, Lock and Unlock, which acquire and release the lock respectively. A mutex can only be locked once at a time: calling Lock on an already locked mutex blocks until the mutex is Unlocked. The declaration of Mutex reads as follows:

type Mutex struct {
        // contains filtered or unexported fields
}

func (m *Mutex) Lock()

func (m *Mutex) Unlock()

The earlier program can be made concurrency-safe by adding a lock, and it is recommended to use defer so the mutex is always unlocked, even if the function returns early or panics:

package main

import (
    "fmt"
    "sync"
    "time"
)

func Add1(data *int, mu *sync.Mutex) {
    mu.Lock()
    defer mu.Unlock() // guarantees the mutex is released
    tmp := *data
    *data = tmp + 1
}

func main() {
    var mu sync.Mutex
    data := 1
    go Add1(&data, &mu)
    go Add1(&data, &mu)
    time.Sleep(10 * time.Millisecond) // crude wait for both goroutines to finish
    fmt.Print(data)
}

Read-write lock

In fact, purely concurrent reads do not cause data inconsistency by themselves, and using a mutex in that case only adds waiting overhead. Besides the basic mutex, the sync package therefore also provides RWMutex. Unlike Mutex, which can be held by only one goroutine at a time, RWMutex allows the read lock to be held by any number of readers at once, so reads can proceed concurrently while writes remain exclusive; see sync.RWMutex for details.
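
As a minimal sketch of this idea (the Counter type and its fields are illustrative, not part of the sync package or this post), readers take the shared lock with RLock/RUnlock while the writer takes the exclusive lock with Lock/Unlock:

package main

import (
    "fmt"
    "sync"
    "time"
)

// Counter guards its value with a read-write lock (illustrative type).
type Counter struct {
    mu    sync.RWMutex
    value int
}

// Get takes the read lock, so any number of readers can run concurrently.
func (c *Counter) Get() int {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.value
}

// Inc takes the write lock, which excludes both readers and other writers.
func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func main() {
    var c Counter
    for i := 0; i < 5; i++ {
        go c.Inc()
        go func() { fmt.Println(c.Get()) }()
    }
    time.Sleep(10 * time.Millisecond) // crude wait, matching the earlier examples
    fmt.Println("final:", c.Get())
}

Read-heavy workloads benefit the most from this split, since only writes need exclusive access.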

Cond

sync.Cond implements a condition variable, another classic synchronization model. Functionally, a goroutine can either wait on a Cond or send a signal to it. The classic application scenario is the producer-consumer pattern: when the producer finishes producing an item it sends a signal to notify the consumer, and the consumer simply waits for that signal.

NewCond

A Cond object is initialized with NewCond, which binds it to a Locker (typically a *sync.Mutex).

type Cond struct {
        L Locker
}

func NewCond(l Locker) *Cond

Signal&Broadcast

Signal and Broadcast wake up goroutines waiting on the Cond. The difference is that Signal wakes at most one waiting goroutine, while Broadcast wakes all of them.

func (c *Cond) Signal()

func (c *Cond) Broadcast()

Wait

Wait blocks until woken by Signal or Broadcast. Note that Wait must be called with c.L held; it releases c.L while waiting and re-acquires it before returning. The declaration is as follows:

func (c *Cond) Wait()
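
A rough sketch of the producer-consumer scenario described above (the queue slice and the fixed item count are illustrative choices, not from the sync documentation): the consumer waits on the Cond inside a loop that re-checks the condition, and the producer signals after appending each item.

package main

import (
    "fmt"
    "sync"
)

func main() {
    mu := sync.Mutex{}
    cond := sync.NewCond(&mu)
    queue := []int{}
    done := make(chan struct{})

    // Consumer: wait until the queue is non-empty, then take one item.
    go func() {
        for taken := 0; taken < 3; taken++ {
            cond.L.Lock()
            for len(queue) == 0 { // re-check the condition after every wakeup
                cond.Wait() // releases the lock while waiting, re-acquires it before returning
            }
            item := queue[0]
            queue = queue[1:]
            cond.L.Unlock()
            fmt.Println("consumed", item)
        }
        close(done)
    }()

    // Producer: append items and signal the consumer after each one.
    for i := 1; i <= 3; i++ {
        cond.L.Lock()
        queue = append(queue, i)
        cond.L.Unlock()
        cond.Signal()
    }

    <-done
}

Waiting in a loop matters because, per the sync documentation, the caller cannot assume the condition holds when Wait returns; it must be checked again under the lock.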

WaitGroup

WaitGroup is similar in spirit to Cond, but where Cond waits for a single signal, WaitGroup waits for a whole group of tasks to finish.

Add

Add adds delta (which may be negative) to the WaitGroup counter. The method declaration is as follows:

func (wg *WaitGroup) Add(delta int)

Done

Each call to Done decrements the WaitGroup counter by 1:

func (wg *WaitGroup) Done()

Wait

Wait blocks until the WaitGroup counter reaches 0:

func (wg *WaitGroup) Wait()

Example

The following simple example illustrates the use of WaitGroup: create a WaitGroup, add 3 to its counter, run the handle function concurrently in three goroutines, and finally call Wait so that the main goroutine does not exit until all three have finished.

package main

import (
    "sync"
    "fmt"
)

func handle(wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Done Once")
}

func main() {
    wg := sync.WaitGroup{}
    wg.Add(3)
    for i := 0; i < 3; i++ {
        go handle(&wg)
    }

    wg.Wait()
}

Once

sync.Once guarantees that a function is executed exactly once. It has a single method, Do, which takes a no-argument function as its parameter; the first call to Do runs that function, and subsequent calls do nothing. The method declaration is as follows:

func (o *Once) Do(f func())

The following code is a simple use of Once: because the same Once value is reused across iterations, "Do Once" is printed only once, not 10 times.

var once sync.Once
for i := 0; i < 10; i++ {
    once.Do(func() { fmt.Println("Do Once") })
}

For more Go standard library content, see GitHub.
