Go Language Practice Notes (13) | Resource Competition in Go Concurrency

Where there is concurrency, there is resource competition. If two or more goroutines access a shared resource without synchronizing with each other, for example by reading and writing it at the same time, they are in a race with one another. This is resource competition (a data race) in concurrent code.

Concurrency itself is not complicated, but resource competition makes it hard to write correct concurrent programs, because it causes bugs that are difficult to explain and reproduce.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

var (
	count int32
	wg    sync.WaitGroup
)

func main() {
	wg.Add(2)
	go incCount()
	go incCount()
	wg.Wait()
	fmt.Println(count)
}

func incCount() {
	defer wg.Done()
	for i := 0; i < 2; i++ {
		value := count    // read the shared variable
		runtime.Gosched() // yield so the other goroutine gets a chance to run
		value++           // increment the local copy
		count = value     // write it back, possibly overwriting the other goroutine's update
	}
}

   

This is an example of resource competition. Run the program a few times and you will find that the result may be 2, 3, or 4. Because the shared variable count has no synchronization protection, both goroutines read and write it freely, so one goroutine's result can be overwritten by the other and the final value ends up wrong. Let's walk through one possible interleaving, calling the two goroutines g1 and g2.

  1. g1 reads count and gets 0.
  2. g1 is paused and g2 is scheduled; g2 also reads count as 0.
  3. g2 is paused and g1 resumes; g1 adds 1 to its copy and writes it back, so count becomes 1.
  4. g1 is paused and g2 resumes; g2 still holds the 0 it read earlier, adds 1, and writes 1 back to count.
  5. Notice what happened: g1's increment was just overwritten by g2. Both goroutines added 1, yet count is only 1.

There is no need to continue the walkthrough: the result is already wrong, because the two goroutines overwrite each other's updates. The call to runtime.Gosched() pauses the current goroutine, putting it back on the run queue so that other waiting goroutines can run; we use it here only to make the effect of the race easier to reproduce. In real programs the scheduler and multiple CPU cores running goroutines in parallel produce the same kind of interleaving, so the effects of resource competition show up just as readily.

Therefore, reads and writes of a shared resource must be atomic: only one goroutine may read or write the shared resource at a time.

 

Races on shared resources are subtle and hard to detect. Fortunately, Go ships with a tool to help us find them: the go build -race command. Run it in the project directory to produce an executable, then run that executable and look at the detection report it prints.

go build -race
   

The extra -race flag builds race detection into the generated executable. Now run it, again from the terminal.

./hello
   

The executable produced from my example is named hello, so that is what I run. Here is the detection output printed to the terminal.

➜  hello ./hello       
==================
WARNING: DATA RACE
Read at 0x0000011a5118 by goroutine 7:
  main.incCount()
      /Users/xxx/code/go/src/flysnow.org/hello/main.go:25 +0x76

Previous write at 0x0000011a5118 by goroutine 6:
  main.incCount()
      /Users/xxx/code/go/src/flysnow.org/hello/main.go:28 +0x9a

Goroutine 7 (running) created at:
  main.main()
      /Users/xxx/code/go/src/flysnow.org/hello/main.go:17 +0x77

Goroutine 6 (finished) created at:
  main.main()
      /Users/xxx/code/go/src/flysnow.org/hello/main.go:16 +0x5f
==================
4
Found 1 data race(s)

   

The detector found a data race and even points to the offending lines: goroutine 7 reads the shared resource at line 25 (value := count), goroutine 6 writes it at line 28 (count = value), and both goroutines were started from main at lines 16 and 17 with the go keyword.
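
As a side note (not covered in the original example), the -race flag is also accepted by go run and go test, so you can run the detector without building a separate binary first:

go run -race main.go
go test -race ./...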

Now that we know that resource competition happens because two or more goroutines read and write the same resource at the same time, the fix is to ensure that only one goroutine can read or write it at a time. Let's start with the traditional solution: locking the resource.

Go provides functions in the atomic and sync packages for synchronizing access to shared resources. Let's look at the atomic package first.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

var (
	count int32
	wg    sync.WaitGroup
)

func main() {
	wg.Add(2)
	go incCount()
	go incCount()
	wg.Wait()
	fmt.Println(count)
}

func incCount() {
	defer wg.Done()
	for i := 0; i < 2; i++ {
		value := atomic.LoadInt32(&count)
		runtime.Gosched()
		value++
		atomic.StoreInt32(&count, value)
	}
}

   

Note the two functions atomic.LoadInt32 and atomic.StoreInt32: one reads an int32 variable and the other writes it, and each call is an atomic operation that Go synchronizes for us at a low level. Checking the program again with go build -race now reports no data race. Be aware, though, that the load and the store are still two separate steps, so an update can in principle still be lost between them; for a counter like this, the atomic.AddInt32 function introduced next is the better fit, because it performs the read, the addition and the write as one indivisible operation.

The atomic package offers many more functions for synchronizing concurrent reads and modifications of shared values. For example, atomic.AddInt32 adds a given amount to an int32 variable directly, and the entire addition is atomic. A sketch of the example rewritten with it is shown below; try the other functions yourself.
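
Here is a minimal sketch of that variant (my own rework of the example above, not code from the original article): the whole read-add-write becomes a single call to atomic.AddInt32, so no update can be lost.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var (
	count int32
	wg    sync.WaitGroup
)

func main() {
	wg.Add(2)
	go incCount()
	go incCount()
	wg.Wait()
	fmt.Println(count) // always prints 4
}

// incCount is a variation of the article's example.
func incCount() {
	defer wg.Done()
	for i := 0; i < 2; i++ {
		// The read, the addition and the write happen as one atomic step.
		atomic.AddInt32(&count, 1)
	}
}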

 

Atomic functions solve resource competition, but they are relatively low-level and support only a limited set of data types. For more flexibility, the sync package provides a mutual exclusion lock (mutex) that lets us decide exactly which stretch of code only one goroutine may execute at a time. The code protected by a sync mutex is called the critical section, and only one goroutine can be inside the critical section at any moment. We can rework the earlier example like this.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

var (
	count int32
	wg    sync.WaitGroup
	mutex sync.Mutex
)

func main() {
	wg.Add(2)
	go incCount()
	go incCount()
	wg.Wait()
	fmt.Println(count)
}

func incCount() {
	defer wg.Done()
	for i := 0; i < 2; i++ {
		mutex.Lock() // enter the critical section
		value := count
		runtime.Gosched()
		value++
		count = value
		mutex.Unlock() // leave the critical section
	}
}
   

The example declares a new mutex, mutex sync.Mutex. A mutex has two methods, mutex.Lock() and mutex.Unlock(); everything between the two calls is the critical section, and the code inside it can be executed by only one goroutine at a time.

We call mutex.Lock() to lock the code that touches the contested resource; once one goroutine enters this region, other goroutines cannot enter and must wait until mutex.Unlock() is called and the lock is released.
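
One related idiom, shown here as a sketch rather than as part of the original example: pairing Lock with a deferred Unlock guarantees the lock is released even if the critical section returns early or panics. Because defer only runs when the enclosing function returns, the critical section is wrapped in a small helper function (addOne, a hypothetical name) instead of sitting directly inside the loop; the package-level count, wg and mutex are assumed to be the same as above.

func incCount() {
	defer wg.Done()
	for i := 0; i < 2; i++ {
		addOne()
	}
}

// addOne is a hypothetical helper, not part of the original example.
// It holds the lock only for the duration of this function;
// the deferred Unlock runs as soon as addOne returns.
func addOne() {
	mutex.Lock()
	defer mutex.Unlock()
	count++
}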

 

This approach is more flexible, letting the author of the code decide exactly which range of code to protect as the critical section. Besides atomic functions and mutexes, Go also offers an even more convenient way to synchronize multiple goroutines: the channel (chan), which we will cover in the next article.

Origin blog.csdn.net/qq_32907195/article/details/112398676