Concurrent programming in Go: goroutines


When we want to implement concurrent programming in languages like Java or C++, we usually have to maintain a thread pool ourselves, package tasks one by one, schedule threads to execute those tasks, and manage context switching. All of this consumes a great deal of a programmer's mental energy. So could there be a mechanism where programmers only need to define tasks, and the system allocates those tasks to CPUs for concurrent execution?

The goroutine in the Go language is exactly such a mechanism. The concept of a goroutine is similar to that of a thread, but goroutines are scheduled and managed by the Go runtime, which intelligently distributes the tasks in goroutines across the available CPUs. One reason Go is called a modern programming language is that it builds scheduling and context switching into the language itself.

When programming in Go, you don't need to write your own processes, threads, or coroutines. There is only one tool in your toolbox: the goroutine. When you need to execute a task concurrently, you simply wrap the task in a function and start a goroutine to execute that function. It's that simple and direct.

Using goroutines
Using a goroutine in Go is very simple: you create a goroutine for a function by adding the go keyword in front of the function call.

A goroutine must correspond to a function, and multiple goroutines can be created to execute the same function.
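For instance, the same function can be launched from several goroutines at once. Below is a minimal sketch of this idea; the worker and runWorkers names are made up for illustration:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// worker is a hypothetical task function; several goroutines can run it concurrently.
func worker(id int, counter *int64, wg *sync.WaitGroup) {
	defer wg.Done()
	atomic.AddInt64(counter, 1) // record that this goroutine ran
	fmt.Printf("worker %d finished\n", id)
}

// runWorkers starts n goroutines that all execute the same worker function.
func runWorkers(n int) int64 {
	var wg sync.WaitGroup
	var counter int64
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go worker(i, &counter, &wg)
	}
	wg.Wait() // block until every worker has called Done
	return counter
}

func main() {
	fmt.Println("workers run:", runWorkers(3)) // prints "workers run: 3"
}
```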

Starting a goroutine is very simple: just add the go keyword in front of the called function (either a named function or an anonymous function).

package main

import (
	"fmt"
	"time"
)

func testGoroutine() {
	for i := 0; i <= 9; i++ {
		fmt.Println("this is a goroutine")
	}
}

func main() {
	go testGoroutine()
	for i := 0; i <= 9; i++ {
		fmt.Println("this is the main function")
	}
	// When the program starts, the Go runtime creates a default goroutine for the main() function.
	// When main() returns, that goroutine ends, and every goroutine started from main() ends with it.
	// The main goroutine is like the Night King in Game of Thrones: the other goroutines are his wights,
	// and once the Night King dies, all the wights he raised die with him.
	time.Sleep(time.Second)
}
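The go keyword works the same way with an anonymous function. A small sketch follows; startAnonymous is a made-up name for illustration, and the WaitGroup is used instead of Sleep so the program waits deterministically:

```go
package main

import (
	"fmt"
	"sync"
)

// startAnonymous launches a goroutine from an anonymous function and waits for it.
func startAnonymous(msg string) string {
	var wg sync.WaitGroup
	result := ""
	wg.Add(1)
	go func(s string) { // the go keyword applied to an anonymous function
		defer wg.Done()
		result = "hello, " + s
	}(msg)
	wg.Wait() // block until the goroutine has called Done
	return result
}

func main() {
	fmt.Println(startAnonymous("goroutine")) // prints "hello, goroutine"
}
```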

Starting multiple goroutines and waiting for them with sync.WaitGroup

package main

import (
	"fmt"
	"sync"
)

var wy sync.WaitGroup

func testGoroutine() {
	// decrement the counter by 1 when this goroutine finishes
	defer wy.Done()
	for i := 0; i <= 9; i++ {
		fmt.Println("this is the testGoroutine function")
	}
}

func goroutine() {
	// decrement the counter by 1 when this goroutine finishes
	defer wy.Done()
	for i := 0; i <= 9; i++ {
		fmt.Println("this is the goroutine function")
	}
}

func main() {
	// two goroutines will be started
	wy.Add(2)
	go testGoroutine()
	for i := 0; i <= 9; i++ {
		fmt.Println("this is the main function")
	}
	go goroutine()
	// Wait blocks until the counter wy reaches 0
	wy.Wait()
}
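A common variant of the pattern above is starting goroutines inside a loop. The loop variable should be passed to the goroutine as an argument so that each goroutine gets its own copy (in Go versions before 1.22, capturing the loop variable directly was a classic bug). The collect helper below is a made-up example of this pattern:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// collect starts n goroutines, passing the loop variable as an argument
// so each goroutine sees its own copy of i.
func collect(n int) []int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) { // id is this goroutine's private copy of i
			defer wg.Done()
			mu.Lock() // the slice is shared, so guard the append
			out = append(out, id)
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	sort.Ints(out) // completion order is nondeterministic, so sort for display
	return out
}

func main() {
	fmt.Println(collect(5)) // prints "[0 1 2 3 4]"
}
```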

Goroutines and threads

Growable stacks
OS threads (operating system threads) generally have a fixed-size stack (usually 2MB). A goroutine, by contrast, starts its life with only a small stack (typically 2KB), and that stack is not fixed: it can grow and shrink as needed, up to a limit of about 1GB, although a stack that large is rarely needed. This is why it is feasible to create on the order of 100,000 goroutines at once in Go.
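Because goroutine stacks start so small, spawning a very large number of them is practical, which would be prohibitively expensive with OS threads. A minimal sketch (spawn is a made-up name, and the count of 100,000 is just the figure mentioned above):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn starts n goroutines at once; since each begins with a ~2KB stack,
// even 100,000 goroutines are cheap compared to 100,000 OS threads.
func spawn(n int) int64 {
	var wg sync.WaitGroup
	var done int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&done, 1) // count each goroutine that ran
		}()
	}
	wg.Wait()
	return done
}

func main() {
	fmt.Println(spawn(100000)) // prints "100000"
}
```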

Goroutine scheduling
GPM is the scheduling system implemented in the runtime layer of the Go language. It is Go's own scheduler and is distinct from the operating system's scheduling of OS threads.

G is easy to understand: it is a goroutine. Besides the goroutine's own information, it stores data such as its binding to the P it belongs to.
P (processor) manages a group of goroutines and stores the context of the currently running goroutine (function pointer, stack address, and stack boundary). P performs some scheduling on the goroutine queue it manages (for example, suspending a goroutine that has occupied the CPU for a long time and running subsequent goroutines). When its own queue is exhausted, it fetches goroutines from the global queue; when the global queue is also exhausted, it steals tasks from other Ps.
M (machine) is the runtime's virtualization of an operating system kernel thread. An M generally has a one-to-one mapping to a kernel thread, and a goroutine ultimately runs on an M.

P and M generally correspond one-to-one. Their relationship is that P mounts a group of Gs onto an M to run. When a G blocks on an M for a long time, the runtime creates a new M, and the P that owns the blocked G mounts its other Gs onto the new M; when the old G stops blocking or is considered dead, the old M is recycled.

The number of Ps is set by runtime.GOMAXPROCS (at most 256); since Go 1.5 it defaults to the number of logical CPU cores. When the amount of concurrency is large, some additional Ms will be created, but not too many: if switching becomes too frequent, the cost outweighs the gain.

From the perspective of thread scheduling alone, Go's advantage over other languages is that OS threads are scheduled by the OS kernel, while goroutines are scheduled by the Go runtime's own scheduler. This scheduler uses a technique called m:n scheduling (multiplexing/scheduling m goroutines onto n OS threads). One of its major features is that goroutine scheduling happens in user space and does not involve frequent switching between kernel mode and user mode. Memory allocation and release work similarly: the runtime maintains a large memory pool in user space and does not call the system's malloc function directly (unless the pool needs to change), which is much cheaper than scheduling OS threads. On top of that, the scheduler makes full use of multi-core hardware by dividing goroutines roughly evenly among the OS threads. Together with the extreme lightness of goroutines themselves, all of this ensures the performance of Go's scheduling.

GOMAXPROCS
The Go runtime scheduler uses the GOMAXPROCS parameter to determine how many OS threads should execute Go code simultaneously. The default value is the number of CPU cores on the machine. For example, on an 8-core machine the scheduler will schedule Go code onto 8 OS threads at the same time (GOMAXPROCS is the n in m:n scheduling).

In Go, you can set the number of logical CPU cores the current program uses with the runtime.GOMAXPROCS() function.

Before Go 1.5, a single core was used by default; since Go 1.5, all logical CPU cores are used by default.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

var wy sync.WaitGroup

func testGoroutine() {
	// decrement the counter by 1 when this goroutine finishes
	defer wy.Done()
	for i := 0; i <= 9; i++ {
		fmt.Println("this is the testGoroutine function")
	}
}

func goroutine() {
	// decrement the counter by 1 when this goroutine finishes
	defer wy.Done()
	for i := 0; i <= 9; i++ {
		fmt.Println("this is the goroutine function")
	}
}

func main() {
	wy.Add(2)
	runtime.GOMAXPROCS(2)
	go testGoroutine()
	go goroutine()
	wy.Wait()
}

Origin blog.csdn.net/weixin_44865158/article/details/114998623