Go Concurrency

Original link: http://maoqide.live/post/golang/go-concurrency/
A major advantage of Go over other languages is how convenient and efficient it makes writing concurrent code. This article describes Go's concurrency model and the constructs Go provides for concurrent programming.

Concurrency model

  • Thread lock
  • The Actor
    The Actor model is a model of concurrent computation. An "actor" is the basic unit of concurrency: when an actor receives a message, it can make local decisions, create more actors, send more messages, and decide how to respond to the next message it receives.
  • CSP (Communicating Sequential Processes)
    CSP is similar to the Actor model; the difference is that CSP is not concerned with the entity that sends a message, only with the channel the message is sent over.

    Go's concurrency model is based on CSP.

    Recommended reading: "Seven Concurrency Models in Seven Weeks".

goroutine scheduling

MPG

  • G: goroutine; holds the goroutine's state and the stack space it needs to execute.
  • P: processor; holds a queue of runnable goroutines. A G must be scheduled through a P, so the number of Ps bounds the number of goroutines executing in parallel.
  • M: machine, an OS thread; through a P, a goroutine G is scheduled onto an M for execution.

The relationship between the three is shown in a diagram in the original post (not reproduced here).

Scheduling

  • System calls
    If a G blocks in an operating-system call, not only does the G block: the M running it is also unbound from its P (in essence, sysmon takes the P away), and the M goes to sleep along with the G. If there is an idle M at that moment, it is bound to the P to continue running other Gs; if there is no idle M but there are still other Gs to run, a new M is created.
  • channel / IO
    If a G blocks on a channel operation or network I/O, the G is put into a wait queue and the M tries to run the next runnable G; if no G is runnable at that moment, the M unbinds from its P and goes to sleep. When the channel or I/O operation completes, the waiting G is woken up, marked runnable, placed into a P's run queue, and waits to be bound to an M again.
  • Preemptive scheduling
    sysmon issues preemption requests to Gs that have been running for a long time (over 10ms). Once a G's preempt flag is set to true, at the G's next function call the runtime preempts the G out of the running state and puts it into the P's local run queue, where it waits to be scheduled again.

sysmon

When a Go program starts, the runtime starts an M to run sysmon (usually called the monitoring thread). This M runs without needing to bind to a P, wakes up every 20us to 10ms, and is responsible for the following tasks:

  • release the physical memory of spans that have been idle for more than 5 minutes;
  • force a garbage collection if none has run for more than 2 minutes;
  • add long-unprocessed netpoll results to the task queue;
  • issue preemption requests to long-running Gs;
  • retake Ps whose Ms have been blocked in syscalls for a long time;
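Preemption can be observed directly. In this sketch (which assumes Go 1.14 or later, where preemption is asynchronous and works even inside loops with no function calls), the program is pinned to a single P while one goroutine spins; without preemption, main would be starved and the sleep would never return:

```go
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

// demoPreemption pins the program to one P and starts a goroutine that
// spins without blocking. sysmon's preemption requests let main run anyway,
// so the sleep below returns instead of being starved forever.
func demoPreemption() bool {
	runtime.GOMAXPROCS(1)
	var stop int32
	go func() {
		for atomic.LoadInt32(&stop) == 0 {
			// busy spin; relies on preemption to give main CPU time
		}
	}()
	time.Sleep(50 * time.Millisecond)
	atomic.StoreInt32(&stop, 1)
	return true
}

func main() {
	fmt.Println("preempted:", demoPreemption())
}
```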

usage

goroutine

A goroutine has a simple model: it is a function executing concurrently with other goroutines in the same address space. Goroutines are lightweight: the initial stack allocation is small, and stacks grow (and shrink) by allocating and freeing heap memory as needed.
Starting a goroutine:

go func(){
    // do something
}()

channel

The example above is too simple for real projects, because the goroutine has no way to signal other code when it finishes, or to report its result. For that we need a channel.

Initializing a channel (channels come in buffered and unbuffered forms):

ci := make(chan int)            // unbuffered channel of integers
cj := make(chan int, 0)         // unbuffered channel of integers
cs := make(chan *os.File, 100)  // buffered channel of pointers to Files

Typical channel usage:

c := make(chan int)  // Allocate a channel.
// Start the sort in a goroutine; when it completes, signal on the channel.
go func() {
    list.Sort()
    c <- 1  // Send a signal; value does not matter.
}()
doSomethingForAWhile()
<-c   // Wait for sort to finish; discard sent value.

In c <- 1 the channel is on the left of the <- arrow, meaning data is sent into the channel. In <-c the channel is on the right of <-, meaning data is received from it; a variable of the matching type on the left-hand side can receive the value, or, with no left-hand side, the value is discarded.
The receiving end of a channel blocks until data has been sent.
On an unbuffered channel, a send blocks until the receiver has received the value. On a buffered channel, the sender blocks only while the buffer is full, in which case it waits until some receiver has taken a value.
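These blocking rules can be observed directly: sends on a buffered channel succeed without any receiver until the buffer is full. A small sketch (fillWithoutReceiver is a hypothetical helper for illustration):

```go
package main

import "fmt"

// fillWithoutReceiver sends on a buffered channel with no receiver and
// reports how many sends complete before the buffer would block.
func fillWithoutReceiver(capacity int) int {
	ch := make(chan int, capacity)
	sent := 0
	for i := 0; i < capacity; i++ {
		ch <- i // does not block: the buffer still has room
		sent++
	}
	// One more send here would block forever, since nobody is receiving.
	return sent
}

func main() {
	fmt.Println(fillWithoutReceiver(3)) // 3
}
```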

Example 1:

var sem = make(chan int, MaxOutstanding)

func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

func Serve(queue chan *Request) {
    for {
        req := <-queue
        go handle(req)  // Don't wait for handle to finish.
    }
}

Example 1 has a problem: although the channel buffer bounds how many requests are processed at once, Serve creates a goroutine for every incoming request with no limit, so in theory an unbounded number of goroutines could exist at once. We therefore modify Serve to limit the number of goroutines it starts.

Example 2:

func Serve(queue chan *Request) {
    for req := range queue {
        sem <- 1
        go func() {
            process(req) // Buggy; see explanation below.
            <-sem
        }()
    }
}

When the buffer is full, sem <- 1 blocks and no new goroutine is created. But Example 2 has a problem of its own: in Go's for loop, the loop variable is reused on each iteration, so the req variable is shared by all the goroutines, which can end up all processing the same request. This is solved by passing req as an argument to the goroutine's function. (Note: since Go 1.22, the loop variable is scoped per iteration, so this sharing no longer occurs.)

Example 3:

func Serve(queue chan *Request) {
    for req := range queue {
        sem <- 1
        go func(req *Request) {
            process(req)
            <-sem
        }(req)
    }
}

Another approach: start a fixed number of workers.

func handle(queue chan *Request) {
    for r := range queue {
        process(r)
    }
}
func Serve(clientRequests chan *Request, quit chan bool) {
    // Start handlers
    for i := 0; i < MaxOutstanding; i++ {
        go handle(clientRequests)
    }
    <-quit  // Wait to be told to exit.
}
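The worker approach can be fleshed out into a runnable sketch. The Request type, process function, and shutdown logic below are stand-ins supplied here for illustration (the original leaves them undeclared), and serve returns a WaitGroup so the caller can drain the workers by closing the queue:

```go
package main

import (
	"fmt"
	"sync"
)

const MaxOutstanding = 4

type Request struct {
	n      int
	result chan int
}

// process is a placeholder workload: it squares the input.
func process(r *Request) { r.result <- r.n * r.n }

// serve starts MaxOutstanding workers that drain the queue until it is closed.
func serve(queue chan *Request) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < MaxOutstanding; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range queue { // loop ends when queue is closed
				process(r)
			}
		}()
	}
	return &wg
}

func main() {
	queue := make(chan *Request)
	wg := serve(queue)
	r := &Request{n: 7, result: make(chan int, 1)}
	queue <- r
	fmt.Println(<-r.result) // 49
	close(queue) // signal the workers to exit
	wg.Wait()
}
```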

Channels of channels

A channel is itself a first-class Go type, so channels can also be passed over channels. This feature makes many useful patterns easy to implement.
Example 1 (a non-blocking parallel RPC framework):

type Request struct {
    args        []int
    f           func([]int) int
    resultChan  chan int
}

// Client side
func sum(a []int) (s int) {
    for _, v := range a {
        s += v
    }
    return
}
request := &Request{[]int{3, 4, 5}, sum, make(chan int)}
// Send request
clientRequests <- request
// Wait for response.
fmt.Printf("answer: %d\n", <-request.resultChan)


// Server side
func handle(queue chan *Request) {
    for req := range queue {
        req.resultChan <- req.f(req.args)
    }
}
func Serve(clientRequests chan *Request, quit chan bool) {
    // Start handlers
    for i := 0; i < MaxOutstanding; i++ {
        go handle(clientRequests)
    }
    <-quit  // Wait to be told to exit.
}
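The client and server fragments above combine into a runnable program; the clientRequests channel and a single handler goroutine are supplied here for the demo (the original spawns MaxOutstanding handlers):

```go
package main

import "fmt"

type Request struct {
	args       []int
	f          func([]int) int
	resultChan chan int
}

func sum(a []int) (s int) {
	for _, v := range a {
		s += v
	}
	return
}

// handle executes each request's function and sends the result back on the
// per-request channel carried inside the request itself.
func handle(queue chan *Request) {
	for req := range queue {
		req.resultChan <- req.f(req.args)
	}
}

func main() {
	clientRequests := make(chan *Request)
	go handle(clientRequests) // one handler is enough for the demo

	request := &Request{[]int{3, 4, 5}, sum, make(chan int)}
	clientRequests <- request
	fmt.Printf("answer: %d\n", <-request.resultChan) // answer: 12
	close(clientRequests)
}
```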

Example 2 (Unix-pipe-like streaming):

type PipeData struct {
    value int
    handler func(int) int
    next chan int
}

// Streaming handler
func handle(queue chan *PipeData) {
    for data := range queue {
        data.next <- data.handler(data.value)
    }
}
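Wiring the streaming handler into a one-stage pipeline makes it runnable; the queue setup and doubling handler below are supplied here for illustration:

```go
package main

import "fmt"

type PipeData struct {
	value   int
	handler func(int) int
	next    chan int
}

// handle applies each item's handler and forwards the result downstream
// on the channel the item itself carries.
func handle(queue chan *PipeData) {
	for data := range queue {
		data.next <- data.handler(data.value)
	}
}

func main() {
	queue := make(chan *PipeData)
	go handle(queue)

	out := make(chan int, 1)
	queue <- &PipeData{value: 3, handler: func(x int) int { return x * 2 }, next: out}
	fmt.Println(<-out) // 6
	close(queue)
}
```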

pprof

Go's built-in runtime/pprof package can profile and monitor code, and the net/http/pprof package wraps it for web services: a web project only needs to import the package and the service automatically gains handlers under the debug/pprof/ path for basic performance analysis.
If your web service uses the built-in net/http package, import _ "net/http/pprof" in the main package and you can view pprof information in the browser directly under the debug/pprof/ path.
If you use a third-party Go web framework, you need to add the corresponding routes manually; for gin, for example, they can be added as in https://github.com/gin-contrib/pprof/blob/master/pprof.go .
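A minimal sketch of the net/http case: the blank import's side effect registers the /debug/pprof/* handlers on http.DefaultServeMux, which can be verified with a test server (pprofStatus is a hypothetical helper for the demo):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* on http.DefaultServeMux
)

// pprofStatus serves http.DefaultServeMux on an ephemeral port, fetches the
// pprof index page, and returns the HTTP status code.
func pprofStatus() int {
	srv := httptest.NewServer(http.DefaultServeMux)
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/debug/pprof/")
	if err != nil {
		return 0
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(pprofStatus()) // 200
}
```

In a real service you would simply add the blank import and run http.ListenAndServe; httptest is used here only so the example is self-checking.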

With the pprof package plus the go tool pprof command, you can conveniently analyze a Go program's performance data.
First start a web project that imports net/http/pprof, listening on port 8080.
Then simulate web requests with hey: hey -z 2m -c 10 -q 2 -H 'token: xxxxxxxxxx' 'http://127.0.0.1:8080/xxx'
While the requests are running, execute go tool pprof test http://127.0.0.1:8080/v1/debug/pprof/profile (where test is the compiled binary of the web service) to collect performance data for that period. For example:

File: engine
Type: cpu
Time: Aug 3, 2019 at 3:23pm (CST)
Duration: 30s, Total samples = 580ms ( 1.93%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 460ms, 79.31% of 580ms total
Showing top 10 nodes out of 210
      flat  flat%   sum%        cum   cum%
     120ms 20.69% 20.69%      120ms 20.69%  syscall.syscall
     110ms 18.97% 39.66%      110ms 18.97%  runtime.pthread_cond_wait
      60ms 10.34% 50.00%       60ms 10.34%  runtime.pthread_cond_signal
      40ms  6.90% 56.90%       40ms  6.90%  runtime.kevent
      40ms  6.90% 63.79%       40ms  6.90%  runtime.nanotime
      20ms  3.45% 72.41%       20ms  3.45%  encoding/json.checkValid
      20ms  3.45% 75.86%       20ms  3.45%  runtime.notetsleep
      10ms  1.72% 77.59%       10ms  1.72%  encoding/json.(*decodeState).literalStore
      10ms  1.72% 79.31%       10ms  1.72%  encoding/json.stateInString
(pprof) 

Here Duration: 30s is the profiling duration, 30s by default; it can be changed with a parameter when running go tool pprof, e.g. go tool pprof test http://127.0.0.1:8080/debug/pprof/profile?seconds=60 .
In pprof's interactive command line, typing top shows the most time-consuming functions, one per row. The first two columns (flat, flat%) are the time the function itself spent running on the CPU and its percentage; the third column (sum%) is the running total of the flat percentages; the fourth and fifth columns (cum, cum%) are the time spent in the function plus its sub-functions (the cumulative value) and its percentage, which is always at least the flat value; the last column is the function name. If the application has a performance problem, this output should tell us which functions the time is being spent in.
Typing web generates an SVG call graph for the whole application, which can be opened directly in a browser; the larger a function's box in the graph, the longer that function ran.
There are other useful commands; type help to see them.
The example above fetches profile (CPU) information; other debug URLs provide other information, such as goroutine and heap, e.g. go tool pprof http://127.0.0.1:8080/debug/pprof/heap .

Besides the interactive command line, go tool pprof also provides a web UI: add the -http parameter to the command above, e.g. go tool pprof -http=":8081" engine http://127.0.0.1:8080/debug/pprof/profile, to start a web service on the specified port where you can browse various views of the application's performance, such as flame graphs.
