Go Concurrency in Depth

We previously introduced Go as a concurrent language here: https://www.cnblogs.com/pdev/p/10936485.html. That post only scratched the surface, though, so let's look at concurrency in more depth.

Recall that in https://www.cnblogs.com/pdev/p/11095475.html we wrote a simple crawler. It uses Go's two approaches to concurrency:

 


1. Goroutines and channels (ConcurrentChannel): Go's language-level approach to concurrency, which greatly simplifies concurrent programming

1.1 Goroutines

A goroutine can be seen as a lightweight thread. Creating one is very simple: just put the go keyword in front of a function call. To illustrate how easy it is, we write a finder function and call it twice with go, so that each call prints out whatever "ore" it finds.

package main
import (
    "fmt"
    "time"
    "math/rand"
)

// finder waits coreid seconds, then reports one randomly chosen item from the mine.
func finder(mines [5]string, coreid int) {
    <-time.After(time.Second * time.Duration(coreid)) // stagger the two finders
    rand.Seed(time.Now().UnixNano())
    idx := rand.Intn(5)
    fmt.Println(time.Now(), coreid, mines[idx])
}

func main() {
    theMine := [5]string{"rock", "ore", "gold", "copper", "silver"}
    go finder(theMine, 1)
    go finder(theMine, 2)
    <-time.After(time.Second * 3) //you can ignore this for now
    fmt.Println(time.Now(), "END")
}

The program's output is as follows:
F:\My Drive\19summer\6824>go run gor.go
2019-08-01 17:45:41.0985917 -0500 CDT m=+1.001057201 1 ore
2019-08-01 17:45:42.0986489 -0500 CDT m=+2.001114401 2 ore
2019-08-01 17:45:43.0987061 -0500 CDT m=+3.001171601 END

As can be seen from the timestamps, the two finder calls run concurrently.

But these two goroutines are completely independent of each other. What if they need to exchange information? That is where Go channels come in.

 

1.2 Go Channel

Channels allow goroutines to communicate with each other. You can think of a channel as a pipe: a goroutine can send a message into it, and another goroutine can receive messages from it.

myFirstChannel := make(chan string)

A goroutine can send a message to a channel or receive a message from one. Both operations use the arrow operator <-, which indicates the direction of data flow relative to the channel.

myFirstChannel <-"hello" // Send

myVariable := <- myFirstChannel // Receive

 Let's look at a program:

package main
import (
    "fmt"
    "time"
)

func main() {
    theMine := [5]string{"ore1", "ore2", "ore3", "ore4", "ore5"}
    oreChan := make(chan string)

    // Finder
    go func(mine [5]string) {
        for _, item := range mine {
            oreChan <- item //send
            fmt.Println("Miner: Send " + item + " to breaker")
        }
    }(theMine)

    // Ore Breaker
    go func() {
        for i := 0; i < 5; i++ {
            foundOre := <-oreChan //receive
            <-time.After(time.Nanosecond * 10)
            fmt.Println("Miner: Receive " + foundOre + " from finder")
        }
    }()

    <-time.After(time.Second * 5) // Again, ignore this for now
}

The program's output is as follows:

F:\My Drive\19summer\6824>go run gor2.go
Miner: Send ore1 to breaker
Miner: Receive ore1 from finder
Miner: Send ore2 to breaker
Miner: Receive ore2 from finder
Miner: Send ore3 to breaker
Miner: Receive ore3 from finder
Miner: Send ore4 to breaker
Miner: Receive ore4 from finder
Miner: Send ore5 to breaker
Miner: Receive ore5 from finder

As we can see, goroutines really can communicate with each other through a channel!

The <-time.After(time.Nanosecond * 10) between the receive and the fmt.Println is there only to make the command-line output easier to read; without it the CPU runs the program so fast that the printed order may differ from the actual execution order.

 

1.3 Blocking on Go Channels

By default, sending to and receiving from a channel are blocking operations (such a channel is called unbuffered). That is, on an unbuffered channel a send or a receive suspends the current goroutine until the other end is ready. Channels can block goroutines in several situations, and this lets goroutines briefly synchronize with each other before each merrily runs on.

Blocking on a send: once a goroutine (gopher) sends data into a channel, it is blocked until another goroutine takes the data out of the channel.

Blocking on a receive: similarly to the sending case, when the channel is empty a goroutine blocks while waiting to receive data from it.

Meeting the concept of blocking for the first time can be confusing, but you can think of it as a transaction between two goroutines (gophers). Whether a gopher is waiting to receive money or to hand money over, it has to wait for the other party before the transaction can take place.
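To make the blocking behavior concrete, here is a minimal sketch (my own, not from the original post) in which an unbuffered channel forces the sending goroutine to wait until main is ready to receive; the names and timings are only illustrative:

package main

import (
    "fmt"
    "time"
)

func main() {
    handoff := make(chan string) // unbuffered: a send blocks until someone receives

    go func() {
        fmt.Println("sender: trying to hand over the ore...")
        handoff <- "gold" // blocks here until main receives
        fmt.Println("sender: handoff complete")
    }()

    time.Sleep(time.Second) // pretend main is busy for a while
    fmt.Println("main: ready to receive")
    fmt.Println("main: got", <-handoff) // this receive unblocks the sender

    time.Sleep(100 * time.Millisecond) // give the sender a moment to print its last line
}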

 

Now that you understand the different blocking situations that can occur when goroutines communicate through channels, let's discuss two different kinds of channels: unbuffered and buffered. Which kind you choose can change the runtime behavior of the program.

Unbuffered channels: in the previous examples we have only used unbuffered channels, whose distinguishing feature is that only one piece of data can be handed over at a time. No matter when we check, an unbuffered channel reports a length of 0 (len(channel)).

Buffered channels: in a concurrent program, timing coordination is never perfect. In our mining example we might run into this situation: in the time the ore-breaking gopher spends processing one piece of ore, the ore-finding gopher may have found three more. To keep the finder from wasting a lot of time waiting to hand ore over to the breaker, we can use a buffered channel. Let's create a buffered channel with a capacity of 3.

bufferedChan := make(chan string, 3)

Buffered and unbuffered channels work similarly, with one difference: before another goroutine needs to take any data out, we can send 3 values into the buffered channel, and no blocking occurs until the buffer is full; only when we try to send a 4th value does the send block. In other words, a buffered channel blocks the sender only when it is at full capacity.

An unbuffered channel can be thought of as make(chan string, 0).

For example, the following program:

package main
import (
    "fmt"
    "time"
)

func main() {
    bufferedChan := make(chan string, 3)

    go func() {
        bufferedChan <-"first"
        fmt.Println("Sent 1st")
        bufferedChan <-"second"
        fmt.Println("Sent 2nd")
        bufferedChan <-"third"
        fmt.Println("Sent 3rd")
    }()

    <-time.After(time.Second * 1)

    go func() {
        firstRead := <- bufferedChan
        fmt.Println("Receiving..")
        fmt.Println(firstRead)
        secondRead := <- bufferedChan
        fmt.Println(secondRead)
        thirdRead := <- bufferedChan
        fmt.Println(thirdRead)
    }()

    <-time.After(time.Second * 5) // Again, ignore this for now
}

The output is as follows:

F:\My Drive\19summer\6824>go run gor2.go
Sent 1st
Sent 2nd
Sent 3rd
Receiving..
first
second
third

Compared to the original example, this is a big improvement! Now each function runs independently in its own goroutine, and every time a piece of ore is processed it moves on to the next stage of the mining pipeline.

 

In fact, a channel is a FIFO buffer; we can view a buffered channel as a thread-safe queue:

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3

    fmt.Println(<-ch) // 1
    fmt.Println(<-ch) // 2
    fmt.Println(<-ch) // 3
}
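As a further sketch (my addition; the values are hypothetical), the built-in len and cap report how full a channel currently is: an unbuffered channel always reports a length of 0, while a buffered channel's length grows up to its capacity.

package main

import "fmt"

func main() {
    unbuf := make(chan string)  // unbuffered channel
    buf := make(chan string, 3) // buffered channel with capacity 3

    fmt.Println(len(unbuf), cap(unbuf)) // 0 0

    buf <- "ore1"
    buf <- "ore2"
    fmt.Println(len(buf), cap(buf)) // 2 3
}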

 

 1.4 Other concepts

Anonymous Goroutines

We can use the following pattern to create an anonymous function and run it in its own goroutine. If a function only needs to be called in one place, this lets us run it in its own goroutine without writing a formal function declaration.

go func() {
    fmt.Println("I'm running in my own go routine")
}()

This looks much like an ordinary anonymous function definition.

 

The main function is itself a goroutine

The main function actually runs in its own goroutine! More importantly, once main returns, all other running goroutines are shut down. That is why we put a timer at the end of main: time.After creates a channel and sends a value on it after 5 seconds. By adding the line below, the main goroutine blocks for 5 seconds so the other goroutines have time to run. Otherwise main would exit too early, and the finder would never get a chance to execute:

<-time.After(time.Second * 5) // Receiving from channel after 5 sec

But waiting for a fixed amount of time is not a good approach. It would be nicer if there were something like Python's thread.join() to block the main goroutine until all child goroutines have finished.

There is a way to block the main function until the other goroutines have run to completion. The usual practice is to create a done channel that main blocks on while waiting to read from it. Once the work is finished, the worker sends on the channel and the program ends.

func main() {
    doneChan := make(chan string)

    go func() {
        // Do some work…
        doneChan <- "I'm all done!"
    }()

    <-doneChan // block until go routine signals work is done
}
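When there are several worker goroutines, the standard library's sync.WaitGroup (not used in the original examples) plays a role similar to Python's thread.join; a minimal sketch:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1) // register one more goroutine to wait for
        go func(id int) {
            defer wg.Done() // mark this goroutine as finished when it returns
            fmt.Println("worker", id, "is done")
        }(i)
    }

    wg.Wait() // block main until every Add has a matching Done
}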

 

You can range over a channel

In the earlier example we had the miner read from the channel in a for loop with a fixed number of iterations. But what if we don't know in advance how many pieces of ore the finder will send?

Just as you can range over aggregate data types (e.g. slices), you can range over a channel. Updating the earlier miner function, we can write:

// Ore Breaker
go func() {
    for foundOre := range oreChan {
        fmt.Println("Miner: Received " + foundOre + " from finder")
    }
}()

Since the miner needs to read everything the finder sends to it, ranging over the channel guarantees that we receive all the data that has been sent.

 

Note that ranging over a channel blocks until new data is sent to it. The following program deadlocks:

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3

    for v := range ch {
        fmt.Println(v)
    }
}

The reason is that range keeps reading until the channel is closed. Once the buffered channel is drained, range blocks the current goroutine, and since nothing will ever send again, we get a deadlock. The way to stop a goroutine from blocking after all the data has been sent is to close the channel with close(channel), as in the following program:

ch := make(chan int, 3)
ch <- 1
ch <- 2
ch <- 3
close(ch) // explicitly close the channel

for v := range ch {
    fmt.Println(v)
}




Closing a channel disables further sends, making it effectively read-only: we can still receive the remaining data from a closed channel, but we can no longer write to it.
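Putting range and close together, here is a sketch (mine, reusing the post's finder/breaker setup) in which the finder closes the channel when it has no more ore, so the breaker's range loop ends cleanly instead of deadlocking:

package main

import "fmt"

func main() {
    oreChan := make(chan string)
    done := make(chan struct{})

    // Finder
    go func() {
        for _, ore := range []string{"ore1", "ore2", "ore3"} {
            oreChan <- ore
        }
        close(oreChan) // no more ore: lets the breaker's range terminate
    }()

    // Ore Breaker
    go func() {
        for foundOre := range oreChan { // exits once oreChan is closed and drained
            fmt.Println("Miner: Received " + foundOre + " from finder")
        }
        done <- struct{}{}
    }()

    <-done // wait until the breaker has processed everything
}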

 

Non-blocking channel reads and writes (no need to worry about blocking on an empty or full channel)

There is a technique for non-blocking reads from a channel: Go's select statement with a default case. With such a statement, if the channel has data the goroutine reads it; otherwise the default branch executes.

myChan := make(chan string)

go func(){
    myChan <- "Message!"
}()

select {
case msg := <-myChan:
    fmt.Println(msg)
default:
    fmt.Println("No Msg")
}

<-time.After(time.Second * 1)

select {
case msg := <-myChan:
    fmt.Println(msg)
default:
    fmt.Println("No Msg")
}

The program's output is as follows:

No Msg
Message!

A non-blocking write uses the same select/default pattern; the only difference is that the case is a send instead of a receive.

select {
    case myChan <- "message":
        fmt.Println("sent the message")
    default:
        fmt.Println("no message sent")
}
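A related pattern (my addition, not from the original post) is to combine select with time.After, so a read waits only for a limited time instead of either blocking forever or giving up immediately:

package main

import (
    "fmt"
    "time"
)

func main() {
    myChan := make(chan string)

    go func() {
        time.Sleep(2 * time.Second) // the message arrives too late
        myChan <- "late message"
    }()

    select {
    case msg := <-myChan:
        fmt.Println(msg)
    case <-time.After(time.Second): // fires if nothing arrives within 1 second
        fmt.Println("timed out waiting for a message")
    }
}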

 

1.5 Parallelism and Concurrency

When GOMAXPROCS is 1 (the default assumed in this post), all goroutines run inside a single OS thread (one CPU core). That is, two goroutines are not parallel, only concurrent. Within the same native thread, if the current goroutine never blocks, it will not give up CPU time to other goroutines on that thread. Goroutine scheduling is handled by the Go runtime, and we can also use the runtime package to schedule manually.

The earlier programs looked "parallel" only because of the sleep calls: sleeping blocks the current goroutine, so it voluntarily hands execution to other goroutines, which creates parallel-looking behavior that is really just concurrency. By contrast, the following program runs its two goroutines one after the other and always prints the same result:

package main

import "fmt"

var quit chan int = make(chan int)

func loop() {
    for i := 0; i < 10; i++ {
        fmt.Printf("%d ", i)
    }
    quit <- 0
}

func main() {
    go loop()
    go loop()

    for i := 0; i < 2; i++ {
        <- quit
    }
}

F:\My Drive\19summer\6824>go run gor2.go
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9

 

There is a very interesting example: https://segmentfault.com/q/1010000000207474

 

To achieve true multi-core parallelism, we need the runtime package (Go's interface to the goroutine scheduler) to explicitly allow the use of two cores. There are two approaches:

1. Specify how many cores to use

package main

import (
    "fmt"
    "runtime"
)

var quit chan int = make(chan int)

func loop(coreid int) {
    for i := 0; i < 1000; i++ { // run many iterations so the interleaving is easier to observe
        fmt.Printf("%d-%d ", coreid, i)
    }
    quit <- 0
}

func main() {
    runtime.GOMAXPROCS(2) // use at most 2 cores

    go loop(0)
    go loop(1)

    for i := 0; i < 2; i++ {
        <-quit
    }
}

Now the output of the two goroutines alternates irregularly: true parallelism.

2. Explicitly yield CPU time (this way of voluntarily giving up the CPU still runs on a single core; manually switching goroutines merely makes the output look "parallel")

package main

import (
    "fmt"
    "runtime"
)

var quit chan int = make(chan int)

func loop(coreid int) {
    for i := 0; i < 10; i++ {
        runtime.Gosched() // explicitly yield the CPU to other goroutines
        fmt.Printf("%d-%d ", coreid, i)
    }
    quit <- 0
}

func main() {
    go loop(0)
    go loop(1)

    for i := 0; i < 2; i++ {
        <-quit
    }
}

The output alternates quite regularly:
F:\My Drive\19summer\6824>go run gor2.go
1-0 0-0 0-1 1-1 1-2 0-2 1-3 0-3 1-4 0-4 1-5 0-5 1-6 0-6 1-7 0-7 1-8 0-8 1-9 0-9


Several useful functions in the runtime package (a small sketch follows this list):

  • Gosched: yield the CPU to other goroutines
  • NumCPU: return the number of CPU cores on the current system
  • GOMAXPROCS: set the maximum number of CPU cores that may be used simultaneously
  • Goexit: exit the current goroutine (deferred statements still execute)
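A small sketch (my own) exercising the functions listed above; the exact numbers printed depend on your machine:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println("CPU cores on this machine:", runtime.NumCPU())

    prev := runtime.GOMAXPROCS(2) // allow at most 2 cores; returns the previous setting
    fmt.Println("previous GOMAXPROCS:", prev)

    done := make(chan struct{})
    go func() {
        defer close(done)                         // deferred calls still run after Goexit
        defer fmt.Println("deferred: still runs") // this prints even though Goexit is called below
        runtime.Goexit()                          // terminate only this goroutine
        fmt.Println("never reached")
    }()

    <-done
    runtime.Gosched() // explicitly yield the CPU (nothing left to run here; just for illustration)
    fmt.Println("main continues")
}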

We know that " the process is the smallest unit of resource allocation, the thread is the smallest unit CPU scheduling ." So go routine and thread what does it matter? You can go see the official document paragraph (https://golang.org/doc/faq#goroutines):

Why goroutines instead of threads?

Goroutines are part of making concurrency easy to use. The idea, which has been around for a while, is to multiplex independently executing functions—coroutines(协程)—onto a set of threads. When a coroutine blocks, such as by calling a blocking system call, the run-time automatically moves other coroutines on the same operating system thread to a different, runnable thread so they won't be blocked. The programmer sees none of this, which is the point. The result, which we call goroutines, can be very cheap: they have little overhead beyond the memory for the stack, which is just a few kilobytes.

To make the stacks small, Go's run-time uses resizable, bounded stacks. A newly minted goroutine is given a few kilobytes, which is almost always enough. When it isn't, the run-time grows (and shrinks) the memory for storing the stack automatically, allowing many goroutines to live in a modest amount of memory. The CPU overhead averages about three cheap instructions per function call. It is practical to create hundreds of thousands of goroutines in the same address space. If goroutines were just threads, system resources would run out at a much smaller number.

 

A coroutine can be understood as "hyper-threading" within a single thread: by switching contexts, one thread executes two pieces of work concurrently. (https://www.liaoxuefeng.com/wiki/897692888725344/923057403198272)

Processes and threads are scheduled by the kernel, which uses CPU time slices and performs preemptive scheduling (with a variety of scheduling algorithms). Coroutines (user-level threads) are transparent to the kernel: the operating system does not even know they exist, and they are scheduled entirely by the user program itself. Because the program is under the user's control, it is hard to enforce the kind of preemptive scheduling that forcibly switches the CPU to other processes or threads, so coroutines usually rely on cooperative scheduling: a coroutine must voluntarily yield control before other coroutines can run.
Essentially, goroutines are coroutines. The difference is that Golang wraps goroutine scheduling inside its runtime, which handles system calls and similar events: when a goroutine performs a long-running operation or a system call, the runtime moves the current goroutine off its CPU (P) so that other goroutines can be scheduled and executed. In other words, Golang supports coroutines at the language level, and this native support is one of its major features: putting the go keyword in front of a function or method call creates a coroutine.

https://www.cnblogs.com/liang1101/p/7285955.html

 

Suppose we start three goroutines but allocate only two cores (two threads). What happens then? Let's write a little program to test it:

package main

import (
    "fmt"
    "runtime"
)

var quit chan int = make(chan int)

func loop(id int) { // id: the goroutine's label
    for i := 0; i < 100; i++ { // each goroutine prints its own label 100 times
        fmt.Printf("%d ", id)
    }
    quit <- 0
}

func main() {
    runtime.GOMAXPROCS(2) // use at most 2 cores simultaneously

    for i := 0; i < 3; i++ { // start three goroutines
        go loop(i)
    }

    for i := 0; i < 3; i++ {
        <-quit
    }
}


The output varies from run to run; here are two sample runs:
F:\My Drive\19summer\6824>go run gor2.go
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
F:\My Drive\19summer\6824>go run gor2.go
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
F:\My Drive\19summer\6824>
  • Sometimes the output is preempted and interleaved (meaning Go has started more than one native thread and achieved true parallelism)
  • Sometimes the output is strictly sequential, e.g. all of one digit, then all of another, then the third (meaning those goroutines share one native thread, and a goroutine that never blocks does not release the CPU)

Either way, we observe one phenomenon: whether the output is interleaved or sequential, there are always at least two digits for which one digit's entire output appears before the other digit's entire output.

The reason is that three goroutines are distributed over at most two threads, so at least two goroutines must share the same thread; within a single thread, a goroutine that never blocks does not release the CPU, so those two goroutines print sequentially.

 

Ref:

Some applications of Go concurrency: https://blog.csdn.net/kjfcpua/article/details/18265475

https://stackoverflow.com/questions/13107958/what-exactly-does-runtime-gosched-do

https://studygolang.com/articles/13875

https://blog.csdn.net/kjfcpua/article/details/18265441

https://blog.csdn.net/kjfcpua/article/details/18265461

 


 

2. Concurrency based on shared variables (ConcurrentMutex), which can be understood as the conventional approach of handling concurrency manually with locks/unlocks and semaphores (see the sketch below)
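A minimal sketch of this shared-variable style (my own, not taken from the post): a counter protected by sync.Mutex instead of a channel.

package main

import (
    "fmt"
    "sync"
)

func main() {
    var (
        mu      sync.Mutex
        counter int
        wg      sync.WaitGroup
    )

    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                mu.Lock() // only one goroutine may touch counter at a time
                counter++
                mu.Unlock()
            }
        }()
    }

    wg.Wait()
    fmt.Println("counter =", counter) // always 5000 thanks to the mutex
}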

 

 

 
