Go concurrent programming - Timer, Ticker, WaitGroup and other common models

1 Timer (execute once)

1.1 Concept

When you need to perform a task once after a delay, you can use time.Timer. A Timer sends a single time value on its channel C once the duration has elapsed, and that value can be used to trigger the task: create the Timer with time.NewTimer, block on <-timer.C, and run the task when the value is received.

1.2 Usage

package main

import (
	"fmt"
	"time"
)

func main() {
	timer := time.NewTimer(2 * time.Second)
	defer timer.Stop()
	<-timer.C
	fmt.Println("timer expired")
}

result:

timer expired
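
Two related one-shot patterns worth knowing are scheduling a callback with time.AfterFunc and cancelling a timer with Stop before it fires. A minimal sketch (the durations are arbitrary, not taken from the article):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Schedule a callback to run once after 2 seconds.
	t := time.AfterFunc(2*time.Second, func() {
		fmt.Println("fired via AfterFunc")
	})
	defer t.Stop()

	// A timer that is stopped before it fires never sends on its channel.
	cancelled := time.NewTimer(5 * time.Second)
	if cancelled.Stop() {
		fmt.Println("second timer stopped before firing")
	}

	// Give the AfterFunc callback time to run before main exits.
	time.Sleep(3 * time.Second)
}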

2 Ticker (execute periodically)

2.1 Concept

When you need to perform a task repeatedly, you can use time.Ticker. A Ticker sends a time value on its channel C at every interval, and each value can be used to trigger one run of the periodic task: create the Ticker with time.NewTicker and read from ticker.C in a loop, running the task each time a value arrives.

2.2 Usage

package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			fmt.Println("tick")
		}
	}
}

result:

tick
tick
tick
tick
...
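
The loop above never terminates. In real code the ticker is usually paired with a done channel (or a context) so the loop can stop cleanly; a minimal sketch, stopping after two seconds (the specific durations are illustrative):

package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	// Close done after 2 seconds to stop the loop.
	done := make(chan struct{})
	time.AfterFunc(2*time.Second, func() { close(done) })

	for {
		select {
		case <-done:
			fmt.Println("ticker stopped")
			return
		case t := <-ticker.C:
			fmt.Println("tick at", t.Format("15:04:05.000"))
		}
	}
}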

3 WaitGroup (wait for all goroutines to complete)

3.1 Concept

When you need to wait for a group of goroutines to finish, you can use sync.WaitGroup. Call the Add method to increment the counter before starting each goroutine; inside each goroutine, call Done (typically via defer) to decrement the counter when its work is finished; then call Wait, which blocks until the counter reaches zero.

3.2 Usage

Basic usage:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Printf("goroutine %d\n", i)
		}(i)
	}
	wg.Wait()
	fmt.Println("all goroutines done")
}
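
A common extension of this pattern is collecting results from the goroutines. One way to do that, shown here as a sketch rather than part of the original example, is to send results into a buffered channel and close it from a helper goroutine once Wait returns:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	results := make(chan int, 5) // buffered so no worker blocks on send

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results <- i * i
		}(i)
	}

	// Close results once every worker has called Done,
	// so the range loop below can finish.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println("result:", r)
	}
}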

4 Other common models

4.1 Implementing the producer-consumer model in Go

① Simple version

This version has a single producer and a single consumer, using a goroutine and a channel for message passing.

  • The done channel lets the main goroutine block until the producer has finished: the producer sends done <- true after producing everything, and main blocks on <-done until that signal arrives. In this example done is used only for that simple hand-off.

package main

import "fmt"

func produce(ch chan<- int, done chan<- bool) {
	for i := 1; i < 3; i++ {
		ch <- i
		fmt.Printf("producer produced %d\n", i)
	}
	done <- true
}

func consume(ch <-chan int) {
	for i := range ch {
		fmt.Printf("consumer consumed %d\n", i)
	}
}

func main() {
	ch := make(chan int)
	done := make(chan bool)

	go produce(ch, done)
	go consume(ch)

	<-done
}
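
Note that this simple version only waits for the producer: when <-done returns, the consumer may not have printed everything yet, and ch is never closed. A variant (a sketch, not part of the original) that closes the channel and waits for the consumer to drain it:

package main

import "fmt"

func main() {
	ch := make(chan int)
	consumerDone := make(chan struct{})

	// Producer: close ch when finished so the consumer's range loop ends.
	go func() {
		defer close(ch)
		for i := 1; i < 3; i++ {
			ch <- i
			fmt.Printf("producer produced %d\n", i)
		}
	}()

	// Consumer: signal on consumerDone after draining the channel.
	go func() {
		defer close(consumerDone)
		for i := range ch {
			fmt.Printf("consumer consumed %d\n", i)
		}
	}()

	<-consumerDone
}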

② Advanced version (buffered channel + WaitGroup)

This version has one producer and multiple consumers, implemented with a buffered channel and a sync.WaitGroup.

type Task struct{}  // the data structure you actually need
producer()          // your actual data-production logic
consumer()          // your actual processing logic

consumerNum (the number of consumers) and channelLen (the channel capacity) in main() can also be adjusted as needed.

Code:

package main

import (
	"fmt"
	"sync"
)

type Task struct {
	Data string
}

var wg sync.WaitGroup

// production logic
func producer(tasks chan Task) {
	t := Task{}
	for i := 62; i < 72; i++ {
		t.Data = string(rune(i)) // convert the code point to a one-character string
		tasks <- t
	}
}

func producerDispatch(tasks chan Task) {
	defer close(tasks)
	producer(tasks)
}

// processing logic for consumed data
func consumer(task Task) {
	fmt.Printf("consume task:%v\n", task)
}

func consumerDispatch(tasks chan Task) {
	defer wg.Done()
	for task := range tasks {
		consumer(task)
	}
}

func main() {
	// number of consumers
	var consumerNum = 10
	var channelLen = 50

	tasks := make(chan Task, channelLen)

	// close the channel once the producer has finished
	go producerDispatch(tasks)

	for i := 0; i < consumerNum; i++ {
		wg.Add(1)
		// each consumer calls wg.Done() when it finishes consuming
		go consumerDispatch(tasks)
	}
	// wait for all goroutines to finish
	wg.Wait()
	fmt.Println("all done")
}

  1. After the producer finishes, it calls close to close the channel.
  2. Because the channel is closed, each consumer exits its for range loop once all tasks have been fetched.
  3. The main goroutine uses Wait() to make sure it exits only after all tasks have been processed.
  4. wg.Add must be called in main, before starting each consumer goroutine. Otherwise main may reach wg.Wait() while the counter is still 0, before the producerDispatch and consumerDispatch goroutines have even been scheduled, and it would simply print "all done" and exit. This is a pitfall encountered in practice; see the sketch after this list.
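
A minimal sketch of the correct placement described in point 4 (hypothetical code, not from the original article):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ {
		// Add BEFORE `go ...`. If Add were called inside the goroutine,
		// main could reach wg.Wait() while the counter is still 0 and
		// print "all done" before any worker had even been scheduled.
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Println("worker", i)
		}(i)
	}

	wg.Wait()
	fmt.Println("all done")
}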

③ Advanced version (ring-buffer queue + worker pool)

This version has multiple producers and multiple consumers, implemented with a bounded ring-buffer queue (guarded by a mutex) and a pool of worker goroutines.

package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

type Queue struct {
	mu    sync.Mutex // guards items, head and tail for concurrent Push/Pop
	items []int
	head  int
	tail  int
}

func NewQueue(size int) *Queue {
	return &Queue{items: make([]int, size)}
}

func (q *Queue) Push(item int) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	next := (q.tail + 1) % len(q.items)
	if next == q.head { // queue is full
		return false
	}
	q.items[q.tail] = item
	q.tail = next
	return true
}

func (q *Queue) Pop() (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.head == q.tail { // queue is empty
		return 0, false
	}
	item := q.items[q.head]
	q.head = (q.head + 1) % len(q.items)
	return item, true
}

type WorkerPool struct {
	workers []chan func()
	wg      sync.WaitGroup
}

func NewWorkerPool(nWorker int) *WorkerPool {
	pool := &WorkerPool{}
	pool.workers = make([]chan func(), nWorker)
	for i := range pool.workers {
		pool.workers[i] = make(chan func())
		pool.wg.Add(1)
		go pool.workerLoop(pool.workers[i])
	}
	return pool
}

func (p *WorkerPool) workerLoop(worker chan func()) {
	defer p.wg.Done()
	for task := range worker {
		task()
	}
}

func (p *WorkerPool) AddTask(task func()) {
	worker := p.workers[rand.Intn(len(p.workers))]
	worker <- task
}

func (p *WorkerPool) Close() {
	for i := range p.workers {
		close(p.workers[i])
	}
	p.wg.Wait()
}

func produce(queue *Queue, pool *WorkerPool, pid int) {
	for i := 1; i <= 3; i++ {
		success := queue.Push(i * pid)
		if !success {
			fmt.Printf("producer %d failed to produce\n", pid)
		} else {
			fmt.Printf("producer %d produced %d\n", pid, i*pid)
			pool.AddTask(func() {
				time.Sleep(time.Millisecond * time.Duration(rand.Intn(3000)))
			})
		}
	}
}

func consume(queue *Queue, pool *WorkerPool, cid int) {
	for {
		item, ok := queue.Pop()
		if !ok {
			return
		}
		fmt.Printf("consumer %d consumed %d\n", cid, item)
		pool.AddTask(func() {
			time.Sleep(time.Millisecond * time.Duration(rand.Intn(3000)))
		})
	}
}

func main() {
	queue := NewQueue(5)
	pool := NewWorkerPool(5)

	nProducer := 3
	for i := 1; i <= nProducer; i++ {
		go produce(queue, pool, i)
	}

	nConsumer := 2
	for i := 1; i <= nConsumer; i++ {
		go consume(queue, pool, i)
	}

	time.Sleep(time.Second * 10)
	pool.Close()
}

Origin: blog.csdn.net/weixin_45565886/article/details/131153381