Go common features (embed, plug-in development), common packages, common snippets (concurrency)

1 Common features

1.1 go:build

//go:build !windows
//go:build is the directive prefix, and !windows is the build condition. The directive means: compile this source file only on non-Windows systems.

// +build !windows
// +build is the older directive prefix, and !windows is a build tag. It tells the compiler to compile this source file only when the windows tag is not set.

Taken together, the two directives mean the file is compiled into the target executable only when building for a non-Windows system.

  • //go:build 386 && windows
  • // +build 386,windows
    Effect: the file is compiled only when the target OS is windows and the architecture is 386.
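
A minimal sketch of a constrained single-file program (both directive styles shown; since Go 1.17 gofmt keeps the two lines in sync, and //go:build is the preferred form):

//go:build linux || darwin
// +build linux darwin

// This file is compiled only when the target OS is Linux or macOS; it is skipped on Windows builds.
package main

import "fmt"

func main() {
	fmt.Println("built for a Unix-like system")
}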

1.2 go:embed

//go:embed is an official compiler directive added in Go 1.16 that embeds files into the executable.

The directive takes three forms:

  1. //go:embed path…
  2. //go:embed pattern
  3. //go:embed dir/*.ext

where:

  • path… is the file or directory to embed; multiple entries can be given, separated by spaces.
  • pattern is a path.Match-style pattern matching the names of the files or directories to embed.
  • dir/*.ext embeds the files with a specific extension in a given directory.

Example

Suppose we have a file called data.txt and want to use it in the program; it can be embedded with the //go:embed directive.

package main

import (
	_ "embed"
	"fmt"
)

//go:embed data.txt
var data string

func main() {
	fmt.Println(data)
}

In this example, //go:embed data.txt embeds the contents of data.txt into the executable and assigns it to the string variable data (note the blank import of embed, which is required when embedding into a plain string or []byte); main() then prints the value of data.
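
To embed several files or a whole directory, the variable type is embed.FS rather than string. A minimal sketch, assuming a local static/ directory containing index.html:

package main

import (
	"embed"
	"fmt"
)

//go:embed static/*
var staticFiles embed.FS

func main() {
	// read one embedded file by its path inside the embedded tree
	b, err := staticFiles.ReadFile("static/index.html")
	if err != nil {
		fmt.Println("read embedded file:", err)
		return
	}
	fmt.Println(string(b))
}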

1.3 Others

  • go:noinline: forbids the compiler from inlining the function that follows, even where it would normally be inlined.
  • go:noescape: tells the compiler that the function's pointer arguments do not escape, which allows better optimization.
  • go:linkname: used to call unexported functions across packages.
  • go:cgo_export_static, //go:cgo_export_dynamic, //go:cgo_import_static, //go:cgo_import_dynamic: used to exchange symbols between Go and C.
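
As a small illustration, go:noinline is simply written on the line directly above the function it applies to (the function name below is made up for the example):

package main

import "fmt"

//go:noinline
func addNoInline(a, b int) int {
	// the compiler keeps this as a real function call instead of inlining it
	return a + b
}

func main() {
	fmt.Println(addNoInline(1, 2))
}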

1.4 Plug-in development

Go officially provides the plugin package, which supports plug-in development.

A common approach is to wire plug-ins in during development, so once the main program is built no new plug-in can be attached. This section instead lets the main program detect and load the plug-in automatically and control when it is called.

1 Basic idea

In plug-in development, there must be a main program that controls, processes, and schedules other plug-ins.

1.1 Basic business

  • We first write a simple business program with two kinds of output:
  1. When the current second is odd, print hello
  2. When the current second is even, print world

Main body code, MainFile.go:

package main

import (
	"fmt"

	"time"
)

// init runs before main
func init() {
	fmt.Println("Process On ==========")
}

func main() {
	// time.Now().Second() returns the current second
	nowSecond := time.Now().Second()
	doPrint(nowSecond)
	fmt.Println("Process Stop ========")
}

// perform the print operation
func doPrint(nowSecond int) {
	if nowSecond%2 == 0 {
		printWorld() //even
	} else {
		printHello() //odd
	}
}

func printHello() {
	fmt.Println("hello")
}

func printWorld() {
	fmt.Println("world")
}

The code is deliberately a little redundant in order to simulate dispatch between business functions.

Running the code prints "Process On ==========", then hello or world depending on the current second, and finally "Process Stop ========".

1.2 Write a simple plug-in

Then we write a plug-in code, 插件代码的入口package也要为mainbut may not contain the main method

  • The plug-in's logic: when the current second is odd, additionally print the current time (the plug-in reads the time itself, so it is not necessarily the same instant used for the hello decision).
  • Plug-in file name: HelloPlugin.go (a minimal sketch is shown below)
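
The plug-in source itself is not reproduced here; a minimal sketch that matches the main program below (which looks up an exported function named PrintNowTime) could look like this:

package main // a plug-in must also be package main

import (
	"fmt"
	"time"
)

// PrintNowTime is looked up by name from the main program via plugin.Lookup.
func PrintNowTime() {
	fmt.Println("now:", time.Now().Format("2006-01-02 15:04:05"))
}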

In the current directory, execute the plugin generation command:

Note: buildmode=plugin only supports 64-bit builds on Linux and macOS; it is not supported on Windows.

// on macOS or Linux
go build --buildmode=plugin -o HelloPlugin.so HelloPlugin.go

An extra HelloPlugin.so file will appear in the current directory. Next, have the main program load the plug-in:

package main

import (
	"fmt"
	"plugin"
	"time"
)

// plug-in file name
const pluginFile = "HelloPlugin.so"

// holds the symbol (function or variable) from the plug-in that will be called
var pluginFunc plugin.Symbol

// init runs before main
func init() {
	// open the plug-in file
	pluginFile, err := plugin.Open(pluginFile)

	if err != nil {
		fmt.Println("An error occurred while opening the plug-in")
	} else {
		// look up the target function by name
		targetFunc, err := pluginFile.Lookup("PrintNowTime")
		if err != nil {
			fmt.Println("An error occurred while searching for the target func")
		}

		pluginFunc = targetFunc
	}

	fmt.Println("Process On ==========")
}

func main() {
	// time.Now().Second() returns the current second
	nowSecond := time.Now().Second()
	doPrint(nowSecond)
	fmt.Println("Process Stop ========")
}

func doPrint(nowSecond int) {
	if nowSecond%2 == 0 {
		printWorld() //even
	} else {
		printHello() //odd
	}
}

func printHello() {
	// call into the plug-in if it was loaded
	if pluginFunc != nil {
		// assert the stored symbol to a function type
		if targetFunc, ok := pluginFunc.(func()); ok {
			targetFunc()
		}
	}
	fmt.Println("hello")
}

func printWorld() {
	fmt.Println("world")
}

Running the code now prints the current time (from the plug-in) before hello on odd seconds, and world as before on even seconds.

2 Common packages

2.1 Standard library

Document address: http://doc.golang.ltd/

① os package

1 Files and directories
//【1】create a file
file, err := os.Create("file.txt")

//【2】create directories
err := os.Mkdir("test2", os.ModePerm)     //a single directory
err := os.MkdirAll("/a/b/c", os.ModePerm) //nested directories

//【3】remove a file or a directory tree
err := os.Remove("test.txt")
err = os.RemoveAll("test2")

//【4】get the working directory
dir, err := os.Getwd()

//【5】change the working directory
err := os.Chdir("d:/")

//【6】read and write a whole file
bytes, err := os.ReadFile("test2.txt")
os.WriteFile("test2.txt", []byte("hello go"), os.ModePerm)

//【7】rename a file
err := os.Rename("test2.txt", "test3.txt")

//【8】list a directory (see os.ReadDir under the read operations below)
2 File read operations
//【1】open a file
file, err := os.Open("a.txt") //error if a.txt does not exist (the file is opened read-only)
file, err := os.OpenFile("a1.txt", os.O_RDWR|os.O_CREATE, 0755) //created if it does not exist

//【2】read a file in a loop
f, _ := os.Open("a.txt")
for {
	buf := make([]byte, 3)
	n, err := f.Read(buf)
	//end of file reached
	if err == io.EOF {
		break
	}
	fmt.Printf("n:%v\n", n)
	fmt.Printf("string(buf):%v\n", string(buf))
}
f.Close()

//【3】read from a given offset
Approach 1:
f, _ := os.Open("a.txt")
buf := make([]byte, 4)
//start reading at offset 3
n, _ := f.ReadAt(buf, 3)
fmt.Println("n=", n)
fmt.Println("string(buf)=", string(buf))

Approach 2:
file, _ := os.Open("test/a.txt")
defer file.Close()
//seek to offset 3 before reading
file.Seek(3, 0)
buf := make([]byte, 10)
n, _ := file.Read(buf)
fmt.Println("n=", n)
fmt.Println("string(buf)=", string(buf))

//【4】read a directory
dir, _ := os.ReadDir("a/")
for _, v := range dir {
	fmt.Printf("v.IsDir():%v\n", v.IsDir())
	fmt.Printf("v.Name():%v\n", v.Name())
}

3 File write operations
//os.O_TRUNC  //truncate (overwrite the previous content)
//os.O_APPEND //append

//【1】write bytes
file, _ := os.OpenFile("a.txt", os.O_RDWR|os.O_APPEND, 0775)
file.Write([]byte("hello golang"))
file.Close()

//【2】write a string
file.WriteString("hello java")

//【3】write at a given offset
//start writing at offset 3 of the file
file.WriteAt([]byte("aaa"), 3)
4 Process-related operations
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	//id of the currently running process
	fmt.Printf("os.Getpid():%v\n", os.Getpid())
	//parent process id
	fmt.Printf("os.Getppid():%v\n", os.Getppid())

	//attributes for the new process
	attr := &os.ProcAttr{
		//Files are the open files inherited by the new process;
		//the first three are standard input, standard output and standard error
		Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
		//environment variables of the new process
		Env: os.Environ(),
	}

	//start a new process
	p, err := os.StartProcess("D:\\Download\\EditPlus\\EditPlus.exe", []string{"D:\\Download\\EditPlus\\EditPlus.exe", "E:\\processDemo.txt"}, attr)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(p)
	fmt.Println("进程ID:", p.Pid)
	//find a process by its id
	p2, _ := os.FindProcess(p.Pid)
	fmt.Println(p2)
	//after 5 seconds, run the function
	time.AfterFunc(time.Second*5, func() {
		//send a kill signal to process p
		p.Signal(os.Kill)
	})
	//wait for process p to exit and return its state
	ps, _ := p.Wait()
	fmt.Println(ps.String())

}

operation result:

os.Getpid():19828
os.Getppid():22092
&{20312 372 0 {{0 0} 0 0 0 0}}
进程ID: 20312
&{20312 352 0 {{0 0} 0 0 0 0}}
exit status 1
5 Environment variables
//【1】get and set
//get all environment variables
s := os.Environ()
fmt.Printf("s:%v\n", s)
//get a single environment variable
s2 := os.Getenv("GOPATH")
fmt.Printf("s2:%v\n", s2)
//getting a non-existent variable returns an empty string without an error;
//use LookupEnv if you need to know whether the variable exists
s2 = os.Getenv("lalala")
//set an environment variable
os.Setenv("env1", "env1Value")

//【2】look up
s3, b := os.LookupEnv("env1")
if b {
	fmt.Println("s3=", s3)
}

//【3】clear all environment variables, use with caution!!!
//os.Clearenv()

② io, ioutil and bufio packages

1 io package

Types that implement the Reader and Writer interfaces include:

  • os.File
  • strings.Reader
  • bufio.Reader
  • bytes.Buffer
  • bytes.Reader
  • compress/gzip.Reader/Writer
  • encoding/csv.Reader/Writer

Simple test case:

func main() {
	r := strings.NewReader("hello world")
	//os.Stdout is also a Writer
	_, err := io.Copy(os.Stdout, r)
	if err != nil {
		fmt.Println(err)
	}
	//console output: hello world
}
2 ioutil package
  • ReadAll: read all data and return the bytes read
  • ReadDir: read a directory and return the entries as []os.FileInfo
  • ReadFile: read a file and return its content as a byte slice
  • WriteFile: write a byte slice to the given file path
  • TempDir: create a temporary directory with the given prefix inside a directory and return the new directory's path
  • TempFile: create a temporary file with the given prefix inside a directory and return the *os.File

Simple case:

func main() {
	fi, _ := ioutil.ReadDir(".")
	for _, v := range fi {
		if v.IsDir() {
			fmt.Println("dir=", v.Name())
		} else {
			fmt.Println("file=", v.Name())
		}
	}
}
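
The example above does not touch TempDir or TempFile; a minimal sketch of TempFile (the prefix "demo-" is arbitrary, and newer code would use os.CreateTemp, which has the same shape) might look like this:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// create a temporary file in the default temp directory with the prefix "demo-"
	f, err := ioutil.TempFile("", "demo-")
	if err != nil {
		fmt.Println(err)
		return
	}
	// remove and close the file when done
	defer os.Remove(f.Name())
	defer f.Close()

	f.WriteString("temporary content")
	fmt.Println("temp file created at:", f.Name())
}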
3 bufio package

When the text involves non-ASCII characters such as Chinese, convert it via rune.

Reader operation:

func main() {
	file, _ := os.Open("test/test1/test2/test.csv")
	defer file.Close()
	br := bufio.NewReader(file)
	buffer := make([]byte, 10)
	for {
		n, err := br.Read(buffer)
		if err == io.EOF {
			break
		} else {
			fmt.Println("value=", string(buffer[:n]))
		}
	}
}
  • Writer operation
func main() {
	//to write to a file, open it with OpenFile and the appropriate write flags
	file, _ := os.OpenFile("test/test1/test2/test.csv", os.O_RDWR, 0777)
	defer file.Close()
	w := bufio.NewWriter(file)
	w.Write([]byte("hahaha~~~"))
	w.Flush()
}
  • Scanner
func main() {
	s := strings.NewReader("ABC DEF KIS")
	bs := bufio.NewScanner(s)
	//split the input on whitespace (word by word)
	bs.Split(bufio.ScanWords)
	for bs.Scan() {
		fmt.Println(bs.Text())
	}
}

③ path/filepath

  1. Rel
func Rel(basepath, targpath string) (string, error)

Rel returns the path of targpath relative to basepath. For example, if basepath is /a and targpath is /a/b/c, it returns b/c. If one of the two arguments is an absolute path and the other is a relative path, an error is returned.

  2. Join
func Join(elem ...string) string

Join joins any number of path elements into a single path, applies Clean to the result, and returns it.

  3. Split, Base and Ext
files := "E:\\data\\test.txt"
paths, fileName := filepath.Split(files)
fmt.Println(paths, fileName)      //directory and file name in the path: E:\data\  test.txt
fmt.Println(filepath.Base(files)) //file name in the path: test.txt
fmt.Println(path.Ext(files))      //extension of the file in the path: .txt
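
A quick sketch exercising Rel and Join (the paths are illustrative; separators in the output follow the host OS):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	rel, err := filepath.Rel("/a", "/a/b/c")
	fmt.Println(rel, err) // b/c <nil> on Unix-style paths

	// Join cleans the result, so ".." and redundant separators are resolved
	fmt.Println(filepath.Join("a", "b", "..", "c")) // a/c
}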

④ archive/zip package

  1. Compression
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

func compressionDir(baseDir string) (string, error) {
	zipFileName := baseDir + ".zip"

	// create a new zip file
	zipFile, err := os.Create(zipFileName)
	if err != nil {
		return "", err
	}
	defer zipFile.Close()

	// create a zip.Writer
	zipWriter := zip.NewWriter(zipFile)
	defer zipWriter.Close()

	// walk every file and subdirectory under the directory
	err = filepath.Walk(baseDir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}

		// build the entry path inside the zip file
		relativePath := strings.TrimPrefix(path, baseDir)
		zipPath := strings.TrimLeft(filepath.Join("/", relativePath), "/")

		// for a directory (or an empty directory), create a directory entry in the zip file
		if info.IsDir() || isEmptyDir(path) {
			_, err := zipWriter.Create(zipPath + "/")
			if err != nil {
				return err
			}
		} else {
			// for a regular file, create a file entry in the zip file
			zipFile, err := zipWriter.Create(zipPath)
			if err != nil {
				return err
			}

			// open the original file
			file, err := os.Open(path)
			if err != nil {
				return err
			}
			defer file.Close()

			// copy the original file's content into the zip entry
			_, err = io.Copy(zipFile, file)
			if err != nil {
				return err
			}
		}

		return nil
	})

	if err != nil {
		return "", err
	}

	return zipFileName, nil
}

// report whether a directory is empty
func isEmptyDir(dirPath string) bool {
	dir, err := os.Open(dirPath)
	if err != nil {
		return false
	}
	defer dir.Close()

	_, err = dir.Readdirnames(1)
	return err == io.EOF
}

func main() {
	// call the compression function
	zipFile, err := compressionDir("E:\\Go\\GoPro\\src\\go_code\\gouitest\\test")
	if err != nil {
		fmt.Println("压缩目录失败:", err)
		return
	}

	fmt.Println("目录压缩成功,压缩文件:", zipFile)
}

Running it produces test.zip (the directory name plus a .zip suffix) next to the compressed directory.

  2. Decompression
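
The original post does not include the decompression code; a minimal sketch of the reverse operation using archive/zip (the file and directory names are placeholders, and entry names from untrusted archives should be validated before joining them onto the destination) could look like this:

package main

import (
	"archive/zip"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// decompressZip extracts zipFileName into destDir.
func decompressZip(zipFileName, destDir string) error {
	r, err := zip.OpenReader(zipFileName)
	if err != nil {
		return err
	}
	defer r.Close()

	for _, f := range r.File {
		target := filepath.Join(destDir, f.Name)

		// directory entry: just create the directory
		if f.FileInfo().IsDir() {
			if err := os.MkdirAll(target, os.ModePerm); err != nil {
				return err
			}
			continue
		}

		// make sure the parent directory exists
		if err := os.MkdirAll(filepath.Dir(target), os.ModePerm); err != nil {
			return err
		}

		// copy the entry's content into a new file
		src, err := f.Open()
		if err != nil {
			return err
		}
		dst, err := os.Create(target)
		if err != nil {
			src.Close()
			return err
		}
		_, err = io.Copy(dst, src)
		src.Close()
		dst.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := decompressZip("test.zip", "test_out"); err != nil {
		fmt.Println("decompress failed:", err)
		return
	}
	fmt.Println("decompress done")
}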

2.2 Concurrency

①Mutex

- mutual-exclusion lock
- non-reentrant lock

sync.Mutex is a synchronization primitive in Go that implements a mutual-exclusion lock. It guarantees that only one goroutine accesses a shared resource at a time, avoiding race conditions and data races.

Using sync.Mutex is straightforward: call the Lock method to acquire the lock, access the shared resource, then call Unlock to release it. If the lock is already held by another goroutine, the calling goroutine blocks until the lock is released.

Now let us use sync.Mutex to make concurrent access to a shared resource safe.

  • Without sync.Mutex
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	i  = 100
	wg sync.WaitGroup
)

func main() {

	for i := 0; i < 50; i++ {
		wg.Add(1)
		go add()
		wg.Add(1)
		go sub()
	}
	//wait for all goroutines to finish via the WaitGroup
	wg.Wait()
	fmt.Println("main....i=", i)
}

func add() {
	time.Sleep(time.Millisecond * 10)
	defer wg.Done()
	i += 1
	fmt.Println("i++, i=", i)
}

func sub() {
	time.Sleep(time.Millisecond * 2)
	defer wg.Done()
	i -= 1
	fmt.Println("i--, i=", i)
}


The final value should be i=100, but without any concurrency control the program usually prints something else, so the result is wrong. Next we control the concurrent access with the sync.Mutex mutex.

  • Use sync.Mutex to control concurrency
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	i    = 100
	wg   sync.WaitGroup
	lock sync.Mutex
)

func main() {

	for i := 0; i < 50; i++ {
		wg.Add(1)
		go add()
		wg.Add(1)
		go sub()
	}
	//wait for all goroutines to finish via the WaitGroup
	wg.Wait()
	fmt.Println("main....i=", i)
}

func add() {
	defer wg.Done()
	//lock before touching the shared resource
	lock.Lock()
	time.Sleep(time.Millisecond * 10)
	i += 1
	fmt.Println("i++, i=", i)
	lock.Unlock()
}

func sub() {
	defer wg.Done()
	lock.Lock()
	time.Sleep(time.Millisecond * 2)
	i -= 1
	fmt.Println("i--, i=", i)
	lock.Unlock()
}

Now, no matter how many times we run it, we get the correct result.

Can sync.Mutex be unlocked more than once? No. Calling Unlock without holding the lock, or calling Unlock again after the lock has already been released, causes a runtime error. So when using sync.Mutex, make sure each Lock is paired with exactly one timely Unlock.

package main

import "sync"

var (
	mutex sync.Mutex
)

func main() {
	/*
		【1】lock once, unlock twice => error
		mutex.Lock()
		mutex.Unlock()
		mutex.Unlock()
		//fatal error: sync: unlock of unlocked mutex
	*/

	/*
		【2】lock twice, unlock once => error
		mutex.Lock()
		mutex.Lock()
		mutex.Unlock()
		//fatal error: all goroutines are asleep - deadlock!
	*/
	/*
		【3】lock twice in a row, then unlock twice => error (`sync.Mutex` is not reentrant)
		mutex.Lock()
		mutex.Lock()
		mutex.Unlock()
		mutex.Unlock()
		//fatal error: all goroutines are asleep - deadlock!
	*/

	// 【4】OK: the lock can only be acquired again after it has been released
	mutex.Lock()
	mutex.Unlock()
	mutex.Lock()
	mutex.Unlock()
}

  • Aside: reentrant locks
  • When a thread requests a lock that no other thread holds, it acquires the lock successfully. Other threads that then request the lock block and wait. If the thread that already owns the lock requests it again, however, it is not blocked; the call simply succeeds, which is why such a lock is called reentrant. As long as you hold it you can keep acquiring it, for example inside a recursive algorithm, without blocking or deadlocking.
  • Mutex is not a reentrant lock. Its implementation does not record which goroutine owns it; in principle any goroutine may unlock it, so there is no way to tell whether a Lock call is a re-entry.
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mutex sync.Mutex
	count int
)

func main() {
	go func() {
		mutex.Lock()
		count++
	}()
	time.Sleep(time.Second * 1)
	mutex.Unlock()
	fmt.Println("main.....")
	fmt.Println("count=", count)
	//main.....
	//count= 1
}

As you can see, the lock was acquired inside a goroutine but released in main (main itself can be regarded as a special goroutine), which confirms that sync.Mutex does not record which goroutine holds the lock.

②WaitGroup

Goroutines need to wait for one another: without a WaitGroup, the main program may exit before the tasks in the goroutines finish, and when main exits all goroutines are terminated with it.

package main

import (
	"fmt"
	"sync"
)

var (
	wg sync.WaitGroup
)

func hello(i int) {
	defer wg.Done() //wg.Add(-1)
	fmt.Println("hello", i)
	//wg.Done() could also be called here instead of deferred
}

func main() {
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go hello(i)
	}

	//wait until every goroutine has finished its task
	wg.Wait()
	fmt.Println("main====")
}

③ Timer and Ticker

1 Timer

A Timer fires once; timer.C is essentially a channel.

package main

import (
	"fmt"
	"time"
)

func main() {
	//[1] time.NewTimer: wait two seconds
	//timer := time.NewTimer(time.Second * 2)
	//t := <-timer.C
	//fmt.Println(t)

	//[2] time.After(time.Second * 2): wait two seconds
	//time.After(time.Second * 2)

	//[3]
	timer := time.NewTimer(time.Second * 5)
	timer.Reset(time.Second * 6) //reset the timer duration
	//<-timer.C
	timer.Stop() //stop the timer (since timer.C is not read, the program does not block here)
	fmt.Println("--")
}

2 Ticker

A Timer fires only once; a Ticker fires repeatedly at a fixed interval.

Case one:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println("start...")
	//a ticker that fires every 5 seconds
	ticker := time.NewTicker(time.Second * 5)
	for range ticker.C {
		fmt.Println("middle...")
	}
	fmt.Println("end...") //never reached: ticker.C keeps firing and the loop has no break
}

Case two:

package main

import (
	"fmt"
	"time"
)

func main() {
	//a ticker that fires every second
	ticker := time.NewTicker(time.Second)

	chanInt := make(chan int)
	go func() {
		//triggered by ticker.C
		for range ticker.C {
			select {
			case chanInt <- 1:
			case chanInt <- 2:
			case chanInt <- 3:
			}
		}
	}()

	sum := 0
	for v := range chanInt {
		fmt.Println("接收到:", v)
		sum += v
		if sum >= 10 {
			break
		}
	}
}

operation result:

接收到: 2
接收到: 2
接收到: 1
接收到: 3
接收到: 1
接收到: 2

④ runtime

Functions for yielding the CPU time slice and influencing scheduling:

  • runtime.Gosched(): yields the time slice so other goroutines can run
  • runtime.Goexit(): exits the current goroutine immediately
  • runtime.NumCPU(): returns the number of CPU cores on the current machine
  • runtime.GOMAXPROCS(num int): sets the number of CPUs that can execute Go code simultaneously (the default is the number of CPU cores)
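
A small sketch showing how these calls are typically used (GOMAXPROCS(2) is just an illustrative value):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("NumCPU:", runtime.NumCPU())
	// limit the number of CPUs executing Go code simultaneously
	runtime.GOMAXPROCS(2)

	go func() {
		fmt.Println("in goroutine")
	}()

	// yield the time slice so the goroutine above gets a chance to run (not guaranteed)
	runtime.Gosched()
	fmt.Println("main done")
}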

⑤ Atomic variables

When operating on shared resources concurrently, we can keep the data correct in two ways:

  1. locks
  2. atomic operations

The common atomic operations are:

  • add / subtract
  • load (read)
  • compare-and-swap (CAS)
  • swap
  • store (write)
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	//test_add_sub()
	//test_load_store()
	test_cas()
	//besides compare-and-swap, atomic also offers a plain swap, but it is rarely used
}

func test_add_sub() {
	var i int32 = 100
	atomic.AddInt32(&i, 1)
	fmt.Println("i=", i)
	atomic.AddInt32(&i, -1)
	fmt.Println("i=", i)
}

func test_load_store() {
	var j int64 = 200
	val := atomic.LoadInt64(&j) //read
	fmt.Println("val=", val)
	atomic.StoreInt64(&j, -100) //write
	fmt.Println("j=", j)
}

func test_cas() {
	var k int32 = 8
	f := atomic.CompareAndSwapInt32(&k, 8, 100)
	fmt.Println("flag=", f)
	fmt.Println("k=", k)
}

2.3 Operating the database

Taking MySQL as an example: first install MySQL 8, then import the driver library that lets Go talk to MySQL.

  • Address: https://pkg.go.dev/github.com/go-sql-driver/mysql#section-readme
  • It can also be fetched directly by running go get in a terminal:
    go get -u github.com/go-sql-driver/mysql
package main

import (
	"database/sql"
	"fmt"
	_ "github.com/go-sql-driver/mysql"
)

var db *sql.DB

func initDB() (err error) {
	dataSource := "root:200151@tcp(127.0.0.1:3306)/test?charset=utf8mb4&parseTime=true"
	//sql.Open does not check whether the user name and password are correct
	//note: do not use := here, because we are assigning to the global db used in main
	db, err = sql.Open("mysql", dataSource)
	if err != nil {
		return err
	}
	//try to connect to the database (verifies that dataSource is valid)
	err = db.Ping()
	if err != nil {
		return err
	}
	return nil
}

func main() {
	err := initDB()
	if err != nil {
		fmt.Println("initDB fail, err=", err)
	} else {
		fmt.Println("连接成功!")
	}
	//take insert as an example (other statements work the same way)
	s := "insert into account (name, money, question) values (?, ?, ?)"
	result, err := db.Exec(s, "ziyi", 300.0, "what")
	if err != nil {
		fmt.Println("insert fail err=", err)
	} else {
		//insert success  {0xc000102000 0xc00009ea00}
		fmt.Println("insert success ", result)
	}
}

3 Common snippets

3.1 Controlling the number of goroutines (limiting concurrency)

Limit concurrency to keep memory usage under control.

Some simple implementations follow.

Via a weighted semaphore (golang.org/x/sync/semaphore):

package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	//maximum concurrency
	maxConcurrency := 20
	//create a weighted semaphore
	sem := semaphore.NewWeighted(int64(maxConcurrency))
	//create a wait group
	wg := sync.WaitGroup{}
	//create a context
	ctx := context.TODO()
	//number of tasks to consume
	taskCount := 100
	//simulate 100 tasks
	for i := 0; i < taskCount; i++ {
		//acquire one semaphore token
		sem.Acquire(ctx, 1)
		//increase the wait-group counter
		wg.Add(1)
		//start a goroutine to consume the task
		go func(taskID int) {
			//release the semaphore token when the goroutine ends
			defer sem.Release(1)
			//simulate the cost of the task
			time.Sleep(time.Second)
			//report completion
			fmt.Printf("Task %d completed\n", taskID)
			//decrease the wait-group counter
			wg.Done()
		}(i)
	}

	//wait for all tasks to finish
	wg.Wait()
	fmt.Println("All tasks completed")
}

Via a single channel:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	maxGoroutines := 500
	taskSize := 60
	var sem = make(chan struct{}, maxGoroutines)
	var wg sync.WaitGroup

	for i := 0; i < taskSize; i++ {
		wg.Add(1)
		go func(id int) {
			work(id, sem, &wg)
		}(i)
	}
	wg.Wait() //wait for all goroutines to finish
	close(sem)
}

func work(id int, sem chan struct{}, wg *sync.WaitGroup) {
	sem <- struct{}{} //acquire a slot: semaphore count +1
	fmt.Println("id=", id)
	time.Sleep(time.Millisecond * 500)
	defer func() {
		<-sem
		wg.Done()
	}()
}

Via multiple channels (a task channel plus a buffered channel used as a semaphore):

package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(id int, wg *sync.WaitGroup, sem chan struct{}) {
	defer wg.Done()
	fmt.Printf("Worker %d starting\n", id)
	//simulate work...
	time.Sleep(time.Millisecond * 500)
	fmt.Printf("Worker %d finished\n", id)
	//release the semaphore
	<-sem
}

func main() {
	//upper limit on the number of goroutines
	maxWorkers := 2
	//create the wait group and the semaphore channel
	var wg sync.WaitGroup
	sem := make(chan struct{}, maxWorkers)
	//create some tasks
	tasks := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	//create the task channel
	taskChan := make(chan int)
	//start a goroutine to dispatch the tasks
	go func() {
		for task := range taskChan {
			//wait for a free slot in the semaphore
			sem <- struct{}{}
			//increase the wait-group counter
			wg.Add(1)
			//start a worker goroutine
			go worker(task, &wg, sem)
		}
	}()
	//send the tasks to the task channel
	for _, task := range tasks {
		taskChan <- task
	}
	//close the task channel
	close(taskChan)
	//wait for all goroutines to finish
	wg.Wait()
}

Method 2: via the third-party semaphore package github.com/marusama/semaphore

package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"github.com/marusama/semaphore"
)

func main() {
	//upper limit on concurrency
	maxWorkers := 10
	//create the semaphore
	sem := semaphore.New(maxWorkers)
	//create the wait group
	var wg sync.WaitGroup
	//create some tasks
	tasks := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

	//iterate over the tasks
	for _, task := range tasks {
		//acquire the semaphore
		sem.Acquire(context.TODO(), 1)

		//increase the wait-group counter
		wg.Add(1)
		//start a goroutine
		go worker(task, sem, &wg)
	}
	//wait for all goroutines to finish
	wg.Wait()
}

func worker(id int, sem semaphore.Semaphore, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("Worker %d starting\n", id)
	//simulate work
	time.Sleep(time.Millisecond * 500)
	fmt.Printf("Worker %d finished\n", id)
	//release the semaphore
	sem.Release(1)
}

Method 3:

Create goroutines flexibly according to the number of tasks, without exceeding the defined maximum number of goroutines.

  • Tasks are dispatched to the worker goroutines through a channel
package main

import (
	"context"
	"fmt"
	"github.com/aobco/log"
	"sync"
	"sync/atomic"
	"time"
)

var (
	num        int32 = 0
	limit      int32 = 300
	wg               = new(sync.WaitGroup)
	maxWorkers       = 50
	numMutex   sync.Mutex
)

func main() {
	executeTask()
	fmt.Println("main=========")
}

func executeTask() {
	ctx, cancelFunc := context.WithCancel(context.Background())
	//create the task channel
	taskChan := make(chan int)
	go func() {
		for {
			if num > limit {
				cancelFunc()
				log.Infof("cancel func....num %v limit %v", num, limit)
				break
			}
			time.Sleep(time.Millisecond * 100) //small delay between checks for the cancel condition
		}
	}()
	for i := 0; i < maxWorkers; i++ {
		wg.Add(1)
		go func(i int) {
			work(i, ctx, taskChan)
			wg.Done()
		}(i)
	}
	go assignTask(taskChan)
	wg.Wait()
	//time.Sleep(time.Second * 10)
	//close(taskChan) // close the task channel to tell the workers to exit
	fmt.Print("all done\n")
}

func assignTask(taskChan chan int) {
	defer close(taskChan)
	for i := 0; i < 20; i++ {
		taskChan <- i
		//time.Sleep(time.Millisecond * 50)
	}
}

func work(id int, ctx context.Context, taskChan chan int) {
	for {
		select {
		case <-ctx.Done():
			log.Infof("%v received the signal...", id)
			//other cleanup could be done here
			return //make sure the worker goroutine exits
		case task, ok := <-taskChan:
			if !ok {
				log.Infof("%v task channel closed...", id)
				return //the task channel was closed, exit the worker
			}
			time.Sleep(time.Millisecond * 200)
			numMutex.Lock()
			atomic.AddInt32(&num, int32(task))
			numMutex.Unlock()
			log.Infof("%v is working...task %v", id, task)
		}
	}
}

Tutorial: https://duoke360.com/tutorial/golang
