Go language integration with etcd

Introduction to etcd

etcd is an open-source, highly available, distributed key-value store written in Go. It is commonly used for shared configuration and for service registration and discovery. Similar projects include ZooKeeper and Consul.

etcd has the following characteristics:

Full replication: the full data set is available on every node in the cluster
High availability: etcd can be used to avoid single points of failure caused by hardware or network issues
Consistency: every read returns the latest write across multiple hosts
Simple: includes a well-defined, user-facing API (gRPC)
Secure: automatic TLS with optional client certificate authentication
Fast: benchmarked at 10,000 writes per second
Reliable: built on the Raft consensus algorithm for strong consistency and high availability

etcd application scenarios

service discovery

Service discovery solves one of the most common problems in distributed systems: how processes or services in the same distributed cluster find each other and establish connections. In essence, service discovery means finding out whether any process in the cluster is listening on a given TCP or UDP port, and being able to look it up and connect to it by name.
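
As a concrete illustration, here is a minimal sketch of lease-based service registration and prefix-based discovery, using the clientv3 package introduced later in this article. The key prefix /services/web/, the instance address, and the 10-second TTL are all made up for the example; a local etcd at 127.0.0.1:2379 is assumed.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// service registration/discovery sketch

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// register: write this instance's address under a well-known prefix,
	// bound to a lease so the key disappears if the process dies
	lease, err := cli.Grant(context.TODO(), 10)
	if err != nil {
		log.Fatal(err)
	}
	_, err = cli.Put(context.TODO(), "/services/web/10.0.0.5:8080", "10.0.0.5:8080",
		clientv3.WithLease(lease.ID))
	if err != nil {
		log.Fatal(err)
	}
	// keep the registration alive for as long as this process runs
	ch, err := cli.KeepAlive(context.TODO(), lease.ID)
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for range ch {
			// drain keepalive responses
		}
	}()

	// discover: a consumer lists all instances of the service by prefix
	resp, err := cli.Get(context.TODO(), "/services/web/", clientv3.WithPrefix())
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}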

configuration center

Configuration information can be stored in etcd for centralized management.

The usage pattern for this scenario is usually as follows: the application fetches its configuration from etcd once at startup, and at the same time registers a Watcher on the configuration key and waits. After every configuration update, etcd notifies the subscriber in real time, so the application always sees the latest configuration.
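
A minimal sketch of that fetch-once-then-watch pattern, assuming the configuration lives under the made-up key /config/app on a local etcd:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// configuration center sketch: read once, then watch for updates

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// fetch the configuration once at startup
	resp, err := cli.Get(context.TODO(), "/config/app")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("initial config: %s\n", kv.Value)
	}

	// then subscribe and react to every subsequent update
	for wresp := range cli.Watch(context.Background(), "/config/app") {
		for _, ev := range wresp.Events {
			fmt.Printf("config updated: %s\n", ev.Kv.Value)
		}
	}
}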

distributed lock

Because etcd uses the Raft algorithm to maintain strong consistency, the value stored for any operation is globally consistent across the cluster, which makes it straightforward to implement distributed locks. The lock service can be used in two ways: to maintain exclusive access, or to control execution order.

Maintaining exclusive access means that of all users trying to acquire the lock, only one can ultimately hold it. For this purpose, etcd provides a set of APIs implementing the atomic compare-and-swap (CAS) operation for distributed locks. By setting the prevExist value, you can ensure that when multiple nodes try to create the same directory at the same time, only one succeeds; the user whose create succeeds is considered to hold the lock (a v3-API sketch of this create-if-absent operation follows below).
Controlling timing means that all users trying to acquire the lock are eventually scheduled for execution, but the order in which they obtain the lock is globally unique and determines the execution order. etcd also provides a set of APIs for this (automatically creating ordered keys). When setting a value for a directory, specify the POST action; etcd then automatically generates a key under that directory named with the current largest value and stores the new value (the client identifier). The API can also list all keys under the directory in order; the positions of these keys represent each client's turn, and the values stored in them identify the clients.
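
Note that prevExist belongs to the older etcd v2 API. With the v3 API used in the rest of this article, the same create-if-absent semantics can be sketched as a transaction that only writes the key if it has never been created (CreateRevision == 0); the key name and value below are illustrative.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// create-if-absent (CAS-style) lock sketch using a v3 transaction

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// the Put only executes if the key has never been created;
	// whoever succeeds first is considered to hold the lock
	resp, err := cli.Txn(context.TODO()).
		If(clientv3.Compare(clientv3.CreateRevision("/my-cas-lock"), "=", 0)).
		Then(clientv3.OpPut("/my-cas-lock", "owner-1")).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	if resp.Succeeded {
		fmt.Println("lock acquired")
	} else {
		fmt.Println("lock already held by another client")
	}
}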

Why use etcd instead of ZooKeeper?

Everything etcd implements can also be implemented with ZooKeeper. So why use etcd instead of ZooKeeper directly?

Why not choose ZooKeeper?

Deployment and maintenance are complex, and the Paxos-style consensus protocol it uses (ZAB) is complex and hard to understand. Official client interfaces are only provided for Java and C.
Being written in Java, it pulls in a large number of dependencies, which makes it more troublesome for operations staff to maintain.
Its development has been slow in recent years, and it lags behind rising stars such as etcd and Consul.

Why choose etcd?

Simple: written in Go, it is easy to deploy; it exposes an HTTP/JSON API that is easy to use; and the Raft algorithm it uses for strong consistency is designed to be understandable.
By default, etcd persists data to disk as soon as it is updated.
etcd supports SSL/TLS client certificate authentication.

Finally, etcd is a young project undergoing rapid iteration and development, which is both an advantage and a disadvantage: the advantage is its potential; the disadvantage is that it lacks years of validation by large projects. However, well-known projects such as CoreOS, Kubernetes, and Cloud Foundry already use etcd in production, so overall, etcd is worth trying.

etcd cluster

As a highly available key-value store, etcd is designed for clustering from the ground up. Since the Raft algorithm requires the votes of a majority of nodes when making decisions, etcd is generally deployed as a cluster with an odd number of nodes, typically 3, 5, or 7.
Cluster members must be specified on every etcd node, and to distinguish different clusters it is best to also configure a unique token.

The following cluster information is defined in advance, where n1, n2, and n3 represent three different etcd nodes.

TOKEN=token-01
CLUSTER_STATE=new
CLUSTER=n1=http://10.240.0.17:2380,n2=http://10.240.0.18:2380,n3=http://10.240.0.19:2380

Execute the following command on the n1 machine to start etcd:

etcd --data-dir=data.etcd --name n1 \
	--initial-advertise-peer-urls http://10.240.0.17:2380 --listen-peer-urls http://10.240.0.17:2380 \
	--advertise-client-urls http://10.240.0.17:2379 --listen-client-urls http://10.240.0.17:2379 \
	--initial-cluster ${CLUSTER} \
	--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Execute the following command on the n2 machine to start etcd:

etcd --data-dir=data.etcd --name n2 \
	--initial-advertise-peer-urls http://10.240.0.18:2380 --listen-peer-urls http://10.240.0.18:2380 \
	--advertise-client-urls http://10.240.0.18:2379 --listen-client-urls http://10.240.0.18:2379 \
	--initial-cluster ${CLUSTER} \
	--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Execute the following command on the n3 machine to start etcd:

etcd --data-dir=data.etcd --name n3 \
	--initial-advertise-peer-urls http://10.240.0.19:2380 --listen-peer-urls http://10.240.0.19:2380 \
	--advertise-client-urls http://10.240.0.19:2379 --listen-client-urls http://10.240.0.19:2379 \
	--initial-cluster ${CLUSTER} \
	--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

The etcd website also provides a public discovery service. You can obtain a discovery URL for a cluster of a given size with the following command and pass it to each node via the --discovery flag (in place of --initial-cluster).

curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/a81b5818e67a6ea83e9d4daea5ecbc92

# grab this token
TOKEN=token-01
CLUSTER_STATE=new
DISCOVERY=https://discovery.etcd.io/a81b5818e67a6ea83e9d4daea5ecbc92


etcd --data-dir=data.etcd --name n1 \
	--initial-advertise-peer-urls http://10.240.0.17:2380 --listen-peer-urls http://10.240.0.17:2380 \
	--advertise-client-urls http://10.240.0.17:2379 --listen-client-urls http://10.240.0.17:2379 \
	--discovery ${DISCOVERY} \
	--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}


etcd --data-dir=data.etcd --name n2 \
	--initial-advertise-peer-urls http://10.240.0.18:2380 --listen-peer-urls http://10.240.0.18:2380 \
	--advertise-client-urls http://10.240.0.18:2379 --listen-client-urls http://10.240.0.18:2379 \
	--discovery ${DISCOVERY} \
	--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}


etcd --data-dir=data.etcd --name n3 \
	--initial-advertise-peer-urls http://10.240.0.19:2380 --listen-peer-urls http://10.240.0.19:2380 \
	--advertise-client-urls http://10.240.0.19:2379 --listen-client-urls http://10.240.0.19:2379 \
	--discovery ${DISCOVERY} \
	--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

At this point the etcd cluster is up, and you can connect to it with etcdctl.

export ETCDCTL_API=3
HOST_1=10.240.0.17
HOST_2=10.240.0.18
HOST_3=10.240.0.19
ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379

etcdctl --endpoints=$ENDPOINTS member list
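
To confirm the cluster is healthy, you can also run the standard endpoint health check and try a simple write and read:

etcdctl --endpoints=$ENDPOINTS endpoint health
etcdctl --endpoints=$ENDPOINTS put foo "bar"
etcdctl --endpoints=$ENDPOINTS get foo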

Operating etcd from Go

Here we use the official etcd/clientv3 package to connect to etcd and perform operations.

Install

go get go.etcd.io/etcd/clientv3

(Note: with etcd v3.5 and later, the Go client moved to the go.etcd.io/etcd/client/v3 module; this article uses the older clientv3 import path.)

Put and get operations

Put sets a key-value pair, and Get retrieves the value stored under a key.

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// etcd client put/get demo
// use etcd/clientv3

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		// handle error!
		fmt.Printf("connect to etcd failed, err:%v\n", err)
		return
	}
	fmt.Println("connect to etcd success")
	defer cli.Close()
	// put
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	_, err = cli.Put(ctx, "q1mi", "dsb")
	cancel()
	if err != nil {
		fmt.Printf("put to etcd failed, err:%v\n", err)
		return
	}
	// get
	ctx, cancel = context.WithTimeout(context.Background(), time.Second)
	resp, err := cli.Get(ctx, "q1mi")
	cancel()
	if err != nil {
		fmt.Printf("get from etcd failed, err:%v\n", err)
		return
	}
	for _, ev := range resp.Kvs {
		fmt.Printf("%s:%s\n", ev.Key, ev.Value)
	}
}

watch operation

Watch subscribes to notifications of future changes to a key.

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// watch demo

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Printf("connect to etcd failed, err:%v\n", err)
		return
	}
	fmt.Println("connect to etcd success")
	defer cli.Close()
	// watch changes of the key q1mi
	rch := cli.Watch(context.Background(), "q1mi") // <-chan WatchResponse
	for wresp := range rch {
		// check the event type: was the key created, modified, or deleted?
		for _, ev := range wresp.Events {
			fmt.Printf("Type: %s Key:%s Value:%s\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}

Save, compile, and run the code above; the program will wait for changes to the q1mi key in etcd.

For example, open another terminal and run the following commands to update, delete, and re-set the q1mi key.

etcd> etcdctl.exe --endpoints=http://127.0.0.1:2379 put q1mi "dsb2"
OK

etcd> etcdctl.exe --endpoints=http://127.0.0.1:2379 del q1mi
1

etcd> etcdctl.exe --endpoints=http://127.0.0.1:2379 put q1mi "dsb3"
OK

The watch program above receives the following notifications.

watch>watch.exe
connect to etcd success
Type: PUT Key:q1mi Value:dsb2
Type: DELETE Key:q1mi Value:
Type: PUT Key:q1mi Value:dsb3

lease

A lease attaches a TTL (time to live) to keys: when the lease expires, every key bound to it is automatically deleted. The following program writes a key with a 5-second lease.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// etcd lease

func main() {
	cli, err := clientv3.New(clientv3.Config{
		// endpoints
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: time.Second * 5,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connect to etcd success.")
	defer cli.Close()

	// create a 5-second lease
	resp, err := cli.Grant(context.TODO(), 5)
	if err != nil {
		log.Fatal(err)
	}

	// after 5 seconds, the key /nazha/ will be removed
	_, err = cli.Put(context.TODO(), "/nazha/", "dsb", clientv3.WithLease(resp.ID))
	if err != nil {
		log.Fatal(err)
	}
}

keepAlive

KeepAlive renews a lease periodically so that the keys attached to it do not expire while the program is running.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// etcd keepAlive

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: time.Second * 5,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connect to etcd success.")
	defer cli.Close()

	resp, err := cli.Grant(context.TODO(), 5)
	if err != nil {
		log.Fatal(err)
	}

	_, err = cli.Put(context.TODO(), "/nazha/", "dsb", clientv3.WithLease(resp.ID))
	if err != nil {
		log.Fatal(err)
	}

	// the key /nazha/ will now be kept alive forever
	ch, kaerr := cli.KeepAlive(context.TODO(), resp.ID)
	if kaerr != nil {
		log.Fatal(kaerr)
	}
	for {
		ka := <-ch
		fmt.Println("ttl:", ka.TTL)
	}
}
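
While this program runs, the lease is renewed continuously and each keepalive response is printed (ttl: 5). Once the program stops, renewal stops as well, and the key /nazha/ expires after its TTL.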

Implementing a distributed lock with etcd

The go.etcd.io/etcd/clientv3/concurrency package builds concurrency primitives on top of etcd, such as distributed locks, barriers, and elections.

Import the package:

import "go.etcd.io/etcd/clientv3/concurrency"

An example of a distributed lock based on etcd:

package main

import (
	"context"
	"fmt"
	"log"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/concurrency"
)

func main() {
	endpoints := []string{"127.0.0.1:2379"}
	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// create two separate sessions to demonstrate lock contention
	s1, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s1.Close()
	m1 := concurrency.NewMutex(s1, "/my-lock/")

	s2, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s2.Close()
	m2 := concurrency.NewMutex(s2, "/my-lock/")

	// session s1 acquires the lock
	if err := m1.Lock(context.TODO()); err != nil {
		log.Fatal(err)
	}
	fmt.Println("acquired lock for s1")

	m2Locked := make(chan struct{})
	go func() {
		defer close(m2Locked)
		// wait until session s1 releases the lock on /my-lock/
		if err := m2.Lock(context.TODO()); err != nil {
			log.Fatal(err)
		}
	}()

	if err := m1.Unlock(context.TODO()); err != nil {
		log.Fatal(err)
	}
	fmt.Println("released lock for s1")
	<-m2Locked
	fmt.Println("acquired lock for s2")
}

output:

acquired lock for s1
released lock for s1
acquired lock for s2
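
Note that each mutex is tied to its session's lease: if the process holding the lock crashes, the session's lease eventually expires and the lock is released automatically, which prevents dead clients from causing deadlocks.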

Reference links:

https://etcd.io/docs/v3.3.12/demo/
https://www.infoq.cn/article/etcd-interpretation-application-scenario-implement-principle/

Using etcd in a log collection project

etcd initialization

1. The etcd.go file

package etcd

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

var cli *clientv3.Client

// LogEntry describes one log collection entry: which file to collect
// and which topic to send it to
type LogEntry struct {
	Path  string `json:"path"`
	Topic string `json:"topic"`
}

// Init initializes the etcd connection
func Init(addr string, timeout time.Duration) (err error) {
	cli, err = clientv3.New(clientv3.Config{
		Endpoints:   []string{addr},
		DialTimeout: timeout,
	})
	if err != nil {
		// handle error!
		fmt.Printf("connect to etcd failed, err:%v\n", err)
		return
	}
	return
}

// GetConf fetches the log collection configuration from etcd by key
func GetConf(key string) (logEntryConf []*LogEntry, err error) {
	// get
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	resp, err := cli.Get(ctx, key)
	cancel()
	if err != nil {
		fmt.Printf("get from etcd failed, err:%v\n", err)
		return
	}
	for _, ev := range resp.Kvs {
		// fmt.Printf("%s:%s\n", ev.Key, ev.Value)
		err = json.Unmarshal(ev.Value, &logEntryConf)
		if err != nil {
			fmt.Printf("unmarshal etcd value failed, err:%v\n", err)
			return
		}
	}
	return
}
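
Step 2.2 of main.go below also needs to watch the configuration key so that logAgent can hot-reload its configuration; the original code for that watcher is not shown. A minimal sketch, assuming etcd.go delivers the new configuration over a channel, could look like this:

// WatchConf is a sketch of the hot-reload watcher referenced in main.go (step 2.2).
// It pushes the freshly unmarshaled configuration to newConfCh on every change.
func WatchConf(key string, newConfCh chan<- []*LogEntry) {
	rch := cli.Watch(context.Background(), key)
	for wresp := range rch {
		for _, ev := range wresp.Events {
			var newConf []*LogEntry
			if ev.Type != clientv3.EventTypeDelete {
				// the key was created or updated: parse the new value
				if err := json.Unmarshal(ev.Kv.Value, &newConf); err != nil {
					fmt.Printf("unmarshal new conf failed, err:%v\n", err)
					continue
				}
			}
			newConfCh <- newConf
		}
	}
}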

2. The etcdPut file (stores the log collection configuration in etcd)

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		// handle error!
		fmt.Printf("connect to etcd failed, err:%v\n", err)
		return
	}
	fmt.Println("connect to etcd success")
	defer cli.Close()
	// put the log collection configuration under the key q1mi
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	value := `[{"path":"c:/tmp/nginx.log","topic":"web_log"},{"path":"d:/xxx/redis.log","topic":"redis_log"}]`
	_, err = cli.Put(ctx, "q1mi", value)
	cancel()
	if err != nil {
		fmt.Printf("put to etcd failed, err:%v\n", err)
		return
	}
}

3. Part of main.go

// 1. initialize the Kafka connection
err = kafka.Init([]string{cfg.KafkaConf.Address})
if err != nil {
	fmt.Printf("init kafka failed, err:%v\n", err)
	return
}
fmt.Println("init kafka success.")
// 2. initialize etcd
err = etcd.Init(cfg.EtcdConf.Address, time.Duration(cfg.EtcdConf.Timeout)*time.Second)
if err != nil {
	fmt.Printf("init etcd failed, err:%v\n", err)
	return
}
fmt.Println("init etcd success.")
// 2.1 fetch the log collection configuration from etcd
logEntryConf, err := etcd.GetConf("/xxx")
if err != nil {
	fmt.Printf("etcd.GetConf failed, err:%v\n", err)
	return
}
fmt.Printf("get conf from etcd success, %v\n", logEntryConf)
for index, value := range logEntryConf {
	fmt.Printf("index:%v value:%v\n", index, value)
}

// 2.2 dispatch a watcher to monitor the log collection entries
// (so logAgent is notified of changes and can hot-reload its configuration)

// 3. open the log files and prepare to collect logs
// err = taillog.Init(cfg.TaillogConf.FileName)
// if err != nil {
// 	fmt.Printf("init taillog failed, err:%v\n", err)
// 	return
// }
// fmt.Println("init taillog success.")
// 4. the actual business logic
// run()
