gedis: a Redis Cluster client in Go

Redis 3.0 introduced a new HA solution: Cluster mode, in which multiple nodes form a cluster. The cluster hashes each key with the CRC16 algorithm and takes the result modulo 16384; the resulting value is the key's hash slot. Each master node stores a portion of the hash slots, and data is replicated asynchronously between master and slave nodes.
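As a quick illustration, the key-to-slot mapping can be sketched in Go as follows. This is a minimal sketch: Redis uses the CRC16-CCITT (XMODEM) variant, and hash tags (a {...} substring that pins keys to one slot) are ignored here.

// Minimal sketch of Redis Cluster's key-to-slot mapping.
// crc16 implements the CRC16-CCITT (XMODEM) variant: polynomial 0x1021,
// initial value 0, no bit reflection.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// hashSlot maps a key to one of the 16384 hash slots.
func hashSlot(key string) uint16 {
	return crc16([]byte(key)) % 16384
}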

Based on how Redis Cluster works, gedis needs to provide the following capabilities:

1. A unified cluster client;

2. A connection pool for the cluster;

3. Health checks on cluster nodes (to be refined in a follow-up);

4. A load-balancing mechanism;

5. Protocol encapsulation, so the cluster is transparent to the upper layer.

The basic model is designed as follows:

(model diagram omitted)

Basic model definition

/**
 * Node
 * Url: node ip + port
 * Pwd: node password
 * InitActive: initial size of the node's connection pool
 */
type Node struct {
	Url        string
	Pwd        string
	InitActive int
}

type ClusterConfig struct {
	Nodes             []*Node
	HeartBeatInterval int
}

/**
 * Cluster client
 * heartBeatInterval: heartbeat interval, in seconds
 * clusterPool key: connection string, value: connection pool
 */
type Cluster struct {
	config      *ClusterConfig
	clusterPool map[string]*ConnPool
}
Cluster initialization

/**
 * Initialize the cluster client
 */
func NewCluster(clusterConfig ClusterConfig) *Cluster {
	nodes := clusterConfig.Nodes

	var cluster Cluster
	clusterPool := make(map[string]*ConnPool)

	for _, node := range nodes {
		var config = ConnConfig{node.Url, node.Pwd}
		pool, _ := NewConnPool(node.InitActive, config)
		clusterPool[node.Url] = pool
	}
	cluster.config = &clusterConfig
	cluster.clusterPool = clusterPool

	// start the node health-check goroutine on return
	defer func() {
		go cluster.heartBeat()
	}()
	// m is a package-level *sync.RWMutex guarding the shared pool map
	// (its declaration is not shown in this post)
	if m == nil {
		m = new(sync.RWMutex)
	}
	return &cluster
}
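ConnConfig, ConnPool, and NewConnPool belong to gedis but are not shown in this post. A minimal sketch of what they might look like follows; the field names and signatures are assumptions rather than the actual gedis source, and authentication with Pwd is omitted.

// Assumed sketch of the pool types referenced above; not the gedis source.
type ConnConfig struct {
	Url string
	Pwd string
}

type ConnPool struct {
	config ConnConfig
	conns  chan net.Conn // idle connections
}

func NewConnPool(initActive int, config ConnConfig) (*ConnPool, error) {
	pool := &ConnPool{config: config, conns: make(chan net.Conn, initActive)}
	for i := 0; i < initActive; i++ {
		conn, err := net.Dial("tcp", config.Url)
		if err != nil {
			return nil, err
		}
		pool.conns <- conn
	}
	return pool, nil
}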
Node heartbeat

After the cluster is created, an asynchronous goroutine polls each node periodically, sending a PING request. If a node does not respond with PONG, it is considered unhealthy: its connection pool is removed from the cluster and the node is added to a failure queue. The failure queue is also polled periodically to detect whether a node's connection has recovered; if so, the connection pool is recreated and the node is removed from the failure queue.

/**
 * Connection-pool heartbeat: ping every node periodically; when a ping fails,
 * remove the node from the cluster pool and add it to the failure queue.
 * Poll the failure queue periodically to detect whether a node's connection
 * has recovered; if so, recreate its connection pool and remove the node
 * from the queue.
 */
func (cluster *Cluster) heartBeat() {
	clusterPool := cluster.GetClusterPool()
	interval := cluster.config.HeartBeatInterval
	if interval <= 0 {
		interval = defaultHeartBeatInterval
	}
	var nodes = make(map[string]*Node)

	for i := 0; i < len(cluster.GetClusterNodesInfo()); i++ {
		node := cluster.GetClusterNodesInfo()[i]
		nodes[node.Url] = node
	}

	var failNodes = make(map[string]*Node)
	for {
		for url, pool := range clusterPool {
			result, err := executePing(pool)
			if err != nil {
				log.Printf("node [%s] health check failed because [%s], the node will be removed\n", url, err)
				// lock before mutating the shared pool map
				m.Lock()
				time.Sleep(5 * time.Second)
				failNodes[url] = nodes[url]
				delete(clusterPool, url)
				m.Unlock()
			} else {
				log.Printf("node [%s] health check: [%s]\n", url, result)
			}
		}
		// recovery detection
		recover(failNodes, clusterPool)

		time.Sleep(time.Duration(interval) * time.Second)
	}
}
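executePing is not shown in the post either. By analogy with the executeSet helper shown later, it presumably looks roughly like this; a sketch only, and protocol.PING is an assumed constant.

// Assumed sketch, mirroring the executeSet helper shown below.
func executePing(pool *ConnPool) (interface{}, error) {
	conn, err := GetConn(pool)
	if err != nil {
		return nil, fmt.Errorf("get conn fail")
	}
	defer pool.PutConn(conn)
	result := SendCommand(conn, protocol.PING)
	return handler.HandleReply(result)
}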

/**
 * Detect whether failed nodes have recovered
 */
// note: this function shadows the builtin recover; that is legal Go,
// but it means the builtin cannot be used in this package
func recover(failNodes map[string]*Node, clusterPool map[string]*ConnPool) {
	for url, node := range failNodes {
		conn := connect(url)
		if conn != nil {
			// the node is reachable again; rebuild its connection pool
			var config = ConnConfig{url, node.Pwd}
			pool, _ := NewConnPool(node.InitActive, config)
			// lock before mutating the shared maps
			m.Lock()
			clusterPool[node.Url] = pool
			delete(failNodes, url)
			m.Unlock()
			log.Printf("node [%s] has reconnected\n", url)
		}
	}
}
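The connect helper used in recover is also not shown; presumably it is just a dial that returns nil on failure. A sketch, with the timeout value an assumption:

// Assumed sketch: probe a node with a plain TCP dial.
func connect(url string) net.Conn {
	conn, err := net.DialTimeout("tcp", url, 2*time.Second)
	if err != nil {
		return nil
	}
	return conn
}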
Test results: (heartbeat log screenshot omitted)

Load balancing currently implements only a random mode: a node is selected at random before each access.

func (cluster *Cluster) RandomSelect() *ConnPool {
	m.RLock()
	defer m.RUnlock()
	pools := cluster.GetClusterPool()
	// Go randomizes map iteration order, so this returns a
	// pseudorandom healthy pool on each call
	for _, pool := range pools {
		if pool != nil {
			return pool
		}
	}
	log.Println("no pool can be used")
	return nil
}
The general flow of the communication module is as follows:

1. Randomly select a healthy, reachable node from the cluster;

2. If the node returns the data, the communication ends;

3. If the node replies with a "-MOVED" protocol message, e.g. -MOVED 5678 127.0.0.1, the data is not on the current node;

4. Redirect the access to the specified Redis node.

func (cluster *Cluster) Set(key string, value string) (interface{}, error) {
	result, err := executeSet(cluster.RandomSelect(), key, value)
	// only redirect on a MOVED error; a nil error means success
	if err == nil || err.Error() != protocol.MOVED {
		return result, err
	}

	// redirect to the new node
	return executeSet(cluster.SelectOne(result.(string)), key, value)
}

func executeSet(pool *ConnPool, key string, value string) (interface{}, error) {
	conn, err := GetConn(pool)
	if err != nil {
		return nil, fmt.Errorf("get conn fail")
	}
	defer pool.PutConn(conn)
	result := SendCommand(conn, protocol.SET, protocol.SafeEncode(key), protocol.SafeEncode(value))
	return handler.HandleReply(result)
}
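SelectOne, used in Set above, is not shown in the post. It presumably parses the MOVED reply and looks up the pool for the target node. A sketch: the "MOVED <slot> <host:port>" layout follows the Redis protocol, the rest is assumed.

// Assumed sketch: resolve the node named in a MOVED reply to its pool.
func (cluster *Cluster) SelectOne(moved string) *ConnPool {
	parts := strings.Fields(moved) // e.g. ["MOVED", "5678", "127.0.0.1:7000"]
	if len(parts) != 3 {
		return nil
	}
	m.RLock()
	defer m.RUnlock()
	return cluster.clusterPool[parts[2]]
}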
Thus, as far as the application layer is concerned, it gets the final result no matter which node it happens to hit; the cluster topology stays relatively transparent.

Test call:

package main

import (
	. "client"
	"fmt"
)

func main() {
	var node7000 = Node{"127.0.0.1:7000", "123456", 10}
	var node7001 = Node{"127.0.0.1:7001", "123456", 10}
	var node7002 = Node{"127.0.0.1:7002", "123456", 10}
	var node7003 = Node{"127.0.0.1:7003", "123456", 10}
	var node7004 = Node{"127.0.0.1:7004", "123456", 10}
	var node7005 = Node{"127.0.0.1:7005", "123456", 10}

	nodes := []*Node{&node7000, &node7001, &node7002, &node7003, &node7004, &node7005}
	var clusterConfig = ClusterConfig{nodes, 10}
	cluster := NewCluster(clusterConfig)
	value, err := cluster.Get("name")
	fmt.Println(value, err)
}
Response: (output screenshot omitted)

Load balancing, heartbeat checking, and other mechanisms will be improved in follow-up work.

Project address:

https://github.com/zhangxiaomin1993/gedis
