ZooKeeper standalone and cluster environment setup

ZooKeeper addresses many problems faced by distributed systems, such as distributed locks, a unified naming service, a configuration center, and managing Leader elections within a cluster.

Preparing the Environment

Nodes in a distributed system need to communicate with each other; ZooKeeper ensures that the data exchanged in this process is unique, safe, and reliable.

Download ZooKeeper from the official website.

  • Modify the configuration file

Rename /conf/zoo_sample.cfg to zoo.cfg.
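For example, assuming the 3.4.14 release has been unpacked (as in the paths used below), the sample file can simply be copied so the original stays as a reference:

cd zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg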

Reading the configuration file

# Interval at which heartbeats are exchanged between ZooKeeper servers and clients: one heartbeat is sent every tickTime, in milliseconds
# The minimum session timeout in ZooKeeper is tickTime*2
tickTime=2000
# The Leader allows a Follower to finish syncing all of its data within initLimit ticks; as the cluster grows, a Follower needs more time to sync from the Leader, so this value may need to be increased
initLimit=10
# The Leader communicates with the other machines in the cluster; if it gets no response from a Follower within syncLimit ticks, that node is considered down
syncLimit=5
# Directory where snapshots are stored; by default the transaction log is kept here too, but it is configured separately below because the log's write performance affects ZooKeeper's performance
dataDir=E:\\zookeeper\\zookeeper-3.4.14\\data

dataLogDir=E:\\zookeeper\\zookeeper-3.4.14\\log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
  • Start up

The startup scripts are in the /bin/ directory. On Linux, start the server with zkServer.sh and then connect with the command-line client ./zkCli.sh -server localhost:2181, as sketched below.
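A minimal start-up sketch, assuming the standalone zoo.cfg above (zkServer.sh picks up conf/zoo.cfg by default):

# start the standalone server
./zkServer.sh start
# check that it is running
./zkServer.sh status
# open the command-line client against the local server
./zkCli.sh -server localhost:2181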
Once the start succeeds you are dropped into the client console:

 # the default node is named zookeeper
[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]
# create a node
[zk: localhost:2181(CONNECTED) 11] create /changwu1 "num1" 
Created /changwu1 
 # list the nodes again
[zk: localhost:2181(CONNECTED) 14] ls /
[zookeeper, changwu1]
 # get the node's data
[zk: localhost:2181(CONNECTED) 17] get /changwu1
num1
cZxid = 0x2
ctime = Mon Sep 16 15:56:27 CST 2019
mZxid = 0x2
mtime = Mon Sep 16 15:56:27 CST 2019
pZxid = 0x2
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0

# quit the client
quit

# delete a node
[zk: localhost:2181(CONNECTED) 32] delete /changwu1
[zk: localhost:2181(CONNECTED) 33] ls /
[zookeeper]

# delete nodes recursively
rmr /path1/path2
# here path1 and path2 are actually two separate nodes
# modify a node's data
set /path "value"

# a node's status
[zk: localhost:2181(CONNECTED) 50] stat /z1
cZxid = 0x5
ctime = Mon Sep 16 16:04:35 CST 2019
mZxid = 0x7
mtime = Mon Sep 16 16:06:31 CST 2019
pZxid = 0x6
cversion = 1
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 8
numChildren = 1

# create a persistent sequential node
create -s /path

Creating nodes this way is similar to building a directory structure with mkdir.
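A short zkCli sketch of the common node types (the paths below are only examples): -s appends an increasing sequence number to the node name, and -e creates an ephemeral node that disappears when the creating session closes.

# persistent node (the default)
create /app "cfg"
# persistent sequential node, e.g. created as /app/task0000000000
create -s /app/task "t1"
# ephemeral node, removed automatically when this client session ends
create -e /app/lock "owner1"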


Cluster Setup

  • Make three copies of zoo.cfg and modify each copy

In each server.N entry the second port (2887, for example) is used to synchronize data between the Leader and the Followers, and the third port is used to elect a new Leader; an example copy of the file is sketched after the myid step below.

  • Create six directories under /tmp: zoo_data_1 to zoo_data_3 and zoo_logs_1 to zoo_logs_3
  • Create a myid file in each data directory
[root@139 tmp]# echo 1 > zoo_data_1/myid
[root@139 tmp]# echo 2 > zoo_data_2/myid
[root@139 tmp]# echo 3 > zoo_data_3/myid
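A sketch of what the first copy could look like, say conf/zoo1.cfg (zoo2.cfg and zoo3.cfg would differ only in clientPort, dataDir, and dataLogDir; the file names and ports mirror the ones used when starting the cluster below):

tickTime=2000
initLimit=10
syncLimit=5

dataDir=/tmp/zoo_data_1
dataLogDir=/tmp/zoo_logs_1

clientPort=2181

# server.N=host:syncPort:electionPort; N must match the number in that node's myid file
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889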

Start each server in the cluster

[root@139 bin]# ./zkServer.sh start ../conf/zoo1.cfg 
ZooKeeper JMX enabled by default
Using config: ../conf/zoo1.cfg
Starting zookeeper ... STARTED
[root@139 bin]# ./zkServer.sh start ../conf/zoo2.cfg 
ZooKeeper JMX enabled by default
Using config: ../conf/zoo2.cfg
Starting zookeeper ... STARTED
[root@139 bin]# ./zkServer.sh start ../conf/zoo3.cfg 
ZooKeeper JMX enabled by default
Using config: ../conf/zoo3.cfg
Starting zookeeper ... STARTED

Check the status of each node:

[root@139 bin]# ./zkServer.sh status ../conf/zoo3.cfg 
ZooKeeper JMX enabled by default
Using config: ../conf/zoo3.cfg
Mode: follower

[root@139 bin]# ./zkServer.sh status ../conf/zoo1.cfg 
ZooKeeper JMX enabled by default
Using config: ../conf/zoo1.cfg
Mode: leader

[root@139 bin]# ./zkServer.sh status ../conf/zoo2.cfg 
ZooKeeper JMX enabled by default
Using config: ../conf/zoo2.cfg
Mode: follower
    

Connecting a client

./zkCli.sh -server localhost:<server clientPort>

zkCli.sh -server localhost:2181
zkCli.sh -server localhost:2182
zkCli.sh -server localhost:2183

Adding an Observer

  1. As with the first three nodes, create the directories the Observer will use under /tmp: zoo_data_4 and zoo_logs_4
  2. Create a myid file in zoo_data_4 and write 4 into it
  3. Update the configuration files of the first three nodes
tickTime=2000
initLimit=10
syncLimit=5

dataDir=/tmp/zoo_data_1
dataLogDir=/tmp/zoo_logs_1

clientPort=2181

# the first port is used for synchronization between the Leader and Learners; the second port is used for voting during elections
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
server.4=localhost:2890:3890:observer
  4. Add the Observer's configuration file
tickTime=2000
initLimit=10
syncLimit=5

dataDir=/tmp/zoo_data_4
dataLogDir=/tmp/zoo_logs_4

# observer role configuration
peerType=observer

clientPort=2184

# the first port is used for synchronization between the Leader and Learners; the second port is used for voting during elections
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
server.4=localhost:2890:3890:observer
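With that in place, the Observer can be created and started like the other nodes; the file name zoo4.cfg below is just an assumed name for the configuration shown above:

# directories and myid for the fourth node
mkdir /tmp/zoo_data_4 /tmp/zoo_logs_4
echo 4 > /tmp/zoo_data_4/myid
# start the Observer and check which role it reports
./zkServer.sh start ../conf/zoo4.cfg
./zkServer.sh status ../conf/zoo4.cfg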

Cluster roles

Leader

Initiates votes and resolutions, and updates the final state of the system.

Follower

Receives and processes client requests, and takes part in the votes initiated by the Leader.

Observer

Accepts client connections and forwards write requests to the Leader node, but it does not take part in voting; it only syncs the Leader's state. Observers are a way to scale ZooKeeper out.

Why add an Observer? The answer is closely related to how ZooKeeper works:

A ZooKeeper cluster consists of multiple servers, and each server can handle requests from multiple clients. If a request is a read, the current server answers it directly from its own local data. If, however, the request is a write that changes ZooKeeper's state, things get more involved: the Leader node initiates a vote (this mechanism is the ZAB protocol), and only after more than half of the nodes agree is the operation applied in memory and a reply sent to the client.

In this process each ZooKeeper server plays two roles: on one hand it accepts client connections, on the other it has to take part in votes on resolutions. These two roles limit ZooKeeper's scalability: to support more client connections you have to add servers, but the more servers there are, the heavier each round of voting becomes. This is why the Observer was introduced.

An Observer does not take part in voting. While the other nodes are in the voting phase, the Observer keeps accepting client connections, forwards the requests to the Leader, and still receives the results of the vote, which greatly improves the throughput of the system.
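For instance, a client can connect to the Observer's clientPort (2184 in the configuration above) and use it like any other node; reads are answered from the Observer's local data, while writes are forwarded to the Leader (the /demo node below is just an illustrative name):

./zkCli.sh -server localhost:2184
# read served from the Observer's local copy
ls /
# write forwarded to the Leader and committed once a majority of voting nodes agree
create /demo "observer-write"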

Learner

The collective term for nodes that synchronize state with the Leader: Followers and Observers are together called Learners.

ZooKeeper and CAP

CP: when the cluster contains only Leader and Follower nodes and the Leader goes down, a new election has to be held, and during the election the system is unavailable.

AP: when the cluster is made up of Leader, Follower, and Observer nodes, AP can be achieved: if the Leader goes down an election is still held, but the Observer can keep accepting client requests, although the data it serves may not be the latest.


Source: www.cnblogs.com/ZhuChangwu/p/11529117.html