ZooKeeper deployment (standalone, pseudo-cluster, cluster)

ZooKeeper is a distributed, open-source coordination service for distributed applications. In a distributed environment it provides configuration management, a unified naming service, state synchronization, and other functions.
ZooKeeper is designed as a highly available, high-performance open-source coordination service, and it offers one basic primitive: a distributed lock service. Thanks to ZooKeeper's open design, other functions have been built on top of this primitive, such as configuration maintenance, service groups, and distributed message queues. ZooKeeper maintains a data structure similar to a file system: each node in the tree is called a znode (directory node), and just as with a file system we are free to create, read, update, and delete znodes. A ZooKeeper cluster should be built on an odd number of machines; as long as more than half of the hosts in the cluster are alive, the service remains available. For example, a 5-node ensemble tolerates 2 failures, while a 6-node ensemble still tolerates only 2, which is why odd sizes are preferred. Master and slave roles are not specified in the configuration file; at runtime only one node acts as the leader and the remaining nodes are followers, and the leader is produced dynamically by the internal election mechanism.

ZooKeeper features:
1. Sequential consistency: the zxid guarantees that transactions are applied in order.
2. Atomicity: the ZAB protocol guarantees that operations are atomic; they either succeed or fail as a whole.
3. Single system image: clients see a consistent view of the data no matter which server they connect to.
4. Reliability: version numbers implement a "check before write" to ensure the accuracy of written data (see the sketch below).
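
Feature 4 can be seen directly from the zkCli shell: set accepts an optional expected version, and the write is rejected when the node's dataVersion no longer matches. A minimal sketch, assuming a running instance (installation is covered in the sections below) and a throw-away node created just for this demonstration:

[zk: localhost:2181(CONNECTED) 0] create /version-demo v1
Created /version-demo
[zk: localhost:2181(CONNECTED) 1] set /version-demo v2 0        # Succeeds: dataVersion is still 0
[zk: localhost:2181(CONNECTED) 2] set /version-demo v3 0        # Rejected with a BadVersion error: dataVersion is now 1
[zk: localhost:2181(CONNECTED) 3] rmr /version-demo             # Clean up the demo node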

There are three ways to install ZooKeeper: standalone mode, pseudo-cluster mode, and cluster mode.
Standalone mode: a single ZooKeeper server instance, suitable for a test environment.
Pseudo-cluster mode: multiple ZooKeeper instances running on a single server.
Cluster mode: ZooKeeper running on multiple servers, suitable for a production environment.
Required packages (extraction code: mqtp)

I. Deployment

1. Standalone ZooKeeper installation

#Install the JDK environment
[root@zookeeper ~]# tar zxf jdk-8u211-linux-x64.tar.gz -C /usr/local/
[root@zookeeper ~]# vim /etc/profile            # Edit the Java environment variables
..........................
export JAVA_HOME=/usr/local/jdk1.8.0_211
export JRE_HOME=/usr/local/jdk1.8.0_211/jre
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
[root@zookeeper ~]# source /etc/profile           # Apply the configuration
[root@zookeeper ~]# java -version                # Verify the installation
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
#Install ZooKeeper
[root@zookeeper ~]# tar zxf zookeeper-3.4.14.tar.gz -C /usr/local/
[root@zookeeper ~]# vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH                          # Append ZooKeeper to the PATH set up for Java
[root@zookeeper ~]# source /etc/profile
[root@zookeeper ~]# cd /usr/local/zookeeper-3.4.14/conf/
[root@zookeeper conf]# cp zoo_sample.cfg zoo.cfg
[root@zookeeper conf]# mkdir -p /usr/local/zookeeper-3.4.14/data                           # Create the data directory
[root@zookeeper conf]# sed -i "s/dataDir=\/tmp\/zookeeper/dataDir=\/usr\/local\/zookeeper-3.4.14\/data/g" zoo.cfg
[root@zookeeper conf]# zkServer.sh start             # Start the service
[root@zookeeper conf]# netstat -anput | grep 2181           # Confirm the port is listening
tcp6       0      0 :::2181                 :::*                    LISTEN      4903/java   
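
Besides checking the listening port, ZooKeeper's built-in four-letter-word commands can be used to probe the running instance. A small optional check, assuming nc (netcat) is available on the host, which is not part of the steps above:

[root@zookeeper conf]# echo ruok | nc 127.0.0.1 2181         # Replies "imok" if the server is running in a non-error state
imok
[root@zookeeper conf]# echo stat | nc 127.0.0.1 2181         # Prints brief statistics, including the mode (standalone here)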

1) Client operation commands

[root@zookeeper ~]# zkCli.sh                # With no arguments it connects to port 2181 on localhost by default
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] help         # Show the commands supported by the client
[zk: localhost:2181(CONNECTED) 1] ls /          # List the contents of the zk root
[zookeeper]
[zk: localhost:2181(CONNECTED) 2] ls2 /        # List the contents together with the node's details
[zookeeper]
cZxid = 0x0
ctime = Thu Jan 01 08:00:00 CST 1970
mZxid = 0x0
mtime = Thu Jan 01 08:00:00 CST 1970
pZxid = 0x0
cversion = -1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 1
[zk: localhost:2181(CONNECTED) 3] create /test1 neirong             # Create a node
Created /test1
[zk: localhost:2181(CONNECTED) 4] ls /          # test1 now shows up
[zookeeper, test1]
[zk: localhost:2181(CONNECTED) 5] get /test1        # Get the node's data; the absolute path must be given
neirong                            # The node's data
cZxid = 0x2                      # zxid when the node was created
ctime = Sat Apr 04 16:15:30 CST 2020               # Time the node was created
mZxid = 0x2
mtime = Sat Apr 04 16:15:30 CST 2020                    # Time the node was last updated
pZxid = 0x2
cversion = 0                   # Number of changes to this node's children
dataVersion = 0                 # Number of data updates to this node
aclVersion = 0                 # Number of ACL updates to this node
ephemeralOwner = 0x0
dataLength = 7               # Length of the node data
numChildren = 0                # Number of child nodes
[zk: localhost:2181(CONNECTED) 6] set /test1 "gengxin"         # Update the node data
[zk: localhost:2181(CONNECTED) 7] get /test1           # The data has been changed to the new value
gengxin
[zk: localhost:2181(CONNECTED) 8] history         # List the most recently used commands
0 - help
1 - ls /
2 - ls2 /
3 - create /test1 neirong
4 - ls /
5 - get /test1
6 - set /test1 "gengxin"
7 - get /test1
8 - history
[zk: localhost:2181(CONNECTED) 9] delete /test1         # Delete a node; this cannot delete a node that has children
[zk: localhost:2181(CONNECTED) 11] rmr /test1           # rmr can delete a node together with its children
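
The examples above all create persistent znodes. zkCli can also create ephemeral (-e) and sequential (-s) znodes, which are the building blocks of the distributed lock service mentioned at the beginning. A brief sketch; the node names are made up for illustration:

[zk: localhost:2181(CONNECTED) 12] create -e /lock-demo holder1       # Ephemeral: deleted automatically when this client session closes
[zk: localhost:2181(CONNECTED) 13] create -s /task-demo item          # Sequential: the server appends a monotonically increasing suffix, e.g. /task-demo0000000001

A typical lock recipe combines the two: each client creates an ephemeral sequential node under a common parent, the client holding the lowest sequence number owns the lock, and if that client crashes its node disappears and the next one takes over.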

For the zoo.cfg configuration parameters, refer to the official documentation.
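
For quick reference, the parameters used in the configurations below have the following meanings (the values shown are the ones shipped in zoo_sample.cfg); this is only a summary, and the official documentation remains the authoritative source:

# tickTime: the basic time unit in milliseconds; heartbeats and timeouts are multiples of it
tickTime=2000
# initLimit: how many ticks a follower may take to connect and sync to the leader when it first starts
initLimit=10
# syncLimit: how many ticks a follower may fall behind the leader before it is dropped
syncLimit=5
# dataDir: where snapshots (and, unless dataLogDir is set, the transaction logs) are stored
dataDir=/tmp/zookeeper
# clientPort: the port clients connect to
clientPort=2181
# dataLogDir (optional): put the transaction logs on a separate device for better throughput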

2. ZooKeeper pseudo-cluster deployment on a single machine
Multiple zk instances run on one host, each with its own configuration file. The clientPort, dataDir, and dataLogDir in these configuration files must never be the same, and a myid file must be created in each instance's dataDir to identify which zk instance that dataDir belongs to.
The environment is as follows: three zk instances deployed on a single physical server.
1) Install ZooKeeper

#Install the JDK first; see the standalone installation above
[root@zookeeper ~]# java -version 
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
#Install ZooKeeper
[root@zookeeper ~]# tar zxf zookeeper-3.4.14.tar.gz -C /usr/local/
[root@zookeeper ~]# vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH                          
[root@zookeeper ~]# source /etc/profile
#Create the data directories
[root@zookeeper ~]# mkdir -p /usr/local/zookeeper-3.4.14/{data_0,data_1,data_2}
#Create the myid files and write in the ID values
[root@zookeeper ~]# echo 0 > /usr/local/zookeeper-3.4.14/data_0/myid
[root@zookeeper ~]# echo 1 > /usr/local/zookeeper-3.4.14/data_1/myid
[root@zookeeper ~]# echo 2 > /usr/local/zookeeper-3.4.14/data_2/myid
#Create the transaction log directories; the official recommendation is to give the transaction logs a dedicated disk or mount point, which greatly improves zk performance
[root@zookeeper ~]# mkdir -p /usr/local/zookeeper-3.4.14/{logs_0,logs_1,logs_2}
#Configure server0
[root@zookeeper ~]# cd /usr/local/zookeeper-3.4.14/conf/
[root@zookeeper conf]# cp zoo_sample.cfg zoo_0.cfg
[root@zookeeper conf]# egrep -v "^$|^#" zoo_0.cfg          # Edit the configuration file to the following
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.14/data_0/
clientPort=2180
dataLogDir=/usr/local/zookeeper-3.4.14/logs_0/
server.0=127.0.0.1:2287:3387 
server.1=127.0.0.1:2288:3388 
server.2=127.0.0.1:2289:3389 
#Configure server1
[root@zookeeper conf]# cp zoo_0.cfg zoo_1.cfg           # Copy the previous configuration file and change a few parameters
[root@zookeeper conf]# vim zoo_1.cfg
dataDir=/usr/local/zookeeper-3.4.14/data_1/
clientPort=2181
dataLogDir=/usr/local/zookeeper-3.4.14/logs_1/
#Configure server2
[root@zookeeper conf]# cp zoo_0.cfg zoo_2.cfg
[root@zookeeper conf]# vim zoo_2.cfg
dataDir=/usr/local/zookeeper-3.4.14/data_2/
clientPort=2182
dataLogDir=/usr/local/zookeeper-3.4.14/logs_2/
#Start each instance (I am in the conf directory here, so the configuration file name alone is enough; from elsewhere the full path must be given)
[root@zookeeper conf]# zkServer.sh start zoo_0.cfg
[root@zookeeper conf]# zkServer.sh start zoo_1.cfg
[root@zookeeper conf]# zkServer.sh start zoo_2.cfg
[root@zookeeper conf]# netstat -anput | grep java 
tcp6       0      0 :::2180                 :::*                    LISTEN      9251/java           
tcp6       0      0 :::2181                 :::*                    LISTEN      9291/java           
tcp6       0      0 :::2182                 :::*                    LISTEN      9334/java    
#List the JVM processes
[root@zookeeper conf]# jps
9377 Jps
9251 QuorumPeerMain
9334 QuorumPeerMain
9291 QuorumPeerMain
#Once all instances are started, a client can connect to any of them
[root@zookeeper conf]# zkCli.sh -server 127.0.0.1:2180     # Example
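
As with the standalone instance, zkServer.sh can report each instance's role; the configuration file argument works the same way as it does for start:

[root@zookeeper conf]# zkServer.sh status zoo_0.cfg
[root@zookeeper conf]# zkServer.sh status zoo_1.cfg
[root@zookeeper conf]# zkServer.sh status zoo_2.cfg
# One instance reports "Mode: leader"; the other two report "Mode: follower"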

About the multi-server configuration: the server.X entries list the servers that make up the ZooKeeper service. When a server starts, it determines which server it is by looking for the myid file in its data directory; that file contains the server's number. Finally, note the two port numbers after each server address, for example "2287" and "3387". The first port is used by peers to connect to other peers; such connections are necessary so that the peers can communicate, for example to agree on the order of updates. More specifically, followers use this port to connect to the leader: when a new leader emerges, each follower opens a TCP connection to it on this port. Because the default leader election also runs over TCP, a second port is needed for leader election; that is the second port in the server entry.
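
As an extra check (not part of the original procedure), netstat can confirm that these ports are in use on the pseudo-cluster; each instance listens on its own election port, while the quorum ports carry the leader/follower traffic described above:

[root@zookeeper conf]# netstat -anput | grep -E "338[789]"      # Election ports 3387/3388/3389, one per instance
[root@zookeeper conf]# netstat -anput | grep -E "228[789]"      # Quorum ports 2287/2288/2289; the leader accepts these connections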

3. ZooKeeper multi-machine cluster deployment
To obtain a reliable zk service, zk should be deployed on multiple servers; as long as a majority of the zk instances in the cluster are up, the overall zk service remains available. Building a ZooKeeper cluster across multiple hosts is almost the same as the pseudo-cluster setup.
The environment is as follows: three servers, zookeeper01 (192.168.171.134), zookeeper02 (192.168.171.135), and zookeeper03 (192.168.171.140).
The following operations need to be performed on all three servers:

#Install the JDK (same procedure as before)
[root@zookeeper01 ~]# java -version 
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
#Install ZooKeeper
[root@zookeeper01 ~]# tar zxf zookeeper-3.4.14.tar.gz -C /usr/local/
[root@zookeeper01 ~]# vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH                          # Append ZooKeeper to the PATH set up for Java
[root@zookeeper01 ~]# source /etc/profile

server0 configuration

[root@zookeeper01 ~]# mkdir -p /usr/local/zookeeper-3.4.14/{data,logs}
[root@zookeeper01 ~]# echo 0 > /usr/local/zookeeper-3.4.14/data/myid
[root@zookeeper01 ~]# cd /usr/local/zookeeper-3.4.14/conf/
[root@zookeeper01 conf]# cp zoo_sample.cfg zoo.cfg
[root@zookeeper01 conf]# egrep -v "^$|^#" zoo.cfg          # Edit the configuration file to the following
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.14/data/
clientPort=2181
dataLogDir=/usr/local/zookeeper-3.4.14/logs
server.0=192.168.171.134:2288:3388
server.1=192.168.171.135:2288:3388
server.2=192.168.171.140:2288:3388
[root@zookeeper01 conf]# zkServer.sh start          # Start the instance
[root@zookeeper01 conf]# netstat -anput | grep java     # Confirm the ports are listening
tcp6       0      0 :::43542                :::*                    LISTEN      40355/java          
tcp6       0      0 192.168.171.134:3388    :::*                    LISTEN      40355/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      40355/java        

server1 configuration

[root@zookeeper02 ~]# mkdir -p /usr/local/zookeeper-3.4.14/{data,logs}
[root@zookeeper02 ~]# echo 1 > /usr/local/zookeeper-3.4.14/data/myid
[root@zookeeper02 ~]# cd /usr/local/zookeeper-3.4.14/conf/
[root@zookeeper02 conf]# scp [email protected]:/usr/local/zookeeper-3.4.14/conf/zoo.cfg ./
[root@zookeeper02 conf]# zkServer.sh start
[root@zookeeper02 conf]# netstat -anput | grep java  
tcp6       0      0 192.168.171.135:3388    :::*                    LISTEN      40608/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      40608/java 

server2 configuration

[root@zookeeper03 ~]# mkdir -p /usr/local/zookeeper-3.4.14/{data,logs}
[root@zookeeper03 ~]# echo 2 > /usr/local/zookeeper-3.4.14/data/myid
[root@zookeeper03 ~]# cd /usr/local/zookeeper-3.4.14/conf/
[root@zookeeper03 conf]# scp [email protected]:/usr/local/zookeeper-3.4.14/conf/zoo.cfg ./
[root@zookeeper03 conf]# zkServer.sh start
[root@zookeeper03 conf]# netstat -anput | grep java
tcp6       0      0 192.168.171.140:3388    :::*                    LISTEN      12769/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      12769/java   
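
With all three servers up, a client can be pointed at the whole ensemble with a comma-separated connect string, so it can fail over to another member if the one it is using goes down; a usage sketch:

[root@zookeeper01 /]# zkCli.sh -server 192.168.171.134:2181,192.168.171.135:2181,192.168.171.140:2181
# The client picks one member of the list; if that member fails, it reconnects to another.
# Data written through any member is visible through the others (single system image).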

View the status of each zk node

[root@zookeeper01 /]# zkServer.sh status 
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@zookeeper02 /]# zkServer.sh status           # Server 02 is the leader
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
[root@zookeeper03 /]# zkServer.sh status 
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
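
Finally, the leader election described at the beginning can be observed by stopping the current leader (zookeeper02 here) and re-checking the survivors; the remaining two nodes still form a majority, so the service stays available. A short sketch:

[root@zookeeper02 /]# zkServer.sh stop                # Stop the current leader
[root@zookeeper01 /]# zkServer.sh status              # One of the two remaining nodes...
[root@zookeeper03 /]# zkServer.sh status              # ...now reports "Mode: leader"
[root@zookeeper02 /]# zkServer.sh start               # After rejoining, zookeeper02 comes back as a follower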


Source: blog.51cto.com/14227204/2484928