ZooKeeper: Installation and Deployment

ZooKeeper can be installed on a single machine or as a cluster. For a standalone install, simply configure a single server in zoo.cfg and start it; everything else is the same as the cluster install described below.

I. Preparation

1. Resources

ZooKeeper home page: https://zookeeper.apache.org/
ZooKeeper downloads: http://mirror.bit.edu.cn/apache/zookeeper/
The Alibaba, 163 and Huawei open-source mirror sites also host all ZooKeeper releases.

2. Cluster architecture

ZooKeeper will be deployed on the three nodes master, node1 and node2; the JDK is already installed on all three.

[root@master ~]# cat /etc/hosts
192.168.1.71 master
192.168.1.72 node1
192.168.1.73 node2

IP            hostname  role      software
192.168.1.71  master    leader    zookeeper, jdk
192.168.1.72  node1     follower  zookeeper, jdk
192.168.1.73  node2     follower  zookeeper, jdk
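
Before going further, it can be worth confirming that the JDK is visible and that the hostnames resolve on every node. A minimal check from master might look like this (it assumes java and ping are on the PATH):

[root@master ~]# java -version
[root@master ~]# ping -c 1 node1
[root@master ~]# ping -c 1 node2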

II. Installation and deployment

1. Unpack the archive

Extract the ZooKeeper tarball to /usr/local/:

[root@master ~]# tar -zxvf zookeeper-3.4.12.tar.gz -C /usr/local/

2. Create the ZooKeeper data directory

[root@master ~]#  mkdir  /usr/local/zookeeper-3.4.12/data

3. Copy the sample configuration file

Copy /usr/local/zookeeper-3.4.12/conf/zoo_sample.cfg to zoo.cfg:

[root@master ~]# cd /usr/local/zookeeper-3.4.12/conf
[root@master conf]# cp zoo_sample.cfg zoo.cfg

4. Edit zoo.cfg

Set the data directory, the transaction log directory and the three server entries:

dataDir=/usr/local/zookeeper-3.4.12/data
dataLogDir=/usr/local/zookeeper-3.4.12/logs
server.1=master:2888:3888
server.2=node1:2888:3888
server.3=node2:2888:3888
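
For reference, a complete zoo.cfg for this cluster might look as follows (tickTime, initLimit, syncLimit and clientPort are the defaults shipped in zoo_sample.cfg; the log directory is created here explicitly, although ZooKeeper will normally create missing directories itself):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.12/data
dataLogDir=/usr/local/zookeeper-3.4.12/logs
clientPort=2181
server.1=master:2888:3888
server.2=node1:2888:3888
server.3=node2:2888:3888

[root@master ~]# mkdir /usr/local/zookeeper-3.4.12/logs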

5. Create the myid file on master:

[root@master conf]#echo 1 > /usr/local/zookeeper-3.4.12/data/myid

6. Copy the configured ZooKeeper directory to node1 and node2

[root@master conf]# scp -r /usr/local/zookeeper-3.4.12/ root@node1:/usr/local/
[root@master conf]# scp -r /usr/local/zookeeper-3.4.12/ root@node2:/usr/local/

7. Set the myid files on node1 and node2

[root@node1 ~]# echo 2 > /usr/local/zookeeper-3.4.12/data/myid
[root@node2 ~]# echo 3 > /usr/local/zookeeper-3.4.12/data/myid
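
To confirm the three IDs before starting anything, a quick loop from master works (assuming ssh access from master to all nodes, as the scp step above already relies on); it should print 1, 2 and 3:

[root@master ~]# for h in master node1 node2; do echo -n "$h: "; ssh $h cat /usr/local/zookeeper-3.4.12/data/myid; done
master: 1
node1: 2
node2: 3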

8. Configure environment variables

Append the following to /etc/profile; repeat this on node1 and node2 so that zkServer.sh can be invoked by name on every node:

[root@master zookeeper-3.4.12]# vi /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.12
export PATH=$PATH:$ZOOKEEPER_HOME/bin
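
After saving, reload the profile and confirm that the variable is set (a quick check; the echoed path simply reflects the export above):

[root@master zookeeper-3.4.12]# source /etc/profile
[root@master zookeeper-3.4.12]# echo $ZOOKEEPER_HOME
/usr/local/zookeeper-3.4.12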

9. Fixing the "JAVA_HOME is not set" error

When zkServer.sh is started remotely (for example through Ansible's shell module), /etc/profile is typically not sourced, so the script may fail to find Java. The paths and IPs below come from a ZooKeeper 3.5.6 deployment under /usr/local/zookeeper/zookeeper-3.5.6; adjust them to your own layout.

[root@master conf]# ansible node -m shell -a '/usr/local/zookeeper/zookeeper-3.5.6/bin/zkServer.sh start'
192.168.1.73 | FAILED | rc=1 >>
Error: JAVA_HOME is not set and java could not be found in PATH.non-zero return code

192.168.1.74 | FAILED | rc=1 >>
Error: JAVA_HOME is not set and java could not be found in PATH.non-zero return code

192.168.1.72 | CHANGED | rc=0 >>
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-3.5.6/bin/../conf/zoo.cfg

The fix is to set JAVA_HOME explicitly near the top of zkServer.sh on each affected node:

[root@node1 ~]# vim /usr/local/zookeeper/zookeeper-3.5.6/bin/zkServer.sh
JAVA_HOME=/usr/local/jdk1.8
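
Instead of editing zkServer.sh on every node, an alternative is to put JAVA_HOME in conf/zookeeper-env.sh, which zkEnv.sh sources if the file exists, and push it out with Ansible's copy module. This is only a sketch based on the 3.5.6 layout used above; the JDK path /usr/local/jdk1.8 is taken from the fix shown earlier:

[root@master conf]# echo 'export JAVA_HOME=/usr/local/jdk1.8' > /usr/local/zookeeper/zookeeper-3.5.6/conf/zookeeper-env.sh
[root@master conf]# ansible node -m copy -a 'src=/usr/local/zookeeper/zookeeper-3.5.6/conf/zookeeper-env.sh dest=/usr/local/zookeeper/zookeeper-3.5.6/conf/'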

III. Verify the installation

1. Start ZooKeeper on each node

[root@master zookeeper-3.4.12]# zkServer.sh start
[root@node1 zookeeper-3.4.12]# zkServer.sh start
[root@node2 zookeeper-3.4.12]# zkServer.sh start

Automated start with Ansible (the Ansible examples in this post use a separate ZooKeeper 3.5.6 installation under /usr/local/zookeeper/zookeeper-3.5.6 and the host group "node"; adjust the path and inventory to your own setup):

[root@master conf]# ansible node -m shell -a '/usr/local/zookeeper/zookeeper-3.5.6/bin/zkServer.sh start'
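
If Ansible is not available, a plain ssh loop does the same job; adjust the path to the actual install location (for the 3.4.12 install described above it would be /usr/local/zookeeper-3.4.12):

[root@master ~]# for h in node1 node2; do ssh $h /usr/local/zookeeper-3.4.12/bin/zkServer.sh start; done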

2. Check the status

[root@master zookeeper-3.4.12]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.12/bin/../conf/zoo.cfg
Mode: leader
[root@node1 zookeeper-3.4.12]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.12/bin/../conf/zoo.cfg
Mode: follower
[root@node2 zookeeper-3.4.12]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.12/bin/../conf/zoo.cfg
Mode: follower

Automated status check with Ansible:

[root@master conf]# ansible node -m shell -a '/usr/local/zookeeper/zookeeper-3.5.6/bin/zkServer.sh status'
192.168.1.72 | CHANGED | rc=0 >>
Client port found: 2181. Client address: localhost.
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-3.5.6/bin/../conf/zoo.cfg

192.168.1.73 | CHANGED | rc=0 >>
Client port found: 2181. Client address: localhost.
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-3.5.6/bin/../conf/zoo.cfg

192.168.1.74 | CHANGED | rc=0 >>
Client port found: 2181. Client address: localhost.
Mode: leader
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-3.5.6/bin/../conf/zoo.cfg

3. Stop ZooKeeper

[root@master zookeeper-3.4.12]# zkServer.sh stop
[root@master conf]# ansible node -m shell -a '/usr/local/zookeeper/zookeeper-3.5.6/bin/zkServer.sh stop'

4. Start a ZooKeeper instance with a specific configuration file

[root@master zookeeper-3.4.12]# zkServer.sh start conf/zoo-1.cfg

5. Check the status of an instance started with a specific configuration file

[root@master zookeeper-3.4.12]# zkServer.sh status conf/zoo-1.cfg
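
The file zoo-1.cfg is not included in this post; as a hypothetical example, a minimal standalone test instance could be described like this (note the separate dataDir and clientPort so it does not clash with the main instance):

tickTime=2000
dataDir=/usr/local/zookeeper-3.4.12/data-1
clientPort=2182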

IV. Configuration parameters (zoo.cfg)

1. server.A=B:C:D

A: the server number. Each server's myid file contains this number; on startup ZooKeeper reads myid and compares it with the server.A entries in zoo.cfg to work out which server it is.
B: the host name or IP address of that server.
C: the quorum port, used by the server to exchange data with the cluster leader.
D: the election port. If the leader goes down, the servers talk to each other over this port to elect a new leader.
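
For example, with the cluster above:

server.2=node1:2888:3888

together with a myid file on node1 containing 2, means: server number 2 runs on host node1, exchanges data with the leader over port 2888, and uses port 3888 for leader election.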

2. tickTime: heartbeat interval

The basic ZooKeeper time unit, in milliseconds, used for heartbeats between servers and between clients and servers: one heartbeat is sent every tickTime. The minimum session timeout is twice the heartbeat interval (2 * tickTime). Default: 2000.

3. initLimit: initial synchronization limit (leader/follower)

The maximum number of heartbeats (ticks) a follower (F) may take to connect to and synchronize with the leader (L) when the cluster starts up; it bounds how long a ZooKeeper server may take to connect to the leader. During startup a follower copies all of the latest data from the leader before it can start serving clients, and the leader allows initLimit * tickTime for this to finish. Default: 10.

4. syncLimit: synchronization limit during normal operation

The maximum response time, in ticks, between the leader and a follower while the cluster is running. During operation the leader keeps in contact with every machine in the ensemble, for example through heartbeat checks; if a follower has not responded within syncLimit * tickTime, the leader considers it dead and removes it from the server list. Default: 5.

5. dataDir: data directory

Where the in-memory database snapshots are stored; unless dataLogDir is set, the transaction log of updates to the database is written here as well.

6. clientPort: the port clients connect to. Default: 2181.
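
Putting the defaults together, the effective timings for this cluster work out as follows (minSessionTimeout and maxSessionTimeout are ZooKeeper's defaults of 2 and 20 ticks when not set explicitly):

tickTime             = 2000 ms
initLimit * tickTime = 10 * 2000 ms = 20 s   (follower must finish its initial sync within this)
syncLimit * tickTime =  5 * 2000 ms = 10 s   (leader drops a follower that stays silent this long)
minSessionTimeout    =  2 * tickTime =  4 s
maxSessionTimeout    = 20 * tickTime = 40 s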

V. Client command-line operations

1. Start the client

[root@master zookeeper-3.4.12]$ zkCli.sh

2. Show all available commands with help

[zk: localhost:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
    stat path [watch]
    set path data [version]
    ls path [watch]
    delquota [-n|-b] path
    ls2 path [watch]
    setAcl path acl
    setquota -n|-b val path
    history 
    redo cmdno
    printwatches on|off
    delete path [version]
    sync path
    listquota path
    rmr path
    get path [watch]
    create [-s] [-e] path data acl
    addauth scheme auth
    quit 
    getAcl path
    close 
    connect host:port

3. List the children of a znode

[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]

4. List the children together with the node's status data (update counts, etc.)

[zk: localhost:2181(CONNECTED) 2] ls2 /
[zookeeper]
cZxid = 0x0
ctime = Wed Dec 31 19:00:00 EST 1969
mZxid = 0x0
mtime = Wed Dec 31 19:00:00 EST 1969
pZxid = 0x0
cversion = -1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 1

5. Create ordinary (persistent) nodes

[zk: localhost:2181(CONNECTED) 3] create /opt "aa"
Created /opt
[zk: localhost:2181(CONNECTED) 4] create /opt/module "bb"
Created /opt/module

6. Get a node's value

[zk: localhost:2181(CONNECTED) 5] get /opt
aa
cZxid = 0x4100000004
ctime = Wed Jul 25 07:48:55 EDT 2018
mZxid = 0x4100000004
mtime = Wed Jul 25 07:48:55 EDT 2018
pZxid = 0x4100000005
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 2
numChildren = 1
[zk: localhost:2181(CONNECTED) 6] get /opt/module
bb
cZxid = 0x4100000005
ctime = Wed Jul 25 07:51:21 EDT 2018
mZxid = 0x4100000005
mtime = Wed Jul 25 07:51:21 EDT 2018
pZxid = 0x4100000005
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 2
numChildren = 0

7. Create an ephemeral node

[zk: localhost:2181(CONNECTED) 7] create -e /app 8888           
Created /app

The node is visible in the current client session:

[zk: localhost:2181(CONNECTED) 8] ls /
[app, opt, zookeeper]

Quit the client and then start it again:

[zk: localhost:2181(CONNECTED) 9] quit
[root@master zookeeper-3.4.12]$ bin/zkCli.sh

Listing the root again shows that the ephemeral node has been deleted:

[zk: localhost:2181(CONNECTED) 0] ls /
[opt, zookeeper]

8. Create sequential nodes

First create an ordinary parent node /app:

[zk: localhost:2181(CONNECTED) 1] create /app "app"
Created /app

Then create sequential child nodes:

[zk: localhost:2181(CONNECTED) 2] create -s /app/aa 888
Created /app/aa0000000000
[zk: localhost:2181(CONNECTED) 3] create -s /app/bb 888
Created /app/bb0000000001
[zk: localhost:2181(CONNECTED) 4] create -s /app/cc 888
Created /app/cc0000000002

If the parent node already has one child, the sequence numbers of new children start from 1, and so on:

[zk: localhost:2181(CONNECTED) 5] create -s /opt/aa 888
Created /opt/aa0000000001

9. Set a node's value

[zk: localhost:2181(CONNECTED) 6] set /opt 999
cZxid = 0x4100000004
ctime = Wed Jul 25 07:48:55 EDT 2018
mZxid = 0x410000000e
mtime = Wed Jul 25 08:14:18 EDT 2018
pZxid = 0x410000000d
cversion = 2
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 2

10. Watch a node for data changes

Register a watch on the data of /opt from node1:

[root@node1 ~]$ zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] get /opt watch

On master, change the data of /opt:

[zk: localhost:2181(CONNECTED) 7] set /opt 777

On node1, observe the data-change notification:

[zk: localhost:2181(CONNECTED) 1] 
WATCHER::
WatchedEvent state:SyncConnected type:NodeDataChanged path:/opt
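
Note that ZooKeeper watches are one-shot: the notification above is delivered exactly once, and node1 will not be told about later changes to /opt unless it registers the watch again, e.g.:

[zk: localhost:2181(CONNECTED) 2] get /opt watch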

11. Watch a node for child (path) changes

Register a watch on the children of /opt from node1:

[zk: localhost:2181(CONNECTED) 2] ls /opt watch
[aa0000000001, module]

For reference, on ZooKeeper 3.5.x the data-watch syntax used in step 10 changes: 'get path [watch]' is deprecated in favour of 'get [-s] [-w] path':

[zk: localhost:2181(CONNECTED) 17] get /opt watch
'get path [watch]' has been deprecated. Please use 'get [-s] [-w] path' instead.
opt
[zk: localhost:2181(CONNECTED) 18] get -s /opt
opt
cZxid = 0x100000002
ctime = Mon Feb 03 11:47:57 CST 2020
mZxid = 0x100000002
mtime = Mon Feb 03 11:47:57 CST 2020
pZxid = 0x100000003
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 1
[zk: localhost:2181(CONNECTED) 19] get -s -w /opt
opt
cZxid = 0x100000002
ctime = Mon Feb 03 11:47:57 CST 2020
mZxid = 0x100000002
mtime = Mon Feb 03 11:47:57 CST 2020
pZxid = 0x100000003
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 1

On master, create a child node under /opt:

[zk: localhost:2181(CONNECTED) 8] create /opt/bb 666
Created /opt/bb

On node1, observe the child-change notification:

[zk: localhost:2181(CONNECTED) 3] 
WATCHER::
WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/opt

12. Delete a node

[zk: localhost:2181(CONNECTED) 9] delete /opt/bb

13. Delete a node recursively

[zk: localhost:2181(CONNECTED) 10] rmr /opt
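
On ZooKeeper 3.5.x (the version used in the Ansible examples above), rmr is deprecated in favour of deleteall, so there the equivalent command would be:

[zk: localhost:2181(CONNECTED) 10] deleteall /opt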

delete cannot remove a non-empty node:

[zk: localhost:2181(CONNECTED) 10] delete /opt/temp/1/12 
[zk: localhost:2181(CONNECTED) 11] delete /opt/temp 
Node not empty: /opt/temp

14. View a node's status

[zk: localhost:2181(CONNECTED) 14] stat /app
cZxid = 0x4100000009
ctime = Wed Jul 25 08:09:56 EDT 2018
mZxid = 0x4100000009
mtime = Wed Jul 25 08:09:56 EDT 2018
pZxid = 0x410000000c
cversion = 3
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 3

15. Quit the client

[zk: localhost:2181(CONNECTED) 17] quit 

————Blueicex 2020/2/2 18:20 [email protected]
