ZooKeeper Learning 01

Introduction to ZooKeeper

ZooKeeper is best understood from the perspective of design patterns: it is a distributed service management framework designed around the observer pattern. It stores and manages the data that applications care about and accepts registrations from observers; once the state of that data changes, ZooKeeper notifies the observers that have registered with it so they can react accordingly.
ZooKeeper = file system + notification mechanism
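In command-line terms the notification mechanism works like this (a minimal sketch; /app is just a placeholder path, and the full workflow is demonstrated later in this article):

# client A: read a znode and register a watch on it
get -w /app

# client B: change the znode's data
set /app "new-value"

# client A then receives a one-time NodeDataChanged notification for /app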

Working mechanism of ZooKeeper
(Figure: ZooKeeper working mechanism)

Install

ZooKeeper is installed on three virtual machines (hadoop102, hadoop103, hadoop104); the detailed installation steps are not repeated here.
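For reference, a minimal configuration sketch for such a three-node cluster (the data directory and server ids below are assumptions; adjust them to your own environment, and remember that each server needs a myid file under dataDir containing its own id):

# conf/zoo.cfg (sketch)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/software/zookeeper-3.5.7/zkData    # assumed data directory
clientPort=2181
# server.<id>=<host>:<quorum port>:<leader election port>
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888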

Client Command Line Operations

Start ZooKeeper

[root@hadoop102 software]# my_zookeeper.sh status
---------- zookeeper hadoop102 status ------------
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
---------- zookeeper hadoop103 status ------------
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
---------- zookeeper hadoop104 status ------------
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@hadoop102 zookeeper-3.5.7]# ./bin/zkCli.sh 
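The my_zookeeper.sh used above is a custom cluster script, not part of the ZooKeeper distribution. A minimal sketch of what such a script might look like (host names and install path taken from the output above):

#!/bin/bash
# my_zookeeper.sh (sketch): run zkServer.sh with the given action on every node
case $1 in
start|stop|status)
    for host in hadoop102 hadoop103 hadoop104
    do
        echo "---------- zookeeper $host $1 ------------"
        ssh $host "/opt/software/zookeeper-3.5.7/bin/zkServer.sh $1"
    done
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    ;;
esac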

View the content contained in the current znode

[zk: localhost:2181(CONNECTED) 1] ls /
[hbase, zookeeper]

View the detailed data of the current node

[zk: localhost:2181(CONNECTED) 2] ls -s /
[hbase, zookeeper]cZxid = 0x0
ctime = Thu Jan 01 08:00:00 CST 1970
mZxid = 0x0
mtime = Thu Jan 01 08:00:00 CST 1970
pZxid = 0x200000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 2

Create two ordinary (persistent) nodes

[zk: localhost:2181(CONNECTED) 3] create /sanguo "caocao"
Created /sanguo
[zk: localhost:2181(CONNECTED) 4] create /sanguo/weiguo "dianwei"
Created /sanguo/weiguo

get the value of the node

[zk: localhost:2181(CONNECTED) 5] get /sanguo
caocao
[zk: localhost:2181(CONNECTED) 6] get -s /sanguo
caocao
cZxid = 0x600000002
ctime = Mon Apr 11 15:32:05 CST 2022
mZxid = 0x600000002
mtime = Mon Apr 11 15:32:05 CST 2022
pZxid = 0x600000003
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 6
numChildren = 1
[zk: localhost:2181(CONNECTED) 7] get -s /sanguo/weiguo
dianwei
cZxid = 0x600000003
ctime = Mon Apr 11 15:32:53 CST 2022
mZxid = 0x600000003
mtime = Mon Apr 11 15:32:53 CST 2022
pZxid = 0x600000003
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0

Create an ephemeral (temporary) node

[zk: localhost:2181(CONNECTED) 8] create -e /sanguo/wuguo "zhouyu"
Created /sanguo/wuguo

(1) The node is visible on the current client

[zk: localhost:2181(CONNECTED) 9] ls /sanguo
[weiguo, wuguo]

(2) Exit the current client and restart the client

quit
./bin/zkCli.sh 

(3) Check again: the ephemeral node has been deleted (ephemeral nodes are removed when the session that created them ends)

[zk: localhost:2181(CONNECTED) 0] ls /sanguo
[weiguo]

Create nodes with sequence numbers
(1) First create an ordinary parent node

[zk: localhost:2181(CONNECTED) 1] create /sanguo/shuguo "liubei"
Created /sanguo/shuguo

(2) Create nodes with sequence numbers

[zk: localhost:2181(CONNECTED) 2] create /sanguo/shuguo "liubei"
Node already exists: /sanguo/shuguo
[zk: localhost:2181(CONNECTED) 3] create -s /sanguo/shuguo "liubei"
Created /sanguo/shuguo0000000003
[zk: localhost:2181(CONNECTED) 4] create -s /sanguo/shuguo "liubei"
Created /sanguo/shuguo0000000004
[zk: localhost:2181(CONNECTED) 5] ls /sanguo
[shuguo, shuguo0000000003, shuguo0000000004, weiguo]

If the parent node has no children yet, the sequence number starts from 0 and increases by one for each node created. If the parent node already has 2 children, newly created sequential nodes are numbered starting from 2, and so on.
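For example, under a brand-new parent node the numbering starts at 0 (a sketch using a hypothetical /test path):

[zk: localhost:2181(CONNECTED) 0] create /test "t"
Created /test
[zk: localhost:2181(CONNECTED) 1] create -s /test/node "a"
Created /test/node0000000000
[zk: localhost:2181(CONNECTED) 2] create -s /test/node "b"
Created /test/node0000000001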

Modify node data value

[zk: localhost:2181(CONNECTED) 6] set /sanguo/weiguo "caopi"

Watch for changes to a node's data value
(1) On the hadoop104 host, register a watch for data changes on the /sanguo node

[zk: localhost:2181(CONNECTED) 0] get -w /sanguo
caocao

(2) Modify the data of the /sanguo node on the hadoop103 host

[zk: localhost:2181(CONNECTED) 0] set /sanguo "diaochan"

(3) Observe the data-change notification received by the hadoop104 host
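The notification printed on hadoop104 typically looks like the following (the exact wording may vary slightly between versions):

WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/sanguo

Note that a watch registered with get -w fires only once; to keep watching, it has to be registered again.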
Watch for changes to a node's child nodes (path changes)
(1) On the hadoop104 host, register a watch on the child nodes of /sanguo

[zk: localhost:2181(CONNECTED) 1] ls -w /sanguo
[shuguo, shuguo0000000003, shuguo0000000004, weiguo]

(2) On the hadoop103 host, create a child node under /sanguo

[zk: localhost:2181(CONNECTED) 1] create /sanguo/win "simayi"
Created /sanguo/win

(3) Observe the child-node change notification received by the hadoop104 host
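The notification on hadoop104 typically looks like the following (again a one-time event, this time of type NodeChildrenChanged):

WATCHER::

WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/sanguo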
Delete a node

[zk: localhost:2181(CONNECTED) 7] delete /sanguo/win

Delete a node recursively

[zk: localhost:2181(CONNECTED) 8] deleteall /sanguo/shuguo

View node status

[zk: localhost:2181(CONNECTED) 11] stat /sanguo
cZxid = 0x600000002
ctime = Mon Apr 11 15:32:05 CST 2022
mZxid = 0x60000000e
mtime = Mon Apr 11 15:48:00 CST 2022
pZxid = 0x600000011
cversion = 9
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 8
numChildren = 3

The next article will introduce the ZooKeeper API.

Origin blog.csdn.net/weixin_46322367/article/details/124100178