Article Directory
"4.4.1 Introduction to ZooKeeper"
-
12:10 Application cases of zookeeper:
-
13:15 Products similar to zk:
Consul, etcd (more lightweight than zk), Doozer -
18:00 Stand-alone installation; see the related official documentation:
start the server: bin/zkServer.sh start
connect with the client: bin/zkCli.sh -server 127.0.0.1:2181
-
24:45+ CLI operation guide:
-
30:57+ Java API
-
36:35 Third-party clients: zkClient, Curator
"4.4.2 ZooKeeper Core Concept"
-
session:
-
Data model:
-
- 6:05+ znode naming conventions
-
- 8:00 znode node types:
-
- 14:05 znode data structure:
-
- 23:40 ACL
-
- 27:50 Time in zookeeper:
-
32:55+ Watch mechanism
-
- 51:25 The subscribe methods in ZkClient can monitor continuously (the one-shot watch is re-registered automatically)
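A raw ZooKeeper watch fires at most once and must be re-registered, which is why wrappers like ZkClient re-subscribe for you. As a minimal sketch of that one-shot semantics (a hypothetical in-memory model, not the real ZooKeeper client API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical in-memory model of ZooKeeper's one-shot watch semantics:
// a watch fires at most once; to keep watching, the client must re-register.
public class OneShotWatchDemo {
    private final Map<String, List<Consumer<String>>> watches = new HashMap<>();

    // Register a watch on a path (analogous to exists/getData with watch=true).
    public void watch(String path, Consumer<String> callback) {
        watches.computeIfAbsent(path, k -> new ArrayList<>()).add(callback);
    }

    // Fire an event: all watches on the path are removed BEFORE delivery,
    // so a second event is silent unless the callback re-registers.
    public void fireEvent(String path, String event) {
        List<Consumer<String>> callbacks = watches.remove(path);
        if (callbacks != null) {
            callbacks.forEach(cb -> cb.accept(event));
        }
    }
}
```

A continuously-monitoring client is then just a callback that calls `watch` again on itself before processing the event.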
56:35+ Features of zk:
"4.4.3 ZooKeeper Typical Application Scenarios"
-
16:00 Naming service:
-
16:50 Master election:
Method 1: race to create an ephemeral node, e.g. master; only one client succeeds, and all the others watch it
Method 2: smallest-node method: each server creates an ephemeral sequential child under the servers node, and the master is determined by the order of the child nodes (the smallest one wins) -
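The decision rule of method 2 can be sketched as pure logic over the child-node names (node names and helpers here are illustrative, not the real ZooKeeper API; only the leader/watch selection is shown, not the networking):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the "smallest node" election rule: each server created an
// ephemeral sequential child such as "node-0000000003"; the smallest
// sequence number is the master.
public class SmallestNodeElection {
    // True if myNode has the smallest sequence number, i.e. is the master.
    public static boolean isMaster(List<String> children, String myNode) {
        return myNode.equals(Collections.min(children));
    }

    // The node a non-master should watch: its immediate predecessor.
    // Watching only the predecessor means a node's failure wakes exactly
    // one successor instead of every server at once.
    public static String nodeToWatch(List<String> children, String myNode) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);
        int i = sorted.indexOf(myNode);
        return i <= 0 ? null : sorted.get(i - 1);
    }
}
```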
30:42+ Distributed queues:
-
34:55+ Distributed lock:
"4.4.4 ZooKeeper Cluster"
- Following the video and https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkMultiServerSetup , I deployed three zk instances on one machine to form a cluster; the specific deployment is as follows:
- conf/zoo1.cfg is as follows; zoo2.cfg and zoo3.cfg are similar (differing in dataDir and clientPort)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zk1
clientPort=2181
server.1=localhost:2881:3881
server.2=localhost:2882:3882
server.3=localhost:2883:3883
-
Create a myid file under /var/lib/zk1 with content 1; for zk2 and zk3 the contents are 2 and 3. If zk runs as the zk user but this folder is owned by root, you also need to change the owner and permissions:
sudo chown -R zk:zk /var/lib/zk1
chmod -R 775 /var/lib/zk1 -
Start them one by one:
java -cp zookeeper-3.4.10.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo1.cfg
-
22:43 Monitoring commands
-
26:00+ ZAB protocol:
zk is unavailable during leader election
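The availability rule behind that statement is the majority quorum: with n servers, writes proceed only while more than n/2 are up, and during an election there is no leader at all. A minimal sketch of the quorum arithmetic (illustrative helper names, not ZooKeeper code):

```java
// Majority-quorum rule that ZAB relies on: an ensemble of n servers
// stays writable only while at least floor(n/2) + 1 servers are alive.
public class QuorumCheck {
    // Smallest number of servers that constitutes a majority.
    public static int quorumSize(int ensembleSize) {
        return ensembleSize / 2 + 1;
    }

    // Whether the cluster can still elect a leader and commit writes.
    public static boolean hasQuorum(int ensembleSize, int aliveServers) {
        return aliveServers >= quorumSize(ensembleSize);
    }
}
```

For the three-node cluster above: losing one server keeps quorum (2 of 3), losing two does not, which is why three nodes tolerate exactly one failure.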
"4.4.5 Detailed Distributed Consistency Protocol"
-
An undo log is required for rollback; a redo log is required for commit
-
15:00 Two-phase commit (2PC), as shown below. Me: in terms of protocol steps, 2PC is similar to the ZAB protocol, with the coordinator playing a role like the leader in ZAB. The difference is that ZAB only requires acks from more than half of the participants, while 2PC requires all of them. The most fundamental difference is that 2PC is a consistency (atomic commit) protocol, while ZAB, like Paxos below, should be a consensus algorithm.
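The "all vs. majority" contrast in the note above can be sketched as two decision rules (pure decision logic under assumed names, not a networked implementation of either protocol):

```java
import java.util.List;

// Sketch contrasting the 2PC decision rule with the ZAB one:
// a 2PC coordinator commits only on a unanimous yes vote, while a
// ZAB leader commits once a majority of the ensemble has acked.
public class TwoPhaseCommit {
    public enum Decision { COMMIT, ABORT }

    // Phase-2 decision from the phase-1 votes: all yes => commit.
    public static Decision decide2pc(List<Boolean> votes) {
        return votes.stream().allMatch(v -> v) ? Decision.COMMIT : Decision.ABORT;
    }

    // ZAB-style rule for comparison: strictly more than half of the
    // ensemble must have acked the proposal.
    public static boolean zabCanCommit(int acks, int ensembleSize) {
        return acks > ensembleSize / 2;
    }
}
```

With 3 participants, one "no" vote aborts a 2PC transaction, whereas a ZAB leader can commit with 2 of 3 acks; that is exactly why 2PC blocks on a single slow or failed participant.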
-
30:40+ 3PC:
The difficulty of applying 3PC lies in setting the timeouts; there are few 3PC deployments in practice -
46:00+ Paxos. Wikipedia: note that Paxos is often mistakenly called a "consistency algorithm", but "consistency" and "consensus" are not the same concept; Paxos is a consensus algorithm.
The problems with having a leader: the leader's load is high, and it is a single point of failure; if the leader fails, the entire cluster is unavailable until a new leader is elected.
- 100:45 ZAB and Raft are derived from Paxos
"4.5.2 Zookeeper Distributed Lock Implementation"
- 23:07 Why ReentrantLock has no thundering-herd effect: in AQS, only the longest-waiting thread competes with a thread that is newly requesting the lock (the unfair-lock case). The second way of implementing distributed locks with zk, using ephemeral sequential nodes, actually uses a similar idea: a queue.
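The queue analogy above can be sketched as a minimal in-memory model (hypothetical class, not the real ZooKeeper or AQS API): waiters form a FIFO queue, each waiter watches only its immediate predecessor, so a release wakes exactly one waiter instead of the whole herd.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// In-memory model of the ephemeral-sequential-node lock queue:
// the head of the queue holds the lock; every other waiter watches
// only its predecessor, avoiding the thundering-herd effect.
public class FairLockQueue {
    private final Deque<String> queue = new ArrayDeque<>();

    // Join the queue; returns the predecessor to watch, or null if the
    // lock was acquired immediately (we are at the head).
    public String acquire(String client) {
        String predecessor = queue.peekLast();
        queue.addLast(client);
        return predecessor;
    }

    // Head releases the lock; exactly one waiter (the new head, which
    // was watching the old head) is notified.
    public String release() {
        queue.pollFirst();
        return queue.peekFirst();
    }
}
```

This mirrors AQS: like the longest-waiting thread in the AQS queue, only the next node in line contends when the lock is released.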