"Chapter 4 [Expansion and Advanced] Distributed" in "Special Topic Four Service Transformation", "Section 4 Zookeeper Core Functions and Application Scenarios", "Section 5 Distributed Locks"

"4.4.1 Introduction to ZooKeeper"


  • [12:10] Application cases of ZooKeeper

  • [13:15] Products similar to zk:
    Consul, etcd (more lightweight than zk), Doozer

  • [18:00] Stand-alone installation (see the official documentation):
    Start the server: bin/zkServer.sh start
    Connect with the client: bin/zkCli.sh -server 127.0.0.1:2181

  • [24:45+] CLI operation guide; a few common commands are sketched below.
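    A few basic zkCli commands (the paths and data below are made-up examples):
    ls /             # list the children of a znode
    create /app v1   # create a persistent znode /app with data "v1"
    get /app         # read the data (and stat) of /app
    set /app v2      # update the data
    stat /app        # show only the stat (versions, zxids, number of children, ...)
    delete /app      # delete /app (it must have no children)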

  • [30:57+] Java API; a minimal sketch follows.
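    A minimal sketch of the raw Java API (org.apache.zookeeper.ZooKeeper); the address, session timeout, path and data are made-up examples:
    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.Stat;
    import java.util.concurrent.CountDownLatch;

    public class ZkApiDemo {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 5000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();   // session established
                }
            });
            connected.await();

            zk.create("/demo", "hello".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            Stat stat = new Stat();
            byte[] data = zk.getData("/demo", false, stat);
            System.out.println(new String(data) + ", version=" + stat.getVersion());

            zk.setData("/demo", "world".getBytes(), stat.getVersion());
            zk.delete("/demo", -1);          // -1 = ignore the version check
            zk.close();
        }
    }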

  • [36:35] Third-party clients: ZkClient, Curator; a Curator sketch follows.
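    A minimal Curator sketch; the connection string, retry settings and path are made-up examples:
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.zookeeper.CreateMode;

    public class CuratorDemo {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            client.create().creatingParentsIfNeeded()
                  .withMode(CreateMode.PERSISTENT)
                  .forPath("/demo/curator", "hi".getBytes());
            byte[] data = client.getData().forPath("/demo/curator");
            System.out.println(new String(data));

            client.close();
        }
    }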

"4.4.2 ZooKeeper Core Concept"

  • Session

  • Data model:

    • [6:05+] znode naming conventions
    • [8:00] znode node types; see the zkCli sketch below.
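      The node types can be tried out from zkCli (the paths and data are made-up examples):
      create /pnode data           # persistent: stays until explicitly deleted
      create -e /enode data        # ephemeral: removed automatically when the session ends
      create -s /snode- data       # sequential: the server appends a counter, e.g. /snode-0000000003
      create -s -e /esnode- data   # ephemeral + sequential (used for locks, queues, election)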
    • [14:05] znode data structure
    • [23:40] ACLs
    • [27:50] Time in ZooKeeper
  • [32:55+] Watch (notification) mechanism

    • [51:25] ZkClient's subscription methods can monitor continuously; a sketch follows.
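      A minimal sketch of this point: ZkClient re-registers the underlying watch after each notification, so the listener keeps firing on every change. Host and path are made-up examples:
      import org.I0Itec.zkclient.IZkDataListener;
      import org.I0Itec.zkclient.ZkClient;

      public class ZkClientWatchDemo {
          public static void main(String[] args) {
              ZkClient zkClient = new ZkClient("127.0.0.1:2181", 5000);
              zkClient.subscribeDataChanges("/demo", new IZkDataListener() {
                  @Override
                  public void handleDataChange(String dataPath, Object data) {
                      System.out.println(dataPath + " changed to " + data);
                  }
                  @Override
                  public void handleDataDeleted(String dataPath) {
                      System.out.println(dataPath + " was deleted");
                  }
              });
              // The listener stays registered for the life of the client,
              // which is why the monitoring is continuous rather than one-shot.
          }
      }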

[56:35+] Features of zk

"4.4.3 ZooKeeper Typical Application Scenarios"

  • [16:00] Naming service

  • [16:50] Master election:
    Method 1: all servers race to create the same ephemeral node (e.g. /master); only one create succeeds and that server becomes master, while the others set a watch on the node (sketched below).
    Method 2: smallest-node method: each server creates an ephemeral sequential child under a servers node, and the ordering of the children (smallest sequence number) determines the master.
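    A minimal sketch of Method 1 with the raw API (the path /master and the server id are made-up examples; error handling is mostly omitted):
    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.Stat;

    public class MasterElection {
        private final ZooKeeper zk;
        private final String serverId;

        public MasterElection(ZooKeeper zk, String serverId) {
            this.zk = zk;
            this.serverId = serverId;
        }

        public void runForMaster() throws Exception {
            try {
                // Only one of the competing servers succeeds here.
                zk.create("/master", serverId.getBytes(),
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                System.out.println(serverId + " is now the master");
            } catch (KeeperException.NodeExistsException e) {
                // Lost the race: watch /master and run again once it disappears
                // (the ephemeral node vanishes when the master's session dies).
                Stat stat = zk.exists("/master", event -> {
                    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                        try { runForMaster(); } catch (Exception ignored) { }
                    }
                });
                if (stat == null) {
                    runForMaster();   // the master vanished in between: try again
                }
            }
        }
    }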

  • [30:42+] Distributed queues; a sketch follows.
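    A minimal sketch of the idea: producers create persistent sequential children under a queue znode and a consumer always takes the child with the smallest sequence number (/queue is a made-up example; races between concurrent consumers are not handled):
    import org.apache.zookeeper.*;
    import java.util.Collections;
    import java.util.List;

    public class ZkQueue {
        private final ZooKeeper zk;

        public ZkQueue(ZooKeeper zk) { this.zk = zk; }

        public void offer(byte[] item) throws Exception {
            // Each item becomes /queue/item-0000000000, /queue/item-0000000001, ...
            zk.create("/queue/item-", item,
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        }

        public byte[] poll() throws Exception {
            List<String> children = zk.getChildren("/queue", false);
            if (children.isEmpty()) return null;
            Collections.sort(children);              // smallest sequence number = head
            String head = "/queue/" + children.get(0);
            byte[] data = zk.getData(head, false, null);
            zk.delete(head, -1);                     // a real queue must handle the race here
            return data;
        }
    }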

  • [34:55+] Distributed locks (implementation details in 4.5.2 below)

"4.4.4 ZooKeeper Cluster"

  1. conf/zoo1.cfg is as follows; zoo2.cfg and zoo3.cfg are similar, each with its own dataDir and clientPort (a zoo2.cfg sketch appears after this list):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zk1
clientPort=2181
server.1=localhost:2881:3881
server.2=localhost:2882:3882
server.3=localhost:2883:3883
  2. Create a myid file under /var/lib/zk1 with content 1; nodes 2 and 3 are similar (myid contents 2 and 3 under their own dataDirs). If ZooKeeper runs as the zk user but the folder is owned by root, also change the owner and permissions:
    sudo chown -R zk:zk /var/lib/zk1
    chmod -R 775 /var/lib/zk1

  3. Start the nodes one by one: java -cp zookeeper-3.4.10.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo1.cfg
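A sketch of how zoo2.cfg differs when all three nodes run on the same machine (the ports and directories are examples; the server.N lines stay identical, and zoo3.cfg follows the same pattern):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zk2
clientPort=2182
server.1=localhost:2881:3881
server.2=localhost:2882:3882
server.3=localhost:2883:3883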

  • [22:43] Monitoring commands; examples below.
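    Presumably this refers to the "four letter word" commands, which are sent over the client port, for example:
    echo ruok | nc 127.0.0.1 2181   # replies "imok" if the server is running
    echo stat | nc 127.0.0.1 2181   # version, connected clients, mode (leader/follower), node count
    echo conf | nc 127.0.0.1 2181   # the serving configuration
    echo mntr | nc 127.0.0.1 2181   # metrics suitable for monitoring systems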

  • [26:00+] ZAB protocol:
    While a leader election is in progress, zk is unavailable (it cannot serve requests until the new leader is elected).

"4.4.5 Detailed Distributed Consistency Protocol"

  • An undo log is required for rollback; a redo log is required for commit.

  • [15:00] Two-phase commit (2PC), sketched below. My note: in terms of protocol steps, 2PC resembles the ZAB protocol, with the coordinator playing a role similar to ZAB's leader. The difference is not only that ZAB needs acknowledgements from just over half of the nodes while 2PC needs all of them; the most fundamental difference is that 2PC is an atomic-commit (consistency) protocol, whereas ZAB, like Paxos below, is a consensus algorithm.
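    A minimal sketch of the coordinator side described above (Participant, prepare/commit/rollback are made-up names for illustration, not a real framework API):
    import java.util.List;

    interface Participant {
        boolean prepare(String txId);   // phase 1: vote yes/no after writing undo/redo logs
        void commit(String txId);       // phase 2a: make the change permanent
        void rollback(String txId);     // phase 2b: undo the change
    }

    class Coordinator {
        void runTransaction(String txId, List<Participant> participants) {
            boolean allYes = true;
            for (Participant p : participants) {
                allYes &= p.prepare(txId);          // phase 1: collect every vote
            }
            if (allYes) {
                participants.forEach(p -> p.commit(txId));     // all voted yes: commit
            } else {
                participants.forEach(p -> p.rollback(txId));   // any "no" (or timeout): roll back
            }
        }
    }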

  • [30:40+] 3PC:
    The difficulty of applying 3PC lies in choosing the timeout values; 3PC is rarely used in practice.

  • [46:00+] Paxos. From Wikipedia: note that Paxos is often mistakenly called a "consistency algorithm", but "consistency" and "consensus" are not the same concept; Paxos is a consensus algorithm.

Problems with having a leader: the leader's load is high and it is a single point of failure; if the leader goes down, the whole cluster is unavailable until a new leader is elected.

  • [100:45] ZAB and Raft are derived from Paxos.

"4.5.2 Zookeeper Distributed Lock Implementation"

  • [23:07] Why ReentrantLock has no thundering-herd effect: in AQS, only the longest-waiting queued thread competes for the lock with a newly arriving thread (the unfair-lock case). The second way of implementing a distributed lock with zk, i.e. with ephemeral sequential nodes, uses the same idea: a queue in which each waiter watches only its predecessor. A sketch follows.
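    A minimal sketch of that second approach with the raw API (the path /locks and the prefix lock- are made-up examples; it assumes /locks already exists and omits error handling):
    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.Stat;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.CountDownLatch;

    public class ZkSeqLock {
        private final ZooKeeper zk;
        private String myNode;   // e.g. /locks/lock-0000000007

        public ZkSeqLock(ZooKeeper zk) { this.zk = zk; }

        public void lock() throws Exception {
            // 1. Create an ephemeral sequential node; it disappears if our session dies.
            myNode = zk.create("/locks/lock-", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            while (true) {
                List<String> children = zk.getChildren("/locks", false);
                Collections.sort(children);
                int idx = children.indexOf(myNode.substring("/locks/".length()));
                if (idx == 0) {
                    return;   // smallest node: lock acquired
                }
                // 2. Watch only the node immediately in front of us (the queue idea,
                //    which avoids the thundering herd mentioned above).
                String prev = "/locks/" + children.get(idx - 1);
                CountDownLatch latch = new CountDownLatch(1);
                Stat stat = zk.exists(prev, event -> {
                    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                        latch.countDown();
                    }
                });
                if (stat != null) {
                    latch.await();   // wait for the predecessor to go away, then re-check
                }
            }
        }

        public void unlock() throws Exception {
            zk.delete(myNode, -1);   // -1 = ignore the version check
        }
    }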


Origin blog.csdn.net/qq_23204557/article/details/112758463