Distributed service registration - Zookeeper

1. Zookeeper configuration file

  • tickTime: Client-Server heartbeat time
  • Description : The interval at which heartbeats are exchanged between Zookeeper servers, and between the client and the server; one heartbeat is sent every tickTime. tickTime is in milliseconds.
  • initLimit: Leader-Follower initial communication time limit
  • Description : The maximum number of heartbeats (the number of tickTimes) that can be tolerated while a follower server establishes its initial connection to the leader server in the cluster. The time allowed is initLimit x tickTime.
  • syncLimit: Leader-Follower synchronization communication time limit
  • Description : After initialization, the maximum number of heartbeats (the number of tickTimes) that can be tolerated between a follower server's request and the leader server's response. The time allowed is syncLimit x tickTime.
  • dataDir: Data file directory
  • Description : The directory where Zookeeper saves its data. By default, Zookeeper also writes its transaction log files to this directory.
  • clientPort: Client connection port
  • Note : The port on which the Zookeeper server listens for and accepts client connection requests.
  • Server name and address: Cluster information (server number, server address, LF communication port, election port)
  • Note : This configuration item has a special format; the rule is server.A=B:C:D (see the distributed installation example below).
  • maxClientCnxns: The connection limit for a single client; the default is 60, which is sufficient most of the time. In practice, however, this limit is often exceeded in test environments; investigation showed that some teams deploy dozens of applications on one machine for testing convenience, which pushes the count over the limit.
  • autopurge.snapRetainCount, autopurge.purgeInterval: Clients generate many log entries while interacting with Zookeeper, and Zookeeper also saves snapshots of its in-memory data to disk. None of this is deleted automatically, so disk usage keeps growing. These two parameters let Zookeeper delete old data automatically.
  • Description : autopurge.purgeInterval sets how often (in hours) the cleanup runs, and autopurge.snapRetainCount sets how many snapshots to keep; older ones are deleted. A sample zoo.cfg combining these settings is sketched at the end of this section.
  • Distributed installation
# server.A=B:C:D
server.1=itcast05:2888:3888
server.2=itcast06:2888:3888
server.3=itcast07:2888:3888
  • Parameter interpretation:
  • A is a number indicating which server this is
  • B is the ip address (or hostname) of this server
  • C is the port this server uses to exchange information with the Leader server in the cluster
  • D is the election port: if the Leader server in the cluster goes down, the servers use this port to communicate with each other and elect a new Leader.
  • A myid file is created under Zookeeper's data directory, and each server's myid value is written into it, for example 1 for the first server, 2 for the second, 3 for the third, and so on.
  • The value in myid is the A of the server.A entries defined in zoo.cfg. Zookeeper reads this file when it starts and compares the value with the configuration information in zoo.cfg to determine which server it is, so the file acts as the server's identity (see the sketch below).
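Putting the parameters above together, a minimal zoo.cfg for the three-server example might look like the following sketch. The numeric values and the /opt/zookeeper/data path are illustrative assumptions, not settings from the original article; only the server.A lines come from the example above.

# zoo.cfg (illustrative values)
tickTime=2000                    # heartbeat interval, in milliseconds
initLimit=10                     # a follower may take up to 10 x tickTime to connect to the leader
syncLimit=5                      # a follower must respond to the leader within 5 x tickTime
dataDir=/opt/zookeeper/data      # assumed path; snapshots (and by default logs) are stored here
clientPort=2181                  # port clients connect to
maxClientCnxns=60                # per-client connection limit
autopurge.snapRetainCount=3      # keep the 3 most recent snapshots
autopurge.purgeInterval=1        # run the cleanup every 1 hour
server.1=itcast05:2888:3888
server.2=itcast06:2888:3888
server.3=itcast07:2888:3888

On each machine the matching myid file would then be created under dataDir, for example echo 1 > /opt/zookeeper/data/myid on itcast05.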

2. Zookeeper election mechanism

2.1. The more-than-half mechanism

  • The cluster is available as long as more than half of its machines are alive, which is why Zookeeper is suited to an odd number of servers. For example, a 5-server cluster tolerates 2 failures (quorum of 3), while a 6-server cluster also tolerates only 2 failures (quorum of 4), so the extra server adds no fault tolerance.
  • Although Zookeeper does not designate a Master and Slaves in the configuration file, at runtime one node acts as the Leader and the other nodes are Followers. The Leader is produced dynamically through the internal election mechanism.
  • Suppose there are five Zookeeper servers that are started one after another. When server 1 starts, it votes for itself, but one vote is less than 3 (not a majority), so it cannot become Leader. When server 2 starts, server 1 switches its vote to server 2, because a server always votes for the candidate with the larger id, yet two votes are still not a majority. When server 3 starts, it collects the votes of the first two servers plus its own, which is more than half, so server 3 becomes the Leader. Once the Leader is determined, the servers that start afterwards can only become Followers.

3. Node types

3.1. Persistent

  • Persistent node : After the client disconnects from the server, the created node is not deleted.
  • Persistent sequential node : After the client disconnects from Zookeeper, the node still exists, but Zookeeper appends a sequential number to the node name ( Note : set the sequence flag when creating the znode; a value from a monotonically increasing counter, maintained by the parent node, is appended to the znode name).
  • Note : In a distributed system, the sequence number can be used to order all events globally, so clients can infer the order of events from it, for example which server came online first or which server failed first. A small shell sketch follows.
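A minimal sketch in the zkCli shell, using a hypothetical /app1 path and made-up data values:

create /app1 "data"               # persistent node: survives after the session ends
create -s /app1/task "t"          # persistent sequential node
# Created /app1/task0000000000      <- the parent appends a monotonically increasing counter
create -s /app1/task "t"
# Created /app1/task0000000001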

3.2. Ephemeral

  • Ephemeral node : After the client disconnects from the server, the created node is deleted automatically.
  • Ephemeral sequential node : After the client disconnects from the server, the created node is also deleted automatically, but Zookeeper additionally appends a sequential number to the node name.
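The corresponding zkCli sketch, again with hypothetical paths (-e marks the node as ephemeral):

create -e /app1/lock "x"          # ephemeral node: removed when this session ends
create -e -s /app1/worker "w"     # ephemeral sequential node
# Created /app1/worker0000000002    <- the counter is shared across all children of /app1
quit                              # after the session closes, both ephemeral nodes are deleted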

4. Client shell commands

  • ls /: View the child nodes of a path
  • ls2 / : View the child nodes with detailed (stat) information
  • get path: View the data of a node
  • create -e path : Create an ephemeral node
  • create -s path: Create a sequential node
  • set path data: Modify the data of a node
  • get path watch: Watch for changes to the node's data
  • ls path watch: Watch for changes to the node's children (path)
  • delete path: Delete a node
  • rmr path: Delete nodes recursively
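A short example session tying these commands together, using the older zkCli syntax listed above; the node name /test and its data are illustrative assumptions:

ls /                     # list the children of the root
create /test "hello"     # create a persistent node holding "hello"
get /test                # print the data and the node's stat information
set /test "world"        # update the data
get /test watch          # register a one-time watch on the node's data
ls /test watch           # register a one-time watch on the node's children
delete /test             # delete the node (fails if it still has children)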

5. Stat structure

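As a rough illustration, the stat command in zkCli prints a node's Stat structure; its fields are, approximately (the /test path is the hypothetical node created in the session above):

stat /test
# cZxid          - zxid of the transaction that created the node
# ctime          - time the node was created
# mZxid          - zxid of the last update to the node's data
# mtime          - time of the last update
# pZxid          - zxid of the last change to the node's children
# cversion       - number of changes to the node's children
# dataVersion    - number of changes to the node's data
# aclVersion     - number of changes to the node's ACL
# ephemeralOwner - session id of the owner if the node is ephemeral, otherwise 0
# dataLength     - length of the node's data in bytes
# numChildren    - number of child nodes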

6. Monitoring principle


7. Data writing process


Origin blog.csdn.net/JISOOLUO/article/details/105754390