ZooKeeper: a reliable distributed coordination system

1. Getting to know ZooKeeper

Official website address: https://zookeeper.apache.org/
ZooKeeper is an open-source Apache project that provides coordination services for distributed applications.

ZooKeeper = file system + notification mechanism (working model: the observer pattern)
Features:
  1) ZooKeeper is a cluster made up of one Leader and multiple Followers.
  2) The cluster can serve normally as long as more than half of its nodes are alive.
  3) Global data consistency: every server holds an identical copy of the data, so a client sees the same data no matter which server it connects to.
  4) Update requests are processed in order: updates from the same client are applied in the order they were sent.
  5) Updates are atomic: each update either succeeds completely or fails completely.
  6) Timeliness: within a bounded time, a client can read the latest data.
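The observer model mentioned above, where a client registers interest in a node and is notified once when it changes, can be sketched in plain Java. This is just the pattern, not ZooKeeper code; the `Node` class and its methods are hypothetical names chosen for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ObserverDemo {
    // A toy "znode": holds data and notifies registered watchers once on change,
    // mirroring ZooKeeper's one-shot watch semantics.
    static class Node {
        private String data;
        private final List<Consumer<String>> watchers = new ArrayList<>();

        void watch(Consumer<String> watcher) { watchers.add(watcher); }

        void setData(String newData) {
            this.data = newData;
            List<Consumer<String>> fired = new ArrayList<>(watchers);
            watchers.clear(); // watches are one-shot: fire once, then must be re-registered
            for (Consumer<String> w : fired) w.accept(newData);
        }

        String getData() { return data; }
    }

    public static void main(String[] args) {
        Node node = new Node();
        node.watch(d -> System.out.println("changed to: " + d));
        node.setData("v1"); // prints "changed to: v1"
        node.setData("v2"); // prints nothing: the one-shot watch already fired
    }
}
```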

Application scenarios:

  • Unified naming service
  • Unified configuration management
  • Unified cluster management
  • Server nodes dynamically go online and offline
  • Soft load balancing

Election mechanism:

  • Half (quorum) mechanism: the cluster is available as long as more than half of its machines are alive. For this reason ZooKeeper is best deployed on an odd number of servers.
  • ZooKeeper does not designate Leader and Follower in the configuration file. At runtime, one node becomes the Leader through an internal election mechanism, and the rest act as Followers.
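The quorum rule explains the preference for odd-sized clusters: adding a sixth server to a five-node ensemble buys no extra fault tolerance, because a strict majority must still be alive. A minimal sketch of the arithmetic (plain Java, not part of any ZooKeeper API):

```java
public class QuorumDemo {
    // A cluster of `total` servers is available while `alive` is a strict majority.
    static boolean isAvailable(int total, int alive) {
        return alive > total / 2;
    }

    // Maximum number of failures the cluster can tolerate while staying available.
    static int toleratedFailures(int total) {
        return (total - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(toleratedFailures(5)); // 2
        System.out.println(toleratedFailures(6)); // 2 -- no gain over 5 servers
        System.out.println(isAvailable(5, 3));    // true
        System.out.println(isAvailable(6, 3));    // false: 3 is not a strict majority of 6
    }
}
```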
(1) Server 1 starts and initiates an election. It votes for itself, giving it one vote, fewer than the majority of three, so the election cannot complete and Server 1 remains in the LOOKING state.

(2) Server 2 starts and another election is initiated. Servers 1 and 2 each vote for themselves and exchange ballots. Server 1 sees that Server 2's ID is larger than that of its current choice (itself), so it switches its vote to Server 2. Server 1 now has 0 votes and Server 2 has 2, still no majority, so both servers remain in the LOOKING state.

(3) Server 3 starts and initiates an election. Servers 1 and 2 both switch their votes to Server 3. The result: Server 1 has 0 votes, Server 2 has 0 votes, and Server 3 has 3 votes. Server 3 now has a majority and is elected Leader. Servers 1 and 2 change their state to FOLLOWING; Server 3 changes its state to LEADING.

(4) Server 4 starts and initiates an election. Servers 1, 2, and 3 are no longer in the LOOKING state and will not change their votes; the exchange yields 3 votes for Server 3 and 1 vote for Server 4. Server 4 defers to the majority, switches its vote to Server 3, and changes its state to FOLLOWING.

(5) Server 5 starts and, just like Server 4, switches its vote to Server 3 and changes its state to FOLLOWING.

Summary: the voting process depends on each znode's last-modified state (mtime) and the configured server ID. The server with the most recently modified data is preferred as Leader; if that is equal, the server with the larger configured ID wins.
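The tie-breaking rule in the summary, prefer the server with the newest data and break ties with the larger configured ID, can be sketched as a comparison of candidates. This is an illustrative simplification with hypothetical names (`zxid` stands in for "newest data seen"), not ZooKeeper's actual FastLeaderElection code:

```java
public class VoteDemo {
    // Hypothetical candidate: zxid represents how recent the server's data is,
    // id is the number configured in the myid file.
    static final class Candidate {
        final long zxid;
        final int id;
        Candidate(long zxid, int id) { this.zxid = zxid; this.id = id; }
    }

    // Returns the candidate a server should vote for, per the rule in the text:
    // prefer the larger zxid; if equal, prefer the larger server id.
    static Candidate preferred(Candidate a, Candidate b) {
        if (a.zxid != b.zxid) return a.zxid > b.zxid ? a : b;
        return a.id >= b.id ? a : b;
    }

    public static void main(String[] args) {
        Candidate s1 = new Candidate(100, 1);
        Candidate s2 = new Candidate(100, 2);
        Candidate s3 = new Candidate(99, 3);
        System.out.println(preferred(s1, s2).id); // 2: equal zxid, larger id wins
        System.out.println(preferred(s2, s3).id); // 2: newer data wins despite smaller id
    }
}
```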

Node type:
Znodes come in two flavors, persistent and ephemeral, and either kind can additionally be created with a monotonically increasing sequence number appended to its name. (diagram omitted)

Listener principle:
A client registers a watch when it calls getData, getChildren, or exists; when the watched data or child list changes, the server sends a one-time notification event back to the client. (diagram omitted)

Write data process:
A write request received by a Follower is forwarded to the Leader; the Leader broadcasts the write to all servers, and once more than half of them have acknowledged it, the write is committed and the response is returned to the client. (diagram omitted)

2. Local mode installation (using Windows as an example; the Linux procedure is analogous)

1. After ensuring that the JDK has been installed, download the installation package from the official website and extract it to the specified directory.
2. Configuration modification.

Rename zoo_sample.cfg in the conf directory to zoo.cfg.

Open zoo.cfg and change the dataDir path. (You can choose the path yourself; just remember to create the directory you point it at.)
dataDir=../data

3. Operate Zookeeper

On Windows, run zkServer.cmd in the bin directory to start the server, and zkCli.cmd to start the client.

On Linux:
  Start the server: bin/zkServer.sh start
  Check the server status: bin/zkServer.sh status
  Start the client: bin/zkCli.sh

3. Distributed installation (using Linux as an example)

1. Download the compressed package from the official website and extract it to the specified directory.

tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/

2. Synchronize content to other servers.

xsync zookeeper-3.4.10/

3. Configure the server number.

Create a zkData directory under /opt/module/zookeeper-3.4.10/:
mkdir -p zkData

Create a myid file in the zkData directory:
touch myid
vi myid
Add the number corresponding to this server, e.g.: 2

Copy the configured ZooKeeper to the other machines (remember to change the number on each machine so that it stays unique):
xsync myid

4. Configure the zoo.cfg file

mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
Change the data storage path:
dataDir=/opt/module/zookeeper-3.4.10/zkData

Add the following configuration:
#####################cluster############################
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888

The format is server.A=B:C:D, where A is the server number (matching the myid file), B is the server's hostname or IP address, C is the port the server uses to exchange information with the cluster Leader, and D is the port used for communication between servers during leader election.
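Putting the pieces together, a complete zoo.cfg for the three-node cluster above might look like the following. The lines not shown earlier in this section (tickTime, initLimit, syncLimit, clientPort) are the stock values from zoo_sample.cfg and may need adjusting for your environment:

```
# heartbeat interval in milliseconds
tickTime=2000
# ticks a Follower may take to connect and sync with the Leader at startup
initLimit=10
# ticks allowed between a request and an acknowledgement
syncLimit=5
# snapshot/data directory
dataDir=/opt/module/zookeeper-3.4.10/zkData
# client connection port
clientPort=2181
#####################cluster############################
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
```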

4. Shell command operation

(1) Display all operation commands.

help

(2) View the content contained in the current znode.

ls /

(3) View detailed data of the current node.

ls2 /

(4) Create ordinary nodes.

create /ceshi "ceshi"

(5) Obtain the value of the node.

get /ceshi

(6) Create an ephemeral (short-lived) node, deleted automatically when the client session ends.

create -e /ceshi "ceshi"

(7) Create a node with a sequence number. (If no sequential node existed before, the sequence number starts from 0 and increases monotonically. If the parent already contains 2 nodes, numbering continues from 2, and so on.)

create -s /ceshi "ceshi"
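The sequence number ZooKeeper appends is a 10-digit, zero-padded counter maintained by the parent node, so the command above actually produces names like /ceshi0000000002. The naming can be mimicked like so (plain Java, illustration only; `sequentialName` is a hypothetical helper, not a ZooKeeper API):

```java
public class SequentialNameDemo {
    // Mimic how a sequential znode name is formed: the parent's counter,
    // zero-padded to 10 digits, is appended to the requested path.
    static String sequentialName(String path, int counter) {
        return path + String.format("%010d", counter);
    }

    public static void main(String[] args) {
        System.out.println(sequentialName("/ceshi", 0)); // /ceshi0000000000
        System.out.println(sequentialName("/ceshi", 2)); // /ceshi0000000002
    }
}
```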

(8) Modify node data value.

set /ceshi "ceshi"

(9) Watch a node for changes to its value.

get /ceshi watch

(10) Watch a node for changes to its children (path changes).

ls /ceshi watch

(11) Delete node.

delete /ceshi 

(12) Recursively delete nodes.

rmr /ceshi 

(13) Check node status.

stat /ceshi 

5. Code implementation

1. Introduce dependencies

 <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.12</version>
    </dependency>
    
    <!-- ZooKeeper dependency -->
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.7.0</version>
    </dependency>
  </dependencies>

2. Add the log configuration file.
Create a new log4j.properties file in the project resources directory.

log4j.rootLogger=INFO,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

3. Write the ZookeeperTest test class

package com.yzs;

import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.List;

public class ZookeeperTest {

    private String connectString = "127.0.0.1:2181";
    private int sessionTimeout = 2000;
    private ZooKeeper zooKeeper;

    // Connect to the ZooKeeper server
    @Before
    public void connectZookeeper() throws IOException {
        zooKeeper = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            @Override
            public void process(WatchedEvent watchedEvent) {
                // On every event, re-register the watch and print the root's children
                try {
                    List<String> children = zooKeeper.getChildren("/", true);
                    for (String child : children) {
                        System.out.println(child);
                    }
                } catch (KeeperException | InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }

    // Create a child node
    @Test
    public void createNode() throws KeeperException, InterruptedException {
        zooKeeper.create("/sanguo", "sanguo".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }

    // Get the child nodes and watch for changes
    @Test
    public void getChildrenAndWatch() throws KeeperException, InterruptedException {
        List<String> children = zooKeeper.getChildren("/", true);
        for (String child : children) {
            System.out.println(child);
        }
        // Block so that watcher callbacks have a chance to fire
        Thread.sleep(Long.MAX_VALUE);
    }

    // Check whether a node exists
    @Test
    public void exist() throws KeeperException, InterruptedException {
        Stat stat = zooKeeper.exists("/sanguo", false);
        System.out.println(stat == null ? "not exist" : "exist");
    }

    // Get the data stored on a node
    @Test
    public void getNode() throws KeeperException, InterruptedException {
        byte[] data = zooKeeper.getData("/sanguo", false, null);
        System.out.println(new String(data));
    }
}


Origin blog.csdn.net/fish332/article/details/118156605