Zookeeper basic operations

Set up the Zookeeper server

Deployment under Windows

Download address: https://mirrors.cloud.tencent.com/apache/zookeeper/zookeeper-3.7.1/

Modify the configuration file

  • Open the conf directory and copy zoo_sample.cfg to a new file named zoo.cfg
  • Open zoo.cfg, modify the dataDir path, and add a dataLogDir path for logs

dataDir=…/data
dataLogDir=…/log

zoo.cfg configuration file description

 # Basic time unit in Zookeeper's time configuration (milliseconds)
 tickTime=2000
 # Maximum time allowed for a follower to complete its initial connection to the leader,
 # expressed as a multiple of tickTime, i.e. initLimit * tickTime
 initLimit=10
 # Maximum time allowed for a follower to sync data with the leader, as a multiple of tickTime
 syncLimit=5
 # Data directory (also holds the transaction logs if dataLogDir is not specified)
 dataDir=/tmp/zookeeper
 # Port exposed to clients
 clientPort=2181
 # Maximum number of concurrent connections from a single client
 maxClientCnxns=60
 # Number of data snapshots to retain; older snapshots are purged
 autopurge.snapRetainCount=3
 # Interval between automatic purge runs, in hours; 0 (the default) disables automatic purging
 autopurge.purgeInterval=1
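
As a quick sanity check on how the tick-based settings combine, here is a small worked example in plain Java (values taken from the zoo.cfg above):

public class TimeoutMath {
    public static void main(String[] args) {
        int tickTime = 2000;  // ms per tick
        int initLimit = 10;   // ticks allowed for a follower's initial connection to the leader
        int syncLimit = 5;    // ticks allowed for follower/leader data sync

        System.out.println("init timeout = " + initLimit * tickTime + " ms"); // 20000 ms = 20 s
        System.out.println("sync timeout = " + syncLimit * tickTime + " ms"); // 10000 ms = 10 s
    }
}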

Start Zookeeper

Deployment under Linux

Prerequisite: Zookeeper is developed in Java, so you must install and configure a Java environment on the machine before installing Zookeeper!

  • Upload the zookeeper archive
  • Unzip zookeeper
  • The configuration in conf is almost the same as on Windows

# Basic time unit in Zookeeper, in milliseconds. One tick is 2000 ms, and other
# Zookeeper time settings are expressed as multiples of tickTime
tickTime=2000
# Maximum number of heartbeats (ticks) a follower (F) may take to complete its initial connection to the leader (L)
initLimit=10
# Maximum number of heartbeats (ticks) tolerated between request and response when a follower (F) syncs with the leader (L)
syncLimit=5
# Data directory; Zookeeper stores two kinds of data while running:
# snapshots (persistent data) and the transaction log
dataDir=/tmp/zookeeper
# Client access port
clientPort=2181

Configure environment variables
vim /etc/profile

export ZOOKEEPER_PREFIX=/root/software/apache-zookeeper-3.7.1-bin
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin

Execute the following command to make the configuration take effect:

source /etc/profile

Start service

zkServer.sh start

You can see that zkServer has started.
Check the startup status with:

zkServer.sh status

Client connection

zkCli.sh

The root directory contains a built-in /zookeeper child node, which stores Zookeeper's quota management information; do not delete it casually.

Zookeeper command operations

Zookeeper data model
ZooKeeper is a tree directory service. Its data model is very similar to the Unix file system directory tree and has a hierarchical structure.
Each node in Zookeeper is called a ZNode, and each node stores its own data and node metadata.
A node can have child nodes, and a small amount of data (up to 1 MB) can be stored on a node.

Nodes can be divided into four major categories:

  • PERSISTENT: persistent node
  • EPHEMERAL: ephemeral node (-e)
  • PERSISTENT_SEQUENTIAL: persistent sequential node (-s)
  • EPHEMERAL_SEQUENTIAL: ephemeral sequential node (-es)
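
For reference, these four categories correspond to the CreateMode constants of the ZooKeeper Java API (the same constants are used with Curator later in this post):

import org.apache.zookeeper.CreateMode;

public class CreateModes {
    public static void main(String[] args) {
        // The four zkCli flags map onto CreateMode constants in the Java API:
        System.out.println(CreateMode.PERSISTENT);            // default (no flag)
        System.out.println(CreateMode.EPHEMERAL);             // -e
        System.out.println(CreateMode.PERSISTENT_SEQUENTIAL); // -s
        System.out.println(CreateMode.EPHEMERAL_SEQUENTIAL);  // -es
    }
}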

Zookeeper server common commands
• Start the ZooKeeper service

./zkServer.sh start

• View ZooKeeper service status

./zkServer.sh status

• Stop the ZooKeeper service

./zkServer.sh stop 

• Restart the ZooKeeper service

./zkServer.sh restart 

Zookeeper client common commands

Basic CRUD

  • Connect to Zookeeper client
# local connection
zkCli.sh
# remote connection
zkCli.sh -server ip:2181
  • Disconnect
quit
  • View command help
help
  • Display the nodes under the specified directory
# ls <directory>
ls /
  • Create node
# create /node-path value
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] create /app1 yuyang123
Created /app1
[zk: localhost:2181(CONNECTED) 2] ls /
[app1, zookeeper]
[zk: localhost:2181(CONNECTED) 3] create /app2
Created /app2
[zk: localhost:2181(CONNECTED) 4] ls /
[app1, app2, zookeeper]
  • Get node value
# get /node-path
[zk: localhost:2181(CONNECTED) 15] get /app1
yuyang123
[zk: localhost:2181(CONNECTED) 16] get /app2
null
  • Set node value
# set /node-path value
[zk: localhost:2181(CONNECTED) 17] set /app2 yuyang456
[zk: localhost:2181(CONNECTED) 18] get /app2
yuyang456
  • Delete a single node
# delete /node-path
[zk: localhost:2181(CONNECTED) 19] delete /app2
[zk: localhost:2181(CONNECTED) 20] get /app2
Node does not exist: /app2
[zk: localhost:2181(CONNECTED) 21] ls /
[app1, zookeeper]
  • Delete a node with children
# deleteall /node-path
[zk: localhost:2181(CONNECTED) 22] create /app1
Node already exists: /app1
[zk: localhost:2181(CONNECTED) 23] create /app1/p1
Created /app1/p1
[zk: localhost:2181(CONNECTED) 24] create /app1/p2
Created /app1/p2
[zk: localhost:2181(CONNECTED) 25] delete /app1
Node not empty: /app1
[zk: localhost:2181(CONNECTED) 26] deleteall /app1
[zk: localhost:2181(CONNECTED) 27] ls /
[zookeeper]

Create ephemeral & sequential nodes

  • Create ephemeral nodes (-e)
    • Ephemeral nodes are automatically deleted when the session ends.
# create -e /node-path value
[zk: localhost:2181(CONNECTED) 29] create -e /app1 yuyang123
Created /app1
[zk: localhost:2181(CONNECTED) 30] get /app1
yuyang123
[zk: localhost:2181(CONNECTED) 31] quit

# after quitting and reconnecting, the ephemeral node has been deleted
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
  • Create sequential nodes (-s)
    • Nodes created with -s get a monotonically increasing sequence number appended to the node name; the later the creation, the larger the number. This suits application scenarios such as distributed locks that need a monotonically increasing order.
# create -s /node-path value
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] create -s /app2
Created /app20000000003
[zk: localhost:2181(CONNECTED) 2] ls /
[app20000000003, zookeeper]
[zk: localhost:2181(CONNECTED) 3] create -s /app2 
Created /app20000000004
[zk: localhost:2181(CONNECTED) 4] ls /
[app20000000003, app20000000004, zookeeper]
[zk: localhost:2181(CONNECTED) 5] create -s /app2 
Created /app20000000005
[zk: localhost:2181(CONNECTED) 6] ls /
[app20000000003, app20000000004, app20000000005, zookeeper]

# create an ephemeral sequential node
[zk: localhost:2181(CONNECTED) 7] create -es /app3
Created /app30000000006
[zk: localhost:2181(CONNECTED) 8] ls /
[app20000000003, app20000000004, app20000000005, app30000000006, zookeeper]
# quit
[zk: localhost:2181(CONNECTED) 9] quit

# reconnect: the ephemeral sequential node has been deleted
[zk: localhost:2181(CONNECTED) 0] ls /
[app20000000003, app20000000004, app20000000005, zookeeper]
  • Query node details
# ls -s /node-path
[zk: localhost:2181(CONNECTED) 5] ls / -s
[app20000000003, app20000000004, app20000000005, zookeeper]
cZxid = 0x0
ctime = Thu Jan 01 08:00:00 CST 1970
mZxid = 0x0
mtime = Thu Jan 01 08:00:00 CST 1970
pZxid = 0x14
cversion = 10
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 4
  • czxid: zxid of the transaction that created the node
  • ctime: creation time
  • mzxid: zxid of the transaction that last updated the node
  • mtime: last modification time
  • pzxid: zxid of the transaction that last updated the child node list
  • cversion: version number of the node's child list
  • dataversion: data version number
  • aclversion: ACL version number
  • ephemeralOwner: for ephemeral nodes, the session ID of the owning session; 0 for persistent nodes
  • dataLength: length of the data stored in the node
  • numChildren: number of children of the current node
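
For reference, the same fields are exposed as getters on org.apache.zookeeper.data.Stat in the Java API (a Stat is returned by exists() or filled in by the Curator storingStatIn() call shown later). A minimal sketch; the helper name dump is illustrative:

import org.apache.zookeeper.data.Stat;

public class StatFields {
    // Prints the node metadata corresponding to the `ls -s` output above.
    static void dump(Stat stat) {
        System.out.println("cZxid          = 0x" + Long.toHexString(stat.getCzxid()));
        System.out.println("mZxid          = 0x" + Long.toHexString(stat.getMzxid()));
        System.out.println("pZxid          = 0x" + Long.toHexString(stat.getPzxid()));
        System.out.println("cversion       = " + stat.getCversion());
        System.out.println("dataVersion    = " + stat.getVersion());
        System.out.println("aclVersion     = " + stat.getAversion());
        System.out.println("ephemeralOwner = 0x" + Long.toHexString(stat.getEphemeralOwner()));
        System.out.println("dataLength     = " + stat.getDataLength());
        System.out.println("numChildren    = " + stat.getNumChildren());
    }
}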

Zookeeper Java API operations

Introduction to Curator
Curator is an open-source ZooKeeper client framework from Netflix and currently the most complete Zookeeper client framework available. Curator encapsulates most of Zookeeper's common functionality, such as leader election and distributed locks, reducing the development effort required to use Zookeeper.

The Curator framework mainly solves three types of problems:

  • Encapsulates the connection processing between ZooKeeper Client and ZooKeeper Server (providing connection retry mechanism, etc.).
  • Provides a Fluent-style API that enhances the Java client's native API (creating multi-level nodes, deleting multi-level nodes, etc.; see the sketch after this list).
  • Provides abstract encapsulation of various ZooKeeper application scenarios (distributed locks, leader election, shared counters, distributed queues, etc.).
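
As an illustration of those Fluent-API enhancements, here is a minimal sketch of multi-level create and delete; it assumes a started CuratorFramework named client (see "Establish a connection" below), and the /a/b/c path is only an example:

// A sketch of the "multi-level" Fluent-API enhancements; assumes a started
// CuratorFramework named `client`. The /a/b/c path is illustrative.
public void multiLevelDemo(CuratorFramework client) throws Exception {
    // Multi-level create: missing parents /a and /a/b are created automatically
    client.create().creatingParentsIfNeeded().forPath("/a/b/c", "data".getBytes());

    // Multi-level delete: children are deleted first, then the node itself
    client.delete().deletingChildrenIfNeeded().forPath("/a/b/c");
}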

Adding the Curator dependencies

<!--curator-->
        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-framework</artifactId>
            <version>4.0.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-recipes</artifactId>
            <version>4.0.0</version>
        </dependency>
        <!-- logging -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.21</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.21</version>
        </dependency>

Establish a connection

Way 1

public class CuratorTest {

    /**
     * Establish a connection
     */
    @Test
    public void testConnect(){

        /**
         * String connectString     connection string: zk address(es) and port, e.g. "192.168.58.100:2181,192.168.58.101:2181"
         * int sessionTimeoutMs     session timeout, in ms
         * int connectionTimeoutMs  connection timeout, in ms
         * RetryPolicy retryPolicy  retry policy
         */
        //1. First way

        // Retry policy: baseSleepTimeMs is the initial wait between retries, maxRetries the maximum number of retries
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(3000, 10);

        CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.58.100:2181", 60 * 1000,
                15 * 1000, retryPolicy);

        // Open the connection
        client.start();
    }
}

Retry strategies

  • RetryNTimes: retries up to a fixed number of times, sleeping between attempts
  • RetryOneTime: retries exactly once; generally not commonly used
  • ExponentialBackoffRetry: retries with an exponentially growing sleep between attempts (used above); constructor sketches follow
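
The constructor signatures of these policies look as follows (a minimal sketch; the sleep values are illustrative):

import org.apache.curator.RetryPolicy;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.retry.RetryNTimes;
import org.apache.curator.retry.RetryOneTime;

public class RetryPolicies {
    public static void main(String[] args) {
        // Retry up to 3 times, sleeping 1000 ms between attempts
        RetryPolicy nTimes = new RetryNTimes(3, 1000);

        // Retry exactly once after sleeping 1000 ms
        RetryPolicy oneTime = new RetryOneTime(1000);

        // Retry up to 10 times; the sleep grows roughly exponentially from a 3000 ms base
        RetryPolicy backoff = new ExponentialBackoffRetry(3000, 10);
    }
}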

Way 2

public class CuratorTest {

    private CuratorFramework client;

    /**
     * Establish a connection
     */
    @Test
    public void testConnect(){

        /**
         * String connectString     connection string: zk address(es) and port, e.g. "192.168.58.100:2181,192.168.58.101:2181"
         * int sessionTimeoutMs     session timeout, in ms
         * int connectionTimeoutMs  connection timeout, in ms
         * RetryPolicy retryPolicy  retry policy
         */
        //1. First way

        // Retry policy: baseSleepTimeMs is the initial wait between retries, maxRetries the maximum number of retries
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(3000, 10);

//      client = CuratorFrameworkFactory.newClient("192.168.58.100:2181", 60 * 1000,
//                15 * 1000, retryPolicy);

        //2. Second way: builder style
        client = CuratorFrameworkFactory.builder()
                .connectString("192.168.58.100:2181")
                .sessionTimeoutMs(60 * 1000)
                .connectionTimeoutMs(15 * 1000)
                .retryPolicy(retryPolicy)
                .namespace("yuyang")  // sets the root node (namespace) for this client
                .build();

        // Open the connection
        client.start();
    }
}
  • Add node
    Change the annotation on testConnect from @Test to @Before, so the connection is established before each test:
 /**
     * Establish a connection
     */
    @Before
    public void testConnect()

Create nodes: create supports persistent, ephemeral, and sequential nodes, with or without data

public class CuratorTest {

    /**
     * Create nodes: persistent, ephemeral, sequential, with data
     */
    //1. Create a node
    @Test
    public void testCreate1() throws Exception {

        // If no data is specified when creating the node, the current client's IP is stored as the data by default
        String path = client.create().forPath("/app1");
        System.out.println(path);
    }

    @After
    public void close(){
        client.close();
    }
}

    //2. Create a node with data
    @Test
    public void testCreate2() throws Exception {
        String path = client.create().forPath("/app2", "hehe".getBytes());
        System.out.println(path);
    }

    //3. Set the node type (persistent by default)
    @Test
    public void testCreate3() throws Exception {
        // Create an ephemeral node
        String path = client.create().withMode(CreateMode.EPHEMERAL).forPath("/app3");
        System.out.println(path);
    }
    
//1. Read node data: getData()
@Test
public void testGet1() throws Exception {
    byte[] data = client.getData().forPath("/app1");
    System.out.println(new String(data));
}

//2. List child nodes: getChildren()
@Test
public void testGet2() throws Exception {
    List<String> path = client.getChildren().forPath("/");
    System.out.println(path);
}

//3. Read node status information
@Test
public void testGet3() throws Exception {
    Stat status = new Stat();
    System.out.println(status);
    // Read the node's status information, like `ls -s`
    client.getData().storingStatIn(status).forPath("/app1");
    System.out.println(status);
}
  • Modify node
    //1. Basic data update
    @Test
    public void testSet() throws Exception {
        client.setData().forPath("/app1", "hahaha".getBytes());
    }

    // Update by version (optimistic locking)
    @Test
    public void testSetVersion() throws Exception {
        // Query the current version
        Stat status = new Stat();
        // Read the node's status information, like `ls -s`
        client.getData().storingStatIn(status).forPath("/app1");
        int version = status.getVersion();
        System.out.println(version);  // e.g. 2

        client.setData().withVersion(version).forPath("/app1", "hehe".getBytes());
    }
  • Delete node
    //1. Delete a single node
    @Test
    public void testDelete1() throws Exception {
        client.delete().forPath("/app4");
    }

    // Delete a node together with its children
    @Test
    public void testDelete2() throws Exception {
        client.delete().deletingChildrenIfNeeded().forPath("/app4");
    }

    // Guaranteed delete: keeps retrying in the background (e.g. after a timeout) until it succeeds
    @Test
    public void testDelete3() throws Exception {
        client.delete().guaranteed().forPath("/app2");
    }

    // Callback executed after the delete completes
    @Test
    public void testDelete4() throws Exception {
        client.delete().guaranteed().inBackground((curatorFramework, curatorEvent) -> {
            System.out.println("I was deleted");
            System.out.println(curatorEvent);
        }).forPath("/app1");
    }

Watch event monitoring

ZooKeeper allows users to register Watchers on designated nodes; when certain events are triggered, the ZooKeeper server notifies the clients that registered interest. This mechanism is an important feature of ZooKeeper's implementation of distributed coordination services.

The Watcher mechanism is introduced in ZooKeeper to implement the publish/subscribe function, which allows multiple subscribers to monitor an object at the same time. When an object's own status changes, all subscribers will be notified.
Using watches from the zkCli client
Adding the -w parameter lets you monitor node and child-node changes and receive notifications in real time, which is well suited to keeping data consistent in a distributed setting.

Its usage is as follows

Command          Description
ls -w path       Watch for child node changes (add/delete) [watches the directory]
get -w path      Watch for node data changes
stat -w path     Watch for node property changes

Zookeeper event types

  • NodeCreated: node created
  • NodeDeleted: node deleted
  • NodeDataChanged: node data changed
  • NodeChildrenChanged: child node list changed
  • DataWatchRemoved: data watch removed
  • ChildWatchRemoved: child watch removed

1) get -w path: watch for node data changes
2) ls -w /path: watch for child node changes (add/delete) [watches the directory]
3) ls -R -w /path: like 2), but watches the whole subtree recursively

Using watches from the Curator client

ZooKeeper natively supports event monitoring by registering Watchers, but its use is not particularly convenient and requires developers to register Watchers repeatedly, which is cumbersome.

Curator introduces Cache to monitor ZooKeeper server events.

Curator provides three cache-based watcher types:

  • NodeCache: watches a single, specific node
  • PathChildrenCache: watches the child nodes of a ZNode
  • TreeCache: watches all nodes in an entire subtree, roughly a combination of PathChildrenCache and NodeCache

1) Watching with NodeCache

public class CuratorWatchTest {

    /**
     * NodeCache demo: register a listener on one specific node
     */
    @Test
    public void testNodeCache() throws Exception {
        //1. Create the NodeCache object
        // (with the "yuyang" namespace set on the client, this watches /yuyang/app1)
        NodeCache nodeCache = new NodeCache(client, "/app1");

        //2. Register the listener
        nodeCache.getListenable().addListener(new NodeCacheListener() {
            @Override
            public void nodeChanged() throws Exception {
                System.out.println("node changed......");
                // Read the node data after the change
                byte[] data = nodeCache.getCurrentData().getData();
                System.out.println(new String(data));
            }
        });
        //3. Pass true to build the initial cache and start listening
        nodeCache.start(true);

        // Block so the test keeps running and the listener stays alive
        while (true) {
        }
    }
}

2) Watching with PathChildrenCache

    /**
     * PathChildrenCache demo: watch all child nodes of a node
     */
    @Test
    public void testPathChildrenCache() throws Exception {

        //1. Create the cache object (the third argument caches the node data on each update)
        PathChildrenCache pathChildrenCache = new PathChildrenCache(client, "/app2", true);

        //2. Bind the listener
        pathChildrenCache.getListenable().addListener(new PathChildrenCacheListener() {
            @Override
            public void childEvent(CuratorFramework curatorFramework, PathChildrenCacheEvent pathChildrenCacheEvent) throws Exception {
                System.out.println("a child node changed......");
                System.out.println(pathChildrenCacheEvent);

                if (PathChildrenCacheEvent.Type.CHILD_UPDATED == pathChildrenCacheEvent.getType()) {
                    // A child node was updated
                    System.out.println("child node updated!");
                    // The event carries more than just the data; extract only the data part
                    byte[] data = pathChildrenCacheEvent.getData().getData();
                    System.out.println("new value: " + new String(data));

                } else if (PathChildrenCacheEvent.Type.CHILD_ADDED == pathChildrenCacheEvent.getType()) {
                    // A child node was added
                    System.out.println("child node added!");
                    String path = pathChildrenCacheEvent.getData().getPath();
                    System.out.println("child node path: " + path);

                } else if (PathChildrenCacheEvent.Type.CHILD_REMOVED == pathChildrenCacheEvent.getType()) {
                    // A child node was removed
                    System.out.println("child node removed");
                    String path = pathChildrenCacheEvent.getData().getPath();
                    System.out.println("child node path: " + path);
                }
            }
        });

        //3. Start the cache
        pathChildrenCache.start();

        // Block so the listener stays alive
        while (true) {
        }
    }

Event object information analysis

PathChildrenCacheEvent{
    type=CHILD_UPDATED,
    data=ChildData{
        path='/app2/m1',
        stat=164,166,1670114647087,1670114698259,1,0,0,0,3,0,164,
        data=[49, 50, 51]
    }
}

3) Watching with TreeCache

TreeCache is effectively the combination of NodeCache (which watches only the node itself) and PathChildrenCache (which watches only the children): it watches both the node and its entire subtree.

    /**
     * TreeCache demo: watch a node and all of its descendants
     */
    @Test
    public void testCache() throws Exception {

        //1. Create the cache object
        TreeCache treeCache = new TreeCache(client, "/app2");

        //2. Bind the listener
        treeCache.getListenable().addListener(new TreeCacheListener() {
            @Override
            public void childEvent(CuratorFramework curatorFramework, TreeCacheEvent treeCacheEvent) throws Exception {
                System.out.println("a node changed");
                System.out.println(treeCacheEvent);

                if (TreeCacheEvent.Type.NODE_UPDATED == treeCacheEvent.getType()) {
                    // A node was updated
                    System.out.println("node updated!");
                    // The event carries more than just the data; extract only the data part
                    byte[] data = treeCacheEvent.getData().getData();
                    System.out.println("new value: " + new String(data));

                } else if (TreeCacheEvent.Type.NODE_ADDED == treeCacheEvent.getType()) {
                    // A node was added
                    System.out.println("node added!");
                    String path = treeCacheEvent.getData().getPath();
                    System.out.println("node path: " + path);

                } else if (TreeCacheEvent.Type.NODE_REMOVED == treeCacheEvent.getType()) {
                    // A node was removed
                    System.out.println("node removed");
                    String path = treeCacheEvent.getData().getPath();
                    System.out.println("removed node path: " + path);
                }
            }
        });

        //3. Start the cache
        treeCache.start();

        // Block so the listener stays alive
        while (true) {
        }
    }

One-shot monitoring: Watcher
Nodes can also be monitored with a raw Watcher. It may be worth considering for specific business scenarios, but it is generally not recommended, because each registration fires only once.

public class CuratorWatchTest {

    private CuratorFramework client;

    /**
     * Establish a connection
     */
    @Before
    public void testConnect(){

        /**
         * String connectString     connection string: zk address and port, e.g. "192.168.58.100:2181"
         * int sessionTimeoutMs     session timeout
         * int connectionTimeoutMs  connection timeout
         * RetryPolicy retryPolicy  retry policy
         */
        //1. Retry policy
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(3000, 10);

        //2. Builder-style creation
        client = CuratorFrameworkFactory.builder()
                .connectString("192.168.58.100:2181")
                .sessionTimeoutMs(60 * 1000)
                .connectionTimeoutMs(15 * 1000)
                .retryPolicy(retryPolicy)
                .namespace("yuyang")  // root directory for nodes created by this program
                .build();

        client.start();
    }

    /**
     * One-shot watch demo
     */
    @Test
    public void testOneListener() throws Exception {

        byte[] data = client.getData().usingWatcher(new Watcher() {
            @Override
            public void process(WatchedEvent watchedEvent) {
                System.out.println("watcher event: " + watchedEvent);
            }
        }).forPath("/test");

        System.out.println("watched node data: " + new String(data));

        // Block so the watcher stays alive
        while (true) {
        }
    }

    @After
    public void close(){
        client.close();
    }
}

The code above registers a Watcher on the /test node and returns the node's current data. If the data is then changed twice, only the first change triggers the watcher: by the time of the second change the watch has already fired and expired, so no further node-change events are delivered.
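
Since a native watch is one-shot, the usual workaround is to re-register the Watcher inside its own callback. A minimal sketch, assuming the client set up above and an existing /test node; the helper name watchForever is illustrative:

    // Keep watching across events by re-registering inside the callback.
    private void watchForever(String path) throws Exception {
        client.getData().usingWatcher(new Watcher() {
            @Override
            public void process(WatchedEvent watchedEvent) {
                System.out.println("watcher event: " + watchedEvent);
                try {
                    watchForever(path); // native watches are one-shot, so register again
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).forPath(path);
    }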


Transaction & asynchronous operations

CuratorFramework supports ZooKeeper transactions: individual create, setData, check, and/or delete operations can be combined and then committed with a single call as one atomic operation. (Older Curator versions exposed this through an inTransaction() method; the code below uses the newer transactionOp() / transaction() API.)

    /**
     * Transaction demo
     */
    @Test
    public void TestTransaction() throws Exception {

        //1. Build CuratorOp objects describing the operations in the transaction
        CuratorOp createOp = client.transactionOp().create().forPath("/app3", "app1-data".getBytes());
        CuratorOp setDataOp = client.transactionOp().setData().forPath("/app2", "app2-data".getBytes());
        CuratorOp deleteOp = client.transactionOp().delete().forPath("/app2");

        //2. Commit the operations as a single atomic transaction
        Collection<CuratorTransactionResult> results = client.transaction().forOperations(createOp, setDataOp, deleteOp);

        //3. Iterate over the per-operation results
        for (CuratorTransactionResult result : results) {
            System.out.println(result.getForPath() + " - " + result.getType());
        }
    }

Asynchronous operations

The create, read, update, and delete calls shown so far are all synchronous, but Curator also provides an asynchronous interface, introducing the BackgroundCallback interface to process the result information the server returns after an asynchronous call completes.

The important value handed to BackgroundCallback is a CuratorEvent, which contains the event type, the response code, and the node details.

    // Asynchronous operation demo
    @Test
    public void TestAsync() throws Exception {
        while (true) {
            // Fetch the child node list asynchronously
            GetChildrenBuilder builder = client.getChildren();
            builder.inBackground(new BackgroundCallback() {
                @Override
                public void processResult(CuratorFramework curatorFramework, CuratorEvent curatorEvent) throws Exception {
                    System.out.println("child node list: " + curatorEvent.getChildren());
                }
            }).forPath("/");
            TimeUnit.SECONDS.sleep(5);
        }
    }

Zookeeper permission control

Introduction to zk permission control

As a distributed coordination framework, Zookeeper internally stores data about the running state of a distributed system, such as master-election and distributed-lock state. Operations on this data directly affect the behavior of the distributed system, so to keep the data secure and guard against misoperation, Zookeeper provides an ACL permission-control mechanism.

ACL permission control uses entries of the form scheme:id:perm:

  • Scheme (permission mode): identifies the authorization strategy
  • ID (authorization object): who the permission is granted to
  • Permission: the permissions granted

ZooKeeper's permission control is applied per znode: permissions must be set on each node individually, and every znode supports multiple permission schemes and multiple permissions. Child nodes do not inherit the parent node's permissions; a client may have no access to a given node and yet still be able to access its children.

Scheme permission mode

Zookeeper provides the following permission modes; a permission mode is simply the method used for authorization.

  • world: the default mode; effectively everyone can access.

  • auth : represents an authenticated user.

    In the cli, an authorized user can be added to the current context with: addauth digest user:pwd

  • digest : authentication via username:password, the mode most commonly used in business systems.

    The username:password string is digested, and the result is used as the ACL ID. Authentication is performed by sending username:password in clear text. In an ACL, the entry is expressed as username:base64, where base64 is the Base64 encoding of the SHA-1 digest of the username:password string.

  • ip : permission control by IP address.

    For example, ip:192.168.1.1 restricts access to that specific address. A network segment can also be targeted, e.g. ip:192.168.1.1/24; in that case the significant bits of the ACL address are compared with the significant bits of the client address.

ID authorization object

The user or entity to whom the permission is granted. The authorization object differs depending on the permission mode.

Id ipId = new Id("ip", "192.168.58.100");
Id ANYONE_ID_UNSAFE = new Id("world", "anyone");

Permission: permission types

The operations allowed once the permission check passes: create / delete / read / write / admin

  • Create: allows creating child nodes
  • Read: allows GetChildren and GetData on this node
  • Write: allows SetData on this node
  • Delete: allows deleting child nodes
  • Admin: allows setAcl on this node

The permission mode (scheme) and authorization object together determine the verification strategy used during the permission check, for example an IP address or digest:username:password. Once the strategy matches and verification succeeds, the client's access rights are determined by the permission type. A Java sketch of building such an ACL follows.
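
For illustration, the same scheme:id:perm triple can be built in the Java API with org.apache.zookeeper.data.Id and ACL and attached when creating a node through Curator. A minimal sketch; the path, IP, and permission choice are illustrative, and client is assumed to be a started CuratorFramework:

import java.util.Collections;
import java.util.List;

import org.apache.curator.framework.CuratorFramework;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class AclDemo {
    // Creates a node that is readable/writable only from the given IP address.
    static void createWithIpAcl(CuratorFramework client) throws Exception {
        Id ipId = new Id("ip", "192.168.58.100");
        List<ACL> acls = Collections.singletonList(
                new ACL(ZooDefs.Perms.READ | ZooDefs.Perms.WRITE, ipId)); // rw only
        client.create().withACL(acls).forPath("/acl-demo", "data".getBytes());
    }
}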

Operating from the console

Zookeeper provides the following ACL-related commands:

getAcl       getAcl <path>               read a node's ACL
setAcl       setAcl <path> <acl>         set a node's ACL
addauth      addauth <scheme> <auth>     add an authenticated user

1) World mode
Newly created nodes default to world mode

[zk: localhost:2181(CONNECTED) 6] create /auth
Created /auth

[zk: localhost:2181(CONNECTED) 7] getAcl /auth
'world,'anyone
: cdrwa

[zk: localhost:2181(CONNECTED) 8] create /auth2
Created /auth2

[zk: localhost:2181(CONNECTED) 9] getAcl /auth2
'world,'anyone
: cdrwa

[zk: localhost:2181(CONNECTED) 10] 

Here cdrwa corresponds to create, delete, read, write, admin.
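
In the Java API these five flags are bit constants in org.apache.zookeeper.ZooDefs.Perms; a quick check:

import org.apache.zookeeper.ZooDefs.Perms;

public class PermBits {
    public static void main(String[] args) {
        // cdrwa = CREATE | DELETE | READ | WRITE | ADMIN, i.e. Perms.ALL
        int cdrwa = Perms.CREATE | Perms.DELETE | Perms.READ | Perms.WRITE | Perms.ADMIN;
        System.out.println(cdrwa == Perms.ALL); // true
    }
}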

2) IP mode

When using ip mode, connect to zkServer with an explicit server address:

zkCli.sh -server 127.0.0.1:2181 

Then set an IP-based ACL as follows

[zk: 127.0.0.1:2181(CONNECTED) 0] create /ip-model
Created /ip-model

[zk: 127.0.0.1:2181(CONNECTED) 1] setAcl /ip-model ip:127.0.0.1:cdrwa

[zk: 127.0.0.1:2181(CONNECTED) 3] getAcl /ip-model
'ip,'127.0.0.1
: cdrwa

3) Auth mode

The operation of auth mode is as follows.

[zk: 127.0.0.1:2181(CONNECTED) 5] create /spike
Created /spike

[zk: 127.0.0.1:2181(CONNECTED) 6] addauth digest spike:123456

[zk: 127.0.0.1:2181(CONNECTED) 9] setAcl /spike auth:spike:cdrwa

[zk: 127.0.0.1:2181(CONNECTED) 10] getAcl /spike
'digest,'spike:pPeKgz2N9Xc8Um6wwnzFUMteLxk=
: cdrwa

After logging out of the current session, reconnecting, and running the following command, a no-permission error is reported

[zk: localhost:2181(CONNECTED) 0] get /spike
Insufficient permission : /spike 

At this time, we need to re-authorize.

[zk: localhost:2181(CONNECTED) 1] addauth digest spike:123456
[zk: localhost:2181(CONNECTED) 2] get /spike
null 

4) Digest mode

The syntax looks the same as in auth mode

setAcl /digest digest:username:password:permissions

The difference is that here the password must be the encrypted (digested) string; otherwise it will not be recognized.

Password: the digest of the username:password string.

Use the following program to generate the digest

public class TestAcl {

    @Test
    public void createPw() throws NoSuchAlgorithmException {
        String up = "yuyang:yuyang";
        // SHA-1 digest of "username:password", then Base64-encode the result
        byte[] digest = MessageDigest.getInstance("SHA1").digest(up.getBytes());
        String encodeStr = Base64.getEncoder().encodeToString(digest);
        System.out.println(encodeStr);
    }
}

Output: 5FAC7McRhLdx0QUWsfEbK8pqwxc=
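
As an aside, the ZooKeeper jar ships a helper that produces the same username:digest string, so the manual SHA-1/Base64 step can be skipped; a minimal sketch:

import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;

public class DigestHelper {
    public static void main(String[] args) throws Exception {
        // Prints "yuyang:5FAC7McRhLdx0QUWsfEbK8pqwxc=" — the username plus the
        // same Base64(SHA-1) digest computed manually above
        System.out.println(DigestAuthenticationProvider.generateDigest("yuyang:yuyang"));
    }
}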

Go back to the client and perform the following operations

[zk: localhost:2181(CONNECTED) 14] create /digest
Created /digest

[zk: localhost:2181(CONNECTED) 15] setAcl /digest digest:yuyang:5FAC7McRhLdx0QUWsfEbK8pqwxc=:cdrwa

[zk: localhost:2181(CONNECTED) 16] getAcl /digest
'digest,'yuyang:5FAC7McRhLdx0QUWsfEbK8pqwxc=: cdrwa

After exiting the current session, authentication is required again to access the /digest node

[zk: localhost:2181(CONNECTED) 0] get /digest
Insufficient permission : /digest

[zk: localhost:2181(CONNECTED) 1] addauth digest yuyang:yuyang

[zk: localhost:2181(CONNECTED) 2] get /digest
null
