Learning ZooKeeper from 0 to 1, Chapter 4: ZooKeeper in Practice

1. Client Command-Line Operations

Command syntax | Description
help | show all available commands
ls path [watch] | list the children contained in the given znode
ls2 path [watch] | show a node's children together with its status data (update counts, etc.)
create | create a node; -s makes it sequential, -e makes it ephemeral (removed when the session closes or times out)
get path [watch] | get the data of a node
set | set the data of a node
stat | show the status of a node
delete | delete a node
rmr | delete a node and its children recursively
  1. Start the client

    [dwjf321@hadoop103 zookeeper-3.4.10]$ bin/zkCli.sh
    
  2. Show all commands

    [zk: localhost:2181(CONNECTED) 1] help
    
  3. List the children of the current znode

    [zk: localhost:2181(CONNECTED) 0] ls /
    
  4. View detailed data of the current node

    [zk: localhost:2181(CONNECTED) 1] ls2 /
    
  5. Create two normal (persistent) nodes

    [zk: localhost:2181(CONNECTED) 3] create /kafka ""
    [zk: localhost:2181(CONNECTED) 3] create /kafka/config "config"
    
  6. Get the value of a node

    [zk: localhost:2181(CONNECTED) 5] get /kafka
    
  7. Create an ephemeral node

    [zk: localhost:2181(CONNECTED) 7] create -e /kafka/node1 "192.168.1.1"
    
    1. The node is visible in the current client session

      [zk: localhost:2181(CONNECTED) 3] ls /kafka
      [config,node1]
      
    2. Quit the client and check again

      [zk: localhost:2181(CONNECTED) 3] quit
      [dwjf321@hadoop103 zookeeper-3.4.10]$ bin/zkCli.sh
      [zk: localhost:2181(CONNECTED) 3] ls /kafka
      [config]
      
  8. Create sequential nodes

    1. First create a normal parent node

      [zk: localhost:2181(CONNECTED) 3] create /dubbo "dubbo"
      Created /dubbo
      
    2. Create the sequential nodes

      [zk: localhost:2181(CONNECTED) 3] create -s /dubbo/node "192.168.1.1"
      Created /dubbo/node0000000000
      [zk: localhost:2181(CONNECTED) 3] create -s /dubbo/node "192.168.1.2"
      Created /dubbo/node0000000001
      [zk: localhost:2181(CONNECTED) 3] create -s /dubbo/node "192.168.1.3"
      Created /dubbo/node0000000002
      

      If there are no sequential nodes yet, numbering starts at 0 and increases by one for each new node. If the parent already has 2 nodes, the next sequence number starts at 2, and so on (see the Java sketch after this list).

  9. Modify a node's data

    [zk: localhost:2181(CONNECTED) 3] set /kafka/node1 "192.168.1.2"
    
  10. Watch a node's data for changes

    1. On host hadoop104, register a watch for data changes on /kafka/node1

      [zk: localhost:2181(CONNECTED) 8] get /kafka/node1 watch
      
    2. On host hadoop103, modify the data of /kafka/node1

      [zk: localhost:2181(CONNECTED) 3] set /kafka/node1 "192.168.1.3"
      
    3. Observe that hadoop104 receives the data-change notification

      WATCHER::
      WatchedEvent state:SyncConnected type:NodeDataChanged path:/kafka/node1
      
  11. Watch a node's children for changes (path changes)

    1. On host hadoop104, register a watch for child changes on /kafka

      [zk: localhost:2181(CONNECTED) 1] ls /kafka watch
      [node1]
      
    2. On host hadoop103, create a child node under /kafka

      [zk: localhost:2181(CONNECTED) 7] create /kafka/node2 "192.168.1.4"
      
    3. Observe that hadoop104 receives the child-change notification

      WATCHER::
      WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/kafka
      
  12. Delete a node

    [zk: localhost:2181(CONNECTED) 4] delete /kafka/node1
    
  13. Delete a node recursively

    [zk: localhost:2181(CONNECTED) 4] rmr /kafka
    
  14. View node status

    [zk: localhost:2181(CONNECTED) 17] stat /dubbo
    
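The same sequence numbering can also be observed from the Java API introduced in the next section. A minimal sketch, assuming an already-connected ZooKeeper handle and an existing /dubbo parent node (the class and method names here are illustrative, not from the original):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class SequentialCreateSketch {

        // Assumes 'zk' is already connected and /dubbo already exists.
        static void createSequentialNodes(ZooKeeper zk) throws Exception {
            // create() returns the actual path, including the 10-digit sequence suffix
            String first = zk.create("/dubbo/node", "192.168.1.1".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
            String second = zk.create("/dubbo/node", "192.168.1.2".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
            System.out.println(first);  // e.g. /dubbo/node0000000000
            System.out.println(second); // e.g. /dubbo/node0000000001
        }
    }
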

2. Operating ZooKeeper with the Java Client

  1. Add pom.xml dependencies

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.8.2</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.4.10</version>
        </dependency>
    </dependencies>
    
  2. Copy the log4j.properties file into the project

    Create a new file named "log4j.properties" under the project's src/main/resources directory and fill it with the following:

    log4j.rootLogger=INFO, stdout  
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender  
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout  
    log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n  
    log4j.appender.logfile=org.apache.log4j.FileAppender  
    log4j.appender.logfile.File=target/spring.log  
    log4j.appender.logfile.layout=org.apache.log4j.PatternLayout  
    log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
    

2.1 Creating the ZooKeeper Client

private static String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
private static int sessionTimeout = 2000;
private ZooKeeper zkClient = null;

public void init() throws Exception {
    zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {

        @Override
        public void process(WatchedEvent event) {
            // Callback invoked when an event notification arrives (user business logic)
            System.out.println(event.getType() + "--" + event.getPath());

            // Watches are one-shot, so register the watch again
            try {
                zkClient.getChildren("/", true);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}

2.2 Creating a Child Node

public void create() throws Exception {
    // Arg 1: path of the node to create; arg 2: node data; arg 3: node ACL; arg 4: node type
    String nodeCreated = zkClient.create("/kafka", "kafka".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}

2.3 Getting Child Nodes and Watching for Changes

public void getChildren() throws Exception {
    List<String> children = zkClient.getChildren("/", true);

    for (String child : children) {
        System.out.println(child);
    }

    // Block here so the process stays alive to receive watch notifications
    Thread.sleep(Long.MAX_VALUE);
}

2.4 Checking Whether a Znode Exists

public void exist() throws Exception {
    Stat stat = zkClient.exists("/eclipse", false);

    System.out.println(stat == null ? "not exist" : "exist");
}
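
Taken together, the methods in 2.1 through 2.4 can be exercised end to end. A minimal, self-contained sketch of that flow (the class name ZkClientDemo, the lambda watcher, and the exact call order are assumptions for illustration, not part of the original code):

import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkClientDemo {

    public static void main(String[] args) throws Exception {
        String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
        CountDownLatch connected = new CountDownLatch(1);

        // Connect with a watcher that logs events and releases the latch once the
        // session reaches SyncConnected (same idea as the CountDownLatch in 2.5.2).
        ZooKeeper zk = new ZooKeeper(connectString, 2000, event -> {
            System.out.println(event.getType() + "--" + event.getPath());
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // 2.4-style existence check, to avoid a NodeExistsException on re-runs
        Stat stat = zk.exists("/kafka", false);
        if (stat == null) {
            // 2.2-style create: path, data, ACL, node type
            zk.create("/kafka", "kafka".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // 2.3-style listing of the root's children (no watch set here)
        List<String> children = zk.getChildren("/", false);
        for (String child : children) {
            System.out.println(child);
        }

        zk.close();
    }
}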

2.5 Implementing a Distributed Lock

2.5.1 How the Distributed Lock Works

  1. The client connects to ZooKeeper and creates the lock's root node.
  2. Create an ephemeral sequential node under the root node.
  3. Get all children of the root node.
  4. Check whether the smallest child is the ephemeral sequential node created in step 2.
  5. If the smallest node is the current node, the lock has been acquired; otherwise, wait.
  6. Find the node that immediately precedes the current node, check that it exists, and set a watch on it.
  7. The current thread waits.
  8. When the watch reports that the preceding node is gone, the current thread wakes up and acquires the lock.
  9. When the work is done, delete the ephemeral node held by the current thread to release the lock.

2.5.2 Distributed Lock Implementation

package com.zk.lock;

import java.io.IOException;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.Watcher.Event;
import org.apache.zookeeper.data.Stat;

public class DistributeLock implements Lock, Watcher {

    private ZooKeeper zk = null;
    private String ROOT_LOCK = "/locks"; // root node for the locks
    private String WAIT_LOCK;            // the node just before ours, the one we wait on
    private String CURRENT_LOCK;         // the node representing this lock attempt
    String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";

    private CountDownLatch latch;

    public DistributeLock() {
        Stat stat = null;
        latch = new CountDownLatch(1);
        try {
            zk = new ZooKeeper(connectString, 4000, this);
            latch.await();
            // check whether the lock root node already exists
            stat = zk.exists(ROOT_LOCK, false);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (stat == null) {
                try {
                    zk.create(ROOT_LOCK, "0".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                } catch (KeeperException | InterruptedException e) {
                    System.out.println(Thread.currentThread().getName() + " failed to create " + ROOT_LOCK);
                    //e.printStackTrace();
                }
            }
        }
    }

    @Override
    public void process(WatchedEvent event) {
        // Any event in the SyncConnected state (the initial connection, or the watched
        // predecessor node being deleted) releases whatever latch is currently waiting.
        if (Event.KeeperState.SyncConnected == event.getState()) {
            if (latch != null) {
                latch.countDown();
                latch = null;
            }
        }
        if (latch != null) {
            latch.countDown();
        }
    }

    @Override
    public void lock() {
        if (this.tryLock()) {
            System.out.println(Thread.currentThread().getName() + "->" + "acquired the lock");
            return;
        }
        // lock not acquired, wait for the predecessor to release it
        waitForLock(WAIT_LOCK);
    }

    private void waitForLock(String prev) {
        try {
            // set a watch on the node just before ours
            Stat stat = zk.exists(prev, true);
            if (stat != null) {
                System.out.println(Thread.currentThread().getName() + "->waiting for " + prev + " to be released");
                latch = new CountDownLatch(1);
                latch.await();
                System.out.println(Thread.currentThread().getName() + "->acquired the lock");
            }
        } catch (KeeperException | InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        // TODO Auto-generated method stub
    }

    @Override
    public boolean tryLock() {
        // create an ephemeral sequential node
        try {
            CURRENT_LOCK = zk.create(ROOT_LOCK + "/", "0".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            System.out.println(Thread.currentThread().getName() + "->" + CURRENT_LOCK + " competing for the lock");

            List<String> childrens = zk.getChildren(ROOT_LOCK, false);
            SortedSet<String> sortedSet = new TreeSet<>();
            for (String children : childrens) {
                sortedSet.add(ROOT_LOCK + "/" + children);
            }
            // the smallest node at the moment
            String firstNode = sortedSet.first();
            // if the smallest node is the one we just created, the lock is ours
            if (CURRENT_LOCK.equals(firstNode)) {
                return true;
            }
            // the nodes smaller than ours, i.e. the ones still waiting to be released
            SortedSet<String> lessThenMe = sortedSet.headSet(CURRENT_LOCK);
            if (!lessThenMe.isEmpty()) {
                // the largest node that is still smaller than ours becomes WAIT_LOCK
                WAIT_LOCK = lessThenMe.last();
            }
            return false;
        } catch (KeeperException | InterruptedException e) {
            e.printStackTrace();
        }
        return false;
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public void unlock() {
        try {
            System.out.println(Thread.currentThread().getName() + "->releasing lock " + CURRENT_LOCK);
            zk.delete(CURRENT_LOCK, -1);
            CURRENT_LOCK = null;
            zk.close();
        } catch (InterruptedException | KeeperException e) {
            e.printStackTrace();
        }
    }

    @Override
    public Condition newCondition() {
        // TODO Auto-generated method stub
        return null;
    }

    public static void main(String[] args) throws IOException {
        int count = 10;
        CountDownLatch countDownLatch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            new Thread(() -> {
                try {
                    // wait so all threads start competing at roughly the same time
                    countDownLatch.await();
                    DistributeLock lock = new DistributeLock();
                    lock.lock();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }, "Thread-" + i).start();
            countDownLatch.countDown();
        }

        System.in.read();
        //DistributeLock lock = new DistributeLock();
    }

}

3. Operating ZooKeeper with Curator

  1. Add pom.xml dependencies

    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.13</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-framework</artifactId>
        <version>4.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
        <version>4.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-client</artifactId>
        <version>4.0.0</version>
    </dependency>
    
  2. Code

    package com.zk.curator;
    
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.cache.NodeCache;
    import org.apache.curator.framework.recipes.cache.NodeCacheListener;
    import org.apache.curator.framework.recipes.cache.PathChildrenCache;
    import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
    import org.apache.curator.framework.recipes.cache.PathChildrenCacheListener;
    import org.apache.curator.framework.recipes.cache.TreeCache;
    import org.apache.curator.framework.recipes.cache.TreeCacheEvent;
    import org.apache.curator.framework.recipes.cache.TreeCacheListener;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.data.Stat;
    
    public class CuratorDemo {

        String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";

        private CuratorFramework framework;

        public CuratorDemo() {
            CuratorFramework framework = CuratorFrameworkFactory
                    .builder()
                    .sessionTimeoutMs(40000)
                    .retryPolicy(new ExponentialBackoffRetry(10000, 3))
                    .connectString(connectString)
                    .namespace("curator")
                    .build();
            framework.start();
            this.framework = framework;
        }

        /**
         * Create a node
         * @param path
         * @param data
         */
        public void create(String path, String data) {
            try {
                String str = framework.create().creatingParentsIfNeeded()
                        .withMode(CreateMode.EPHEMERAL)
                        .forPath(path, data.getBytes());
                System.out.println(str);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        /**
         * Get a node's data
         * @param path
         * @param stat
         * @return
         */
        public String getData(String path, Stat stat) {
            byte[] data = null;
            try {
                data = framework.getData().storingStatIn(stat).forPath(path);
                return new String(data);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

        /**
         * Update a node
         * @param path
         * @param stat
         * @param data
         * @return
         */
        public Stat setData(String path, Stat stat, String data) {
            try {
                return framework.setData().withVersion(stat.getVersion()).forPath(path, data.getBytes());
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

        /**
         * Delete a node
         * @param path
         * @return
         */
        public boolean delete(String path) {
            try {
                framework.delete().deletingChildrenIfNeeded().forPath(path);
                return true;
            } catch (Exception e) {
                e.printStackTrace();
            }
            return false;
        }

        /**
         * Watch create/update/delete events on a single node
         * @param path
         * @throws Exception
         */
        public void watcherNode(String path) throws Exception {
            NodeCache nodeCache = new NodeCache(framework, path, false);
            NodeCacheListener listener = new NodeCacheListener() {

                @Override
                public void nodeChanged() throws Exception {
                    // getCurrentData() is null when the node has just been deleted
                    if (nodeCache.getCurrentData() != null) {
                        System.out.println("receive event:" + nodeCache.getCurrentData().getPath());
                    } else {
                        System.out.println("receive event: node deleted");
                    }
                }
            };

            nodeCache.getListenable().addListener(listener);
            nodeCache.start();
        }

        /**
         * Watch create/update/delete events on a node's children
         * @param path
         * @throws Exception
         */
        public void watcherChildChange(String path) throws Exception {
            PathChildrenCache childrenCache = new PathChildrenCache(framework, path, true);
            PathChildrenCacheListener listener = new PathChildrenCacheListener() {

                @Override
                public void childEvent(CuratorFramework framework, PathChildrenCacheEvent event) throws Exception {
                    System.out.println("Receive Event:" + event.getType());
                }
            };

            childrenCache.getListenable().addListener(listener);
            childrenCache.start();
        }

        /**
         * Watch any change under a node (the whole subtree)
         * @param path
         */
        public void watcherTree(String path) {
            TreeCache treeCache = new TreeCache(framework, path);
            TreeCacheListener listener = new TreeCacheListener() {

                @Override
                public void childEvent(CuratorFramework framework, TreeCacheEvent event) throws Exception {
                    System.out.println("Receive Event:" + event.getType());
                }
            };
            treeCache.getListenable().addListener(listener);
            try {
                treeCache.start();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void close() {
            framework.close();
        }

        /**
         * Distributed lock
         */
        public void lock(String path) {
            InterProcessMutex mutex = new InterProcessMutex(framework, path);
            try {
                // acquire the lock
                mutex.acquire();
                System.out.println("Thread [" + Thread.currentThread().getName() + "] acquired the lock");
                Thread.sleep(1000 * 60);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try {
                    // release the lock
                    mutex.release();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
    
    
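    The CuratorDemo class above has no main method. A minimal usage sketch, where the class name CuratorDemoMain, the /test/node1 path, and the call order are assumptions for illustration:

    package com.zk.curator;

    import org.apache.zookeeper.data.Stat;

    public class CuratorDemoMain {

        public static void main(String[] args) throws Exception {
            CuratorDemo demo = new CuratorDemo();

            // All paths below live under the "curator" namespace configured in CuratorDemo.
            // Note that create() uses CreateMode.EPHEMERAL, so the node disappears on close().
            demo.create("/test/node1", "hello");

            // Register the watchers before changing anything so the events get printed.
            demo.watcherNode("/test/node1");
            demo.watcherChildChange("/test");

            Stat stat = new Stat();
            System.out.println(demo.getData("/test/node1", stat));
            demo.setData("/test/node1", stat, "world");

            Thread.sleep(3000); // give the asynchronous listeners a moment to fire
            demo.close();
        }
    }
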


Reposted from blog.csdn.net/dwjf321/article/details/110288939