ZooKeeper (Part 4): Hands-On Practice
Distributed installation and deployment
Cluster approach: set up one server first, then clone it twice to form a three-node cluster.
Configure server number
Create a myid file in /opt/zookeeper/zkData
vim myid
- In the file, add the number assigned to this server: 1
- The other two servers get 2 and 3 respectively
Configure zoo.cfg file
Open the zoo.cfg file and add the following configuration
#######################cluster##########################
server.1=106.75.245.83:2888:3888
server.2=106.75.245.84:2888:3888
server.3=106.75.245.85:2888:3888
- Interpretation of the configuration parameter server.A=B:C:D
- A: a number identifying the server; in cluster mode, the value in /opt/zookeeper/zkData/myid is A
- B: the IP address of the server
- C: the port used to exchange information with the cluster Leader
- D: the election port. If the cluster Leader goes down, the servers use this port to communicate with each other and elect a new Leader.
Configure the remaining two servers
- Create a zk02 directory in the virtual machine data directory vms
- Copy the .vmx file and all .vmdk files from the first server's data directory into zk02
- In VMware: File -> Open (select the .vmx file under zk02)
- Start the virtual machine; when the dialog box pops up, select "I copied this virtual machine"
- After the system boots, change the Linux IP address and change the value in /opt/zookeeper/zkData/myid to 2
- Repeat the above steps for the third server, zk03
Cluster operation
The firewall on each server must be stopped
systemctl stop firewalld.service
Start the first server
./zkServer.sh start
Check status
./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Error contacting service. It is probably not running.
Note: only one of the three servers is running, so more than half of the cluster is unavailable and startup reports an error (a firewall that is still running also causes this error)
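The majority rule above can be sketched as a small calculation: a ZooKeeper ensemble of n servers needs a quorum of more than half of them, i.e. n/2 + 1 (a minimal illustration, not ZooKeeper's actual implementation):

```java
public class QuorumCheck {
    // A ZooKeeper ensemble of n servers stays available only while
    // more than half of the servers (a quorum) are running.
    static int quorum(int ensembleSize) {
        return ensembleSize / 2 + 1;
    }

    static boolean hasQuorum(int running, int ensembleSize) {
        return running >= quorum(ensembleSize);
    }

    public static void main(String[] args) {
        // With 3 servers, 1 running server is not a majority...
        System.out.println(hasQuorum(1, 3)); // false
        // ...but 2 running servers reach the quorum of 2.
        System.out.println(hasQuorum(2, 3)); // true
    }
}
```

This is why the first server alone reports "Error contacting service", and why the cluster becomes healthy as soon as the second server starts.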
When the second server is started
- Status of the first server: Mode: follower
- Status of the second server: Mode: leader
Client command line operation
Start the client
./zkCli.sh
Show all operation commands
help
List the child nodes contained in a znode
ls /
View the detailed data of the current node
Older ZooKeeper versions used ls2 /; it has been replaced by the new command
ls -s /
- cZxid: transaction ID of the change that created the node
  - Every change to the ZooKeeper state receives a stamp in the form of a zxid (ZooKeeper transaction ID).
  - The zxids form a total order over all modifications in ZooKeeper.
  - Each modification has a unique zxid; if zxid1 is less than zxid2, the zxid1 change happened before the zxid2 change.
- ctime: creation time in milliseconds (since 1970)
- mZxid: zxid of the last update to the node
- mtime: last-modified time in milliseconds (since 1970)
- pZxid: zxid of the last change to the node's children
- cversion: child version number; the number of changes to the node's children
- dataVersion: data version number
- aclVersion: ACL version number
- ephemeralOwner: for an ephemeral node, the session ID of the znode's owner; 0 for a persistent node
- dataLength: length of the node's data
- numChildren: number of child nodes
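As a side note on the zxid fields above: a zxid is a 64-bit number whose high 32 bits hold the leader epoch and whose low 32 bits hold a counter that increases with each change, which is what makes zxids totally ordered. A minimal sketch of splitting one apart (the example value is made up):

```java
public class ZxidDecode {
    // High 32 bits of a zxid: the leader epoch.
    static long epoch(long zxid) {
        return zxid >>> 32;
    }

    // Low 32 bits of a zxid: a counter incremented per change.
    static long counter(long zxid) {
        return zxid & 0xffffffffL;
    }

    public static void main(String[] args) {
        long zxid = 0x500000002L; // made-up value: epoch 5, change #2
        System.out.println("epoch=" + epoch(zxid) + " counter=" + counter(zxid));
    }
}
```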
Create 2 ordinary nodes
- In the root directory, create the nodes china and usa
create /china
create /usa
- In the root directory, create a ru node and store the data "pujing" in it
create /ru "pujing"
Create nodes at multiple levels
- Under japan, create Tokyo with the data "hot"
- japan must be created first, otherwise the error "Node does not exist" is reported
create /japan
create /japan/Tokyo "hot"
Get the value of a node
get /japan/Tokyo
Create an ephemeral node: after creating it, exit the client with quit, reconnect, and the ephemeral node is gone
create -e /uk
ls /
quit
ls /
Create sequentially numbered nodes
- Create 3 cities under ru
create -s /ru/city   # run three times
ls /ru
[city0000000000, city0000000001, city0000000002]
- If the parent has no sequential nodes yet, numbering starts from 0 and increases.
- If the parent already had 2 nodes, new sequence numbers start from 2, and so on.
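The zero-padded suffix that create -s appends can be sketched as follows; the counter itself is maintained by ZooKeeper on the parent node, and this only reproduces the 10-digit formatting of the resulting names:

```java
public class SequentialName {
    // ZooKeeper appends a 10-digit, zero-padded, monotonically
    // increasing counter to the name of a sequential node.
    static String sequentialName(String prefix, int counter) {
        return String.format("%s%010d", prefix, counter);
    }

    public static void main(String[] args) {
        // Three successive "create -s /ru/city" calls produce:
        for (int i = 0; i < 3; i++) {
            System.out.println(sequentialName("city", i)); // city0000000000 ...
        }
    }
}
```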
Modify node data value
set /japan/Tokyo "too hot"
Watch a node for data changes or child-node changes (path changes)
- On the server3 host, register a watch on the /usa node
addWatch /usa
- On the server1 host, modify the data of /usa
set /usa "telangpu"
- server3 responds immediately
WatchedEvent state:SyncConnected type:NodeDataChanged path:/usa
- Then create a child node NewYork under /usa on server1
create /usa/NewYork
- server3 responds immediately again
WatchedEvent state:SyncConnected type:NodeCreated path:/usa/NewYork
Delete a node
delete /usa/NewYork
Recursively delete a node (a non-empty node that has children)
deleteall /ru
This deletes not only /ru but also all child nodes under /ru.
Java API usage
Add the following dependencies to pom.xml:
<dependencies>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.6.0</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
</dependency>
</dependencies>
Create log4j.properties in src/main/resources:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/zk.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
Create the ZooKeeper client

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.junit.Test;

import java.io.IOException;

/**
 * ZooKeeper API
 */
public class TestZK {
    // IPs and ports of the ZooKeeper ensemble
    private String connectString = "106.75.245.83:2181,106.75.245.84:2181,106.75.245.85:2181";
    // Session timeout in milliseconds; 60 seconds here.
    // Do not set it too low: connecting to the cluster is relatively slow,
    // and if the client is not ready before we start creating nodes, an error is thrown.
    private int sessionTimeout = 60 * 1000;
    // ZooKeeper client object
    private ZooKeeper zooKeeperClient;

    /**
     * Create the client
     */
    @Test
    public void init() throws IOException {
        // new Watcher(): the default watcher for this connection
        zooKeeperClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            public void process(WatchedEvent watchedEvent) {
                System.out.println("Watch event received; handle business logic here.");
            }
        });
    }
}
Create a node
- An ACL entry is an (Id, permission) pair
- The Id (Who) identifies, after authentication (How), who is allowed to perform which operations (What): Who, How, What
- The permission (What) is an int bitmask; each bit represents the permission state of one operation
- Similar to Linux file permissions, except there are 5 operations: CREATE, READ, WRITE, DELETE, ADMIN (ADMIN is the permission to change the ACL itself)
- OPEN_ACL_UNSAFE: a completely open node that allows any operation (the most commonly used; the other predefined ACLs are rarely needed)
- READ_ACL_UNSAFE: a read-only node
- CREATOR_ALL_ACL: only the creator has all permissions
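The permission bitmask can be sketched as follows; the constants below mirror the values in org.apache.zookeeper.ZooDefs.Perms, reproduced here so the sketch runs without the ZooKeeper jar on the classpath:

```java
public class PermBits {
    // These constants mirror org.apache.zookeeper.ZooDefs.Perms.
    static final int READ   = 1 << 0; // 1
    static final int WRITE  = 1 << 1; // 2
    static final int CREATE = 1 << 2; // 4
    static final int DELETE = 1 << 3; // 8
    static final int ADMIN  = 1 << 4; // 16
    static final int ALL    = READ | WRITE | CREATE | DELETE | ADMIN; // 31

    public static void main(String[] args) {
        int perms = READ | WRITE;                  // a read/write-only ACL entry
        System.out.println((perms & READ) != 0);   // true: holder may read
        System.out.println((perms & DELETE) != 0); // false: holder may not delete
        System.out.println(ALL);                   // 31
    }
}
```

Testing a bit with `&` is exactly how ZooKeeper checks whether an ACL entry grants a given operation.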
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.List;

/**
 * ZooKeeper API
 */
public class TestZK {
    // IPs and ports of the ZooKeeper ensemble
    private String connectString = "106.75.245.83:2181,106.75.245.84:2181,106.75.245.85:2181";
    // Session timeout in milliseconds; 60 seconds here.
    // Do not set it too low: connecting to the cluster is relatively slow,
    // and if the client is not ready before we start creating nodes, an error is thrown.
    private int sessionTimeout = 60 * 1000;
    // ZooKeeper client object
    private ZooKeeper zooKeeperClient;

    /**
     * Create the client
     */
    @Before
    public void init() throws IOException {
        // new Watcher(): the default watcher for this connection
        zooKeeperClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            public void process(WatchedEvent watchedEvent) {
                System.out.println("Watch event received; handle business logic here.");
                System.out.println(watchedEvent.getType());
            }
        });
    }

    /**
     * Create a node
     */
    @Test
    public void createNode() throws KeeperException, InterruptedException {
        /**
         * Argument 1: path of the node to create
         * Argument 2: node data
         * Argument 3: node ACL
         * Argument 4: node type
         */
        String string = zooKeeperClient.create("/szx", "xiaoxing".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        System.out.println(string);
    }

    /**
     * Get the value stored on a node
     */
    @Test
    public void getNodeData() throws KeeperException, InterruptedException {
        byte[] data = zooKeeperClient.getData("/szx", false, new Stat());
        String string = new String(data);
        System.out.println(string);
    }

    /**
     * Modify a node's value
     */
    @Test
    public void updateData() throws KeeperException, InterruptedException {
        // Third argument: the expected dataVersion (0 here); pass -1 to match any version
        Stat stat = zooKeeperClient.setData("/szx", "xiaoxing111".getBytes(), 0);
        System.out.println(stat);
    }

    /**
     * Delete a node
     */
    @Test
    public void delete() throws KeeperException, InterruptedException {
        // Second argument: the expected version (1 here, after one setData); pass -1 to match any version
        zooKeeperClient.delete("/szx", 1);
    }

    /**
     * Get the children of a node
     */
    @Test
    public void getChildren() throws KeeperException, InterruptedException {
        List<String> children = zooKeeperClient.getChildren("/china", false);
        children.forEach(item -> System.out.println(item));
    }

    /**
     * Watch for changes under the root node.
     * While the program is running, create a node from the Linux shell;
     * the IDEA console responds with: NodeChildrenChanged--/
     */
    @Test
    public void watchNode() throws KeeperException, InterruptedException, IOException {
        // true: register the connection's default watcher set up in init()
        List<String> children = zooKeeperClient.getChildren("/", true);
        children.forEach(item -> System.out.println(item));
        // Block the thread so the watch callback has a chance to fire
        System.in.read();
    }

    /**
     * Check whether a node exists
     */
    @Test
    public void exists() throws KeeperException, InterruptedException {
        Stat szx = zooKeeperClient.exists("/szx", false);
        if (szx == null) {
            System.out.println("Node does not exist");
        } else {
            System.out.println("Node exists");
        }
    }
}