Building a Zookeeper cluster on CentOS7
Environment preparation
First, prepare three zookeeper nodes (the zookeeper installation process is covered below); you can install one and clone it. The following three ports need to be open on each node:
- 2181: The port through which the client connects to zookeeper
- 2888: Communication port within the cluster
- 3888: Used in leader election
You can either open these ports or disable the firewall entirely. To open the ports:
firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=2888/tcp --permanent
firewall-cmd --zone=public --add-port=3888/tcp --permanent
firewall-cmd --reload
Alternatively, to permanently disable the firewall:
systemctl disable --now firewalld
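If you opened the ports instead of disabling the firewall, a quick check (assuming firewalld is still running) confirms they took effect:

```shell
# List the ports currently opened in the public zone;
# 2181/tcp, 2888/tcp and 3888/tcp should appear after the reload
firewall-cmd --zone=public --list-ports
```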
Install jdk
Zookeeper requires Java version 1.8 or above (according to the zookeeper official website).
I have previously written an article about installing JDK 8; you can follow it directly: https://blog.csdn.net/m0_51510236/article/details/113739345
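Once the JDK is installed, a quick check confirms the version meets the 1.8 requirement:

```shell
# Prints the installed Java version; it should report 1.8 or higher
java -version
```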
Install zookeeper
Download zookeeper
On the official website we can see that the latest stable version is 3.8.2. Download it with:
wget https://dlcdn.apache.org/zookeeper/zookeeper-3.8.2/apache-zookeeper-3.8.2-bin.tar.gz
Unzip zookeeper
Since the download is a binary tarball, extracting it completes the installation. I plan to install zookeeper under the /opt/server directory, so execute the following two commands:
mkdir -p /opt/server
tar -zxvf apache-zookeeper-3.8.2-bin.tar.gz -C /opt/server
Modify zookeeper configuration file
Execute the following two commands to create the configuration file from the bundled sample:
cd /opt/server/apache-zookeeper-3.8.2-bin/conf/
cp zoo_sample.cfg zoo.cfg
Then we modify it:
vim zoo.cfg
Optional modifications (you can also keep the defaults).
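For reference, the key settings inherited from zoo_sample.cfg are shown below. The values are the sample defaults; the suggestion to move dataDir off /tmp is mine, not part of the sample:

```properties
# Basic time unit in milliseconds
tickTime=2000
# Ticks a follower may take to connect and sync with the leader
initLimit=10
# Ticks a follower may lag behind the leader before being dropped
syncLimit=5
# Where snapshots (and the myid file) live; /tmp is fine for testing,
# but consider a persistent path for real deployments
dataDir=/tmp/zookeeper
# Port clients connect to
clientPort=2181
```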
Next, try starting a standalone zookeeper. Go to the zookeeper root directory and run:
./bin/zkServer.sh start
After startup, check the zookeeper status with the following command:
./bin/zkServer.sh status
The output shows Mode: standalone, confirming a single-node instance.
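As a quick smoke test (assuming the server is up on the default port 2181), the bundled CLI can create and read back a znode; the /demo path and "hello" value are just examples:

```shell
# Create a test znode, then read its data back
./bin/zkCli.sh -server 127.0.0.1:2181 create /demo "hello"
./bin/zkCli.sh -server 127.0.0.1:2181 get /demo
```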
Build a zookeeper cluster
Now that we have covered installing zookeeper, let's build a cluster. We need three nodes; you can clone the first machine or install three separately. Here is my IP address plan:
IP address | Role
---|---
192.168.1.181 | The first zookeeper
192.168.1.182 | The second zookeeper
192.168.1.183 | The third zookeeper
I cloned these three machines directly, and all of them are reachable.
Modify the zoo.cfg file
Let's modify the zoo.cfg file on each of the machines above:
cd /opt/server/apache-zookeeper-3.8.2-bin
vim conf/zoo.cfg
Add the following configuration to all three servers (be careful to modify the IP address):
server.1=192.168.1.181:2888:3888
server.2=192.168.1.182:2888:3888
server.3=192.168.1.183:2888:3888
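Each entry follows the pattern server.&lt;id&gt;=&lt;host&gt;:&lt;peer-port&gt;:&lt;election-port&gt;: the id must match the myid file written in the next step, 2888 carries follower-to-leader traffic, and 3888 is used for leader election. For example, the first entry breaks down as:

```properties
# server.<myid> = <host> : <follower-to-leader port> : <leader-election port>
server.1=192.168.1.181:2888:3888
```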
Add myid file
We need to create a new myid file under zookeeper's dataDir (by default /tmp/zookeeper; if you changed dataDir, adjust the myid location accordingly). The value in this file must match the server number you configured in zoo.cfg in the previous step:
Run the corresponding command on each of the three servers (again, if you changed zookeeper's default data directory, adjust the myid path):
- 192.168.1.181
echo 1 > /tmp/zookeeper/myid
- 192.168.1.182
echo 2 > /tmp/zookeeper/myid
- 192.168.1.183
echo 3 > /tmp/zookeeper/myid
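Note that /tmp/zookeeper only exists if zookeeper has already been started once on that machine; if it has not, create the directory first. A sketch for the first node (adjust the id on the other two), with a readback to verify:

```shell
# Create the data directory if it does not exist yet, then write this node's id
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid
# Verify the content; should print 1
cat /tmp/zookeeper/myid
```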
Start the zookeeper cluster
The startup command is the same as before; execute the following on all three servers:
# Go to the zookeeper installation directory
cd /opt/server/apache-zookeeper-3.8.2-bin
# Start zookeeper
./bin/zkServer.sh start
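If passwordless SSH to the nodes is set up (an assumption; adjust the user and paths to your environment), all three nodes can also be started from one machine:

```shell
# Hypothetical helper: start zookeeper on every node over SSH
for host in 192.168.1.181 192.168.1.182 192.168.1.183; do
  ssh root@"$host" "/opt/server/apache-zookeeper-3.8.2-bin/bin/zkServer.sh start"
done
```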
Then run the following command on each node to view the cluster status:
./bin/zkServer.sh status
You can see a zookeeper cluster with one leader and two followers.
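To see which node won the election, checking the reported mode on each machine is enough; in a healthy three-node ensemble, one node reports leader and the other two report follower:

```shell
# Print only the Mode line of the status output
./bin/zkServer.sh status 2>/dev/null | grep Mode
```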
Follow me: the next article will cover implementing distributed locks with zookeeper in Spring Boot.