ZooKeeper: Detailed Steps to Build a Cluster

Build a Zookeeper cluster

1.1 Setup requirements

A real cluster should be deployed on different servers, but for testing, starting many virtual machines at once would overwhelm the memory of a single machine. So we usually build a pseudo-cluster instead: all instances run on one virtual machine and are distinguished by port.

Here we will build a three-node ZooKeeper cluster (pseudo-cluster).

1.2 Preparation

Deploy a fresh virtual machine as the test server for our cluster.

(1) Install the JDK [this step is omitted]

(2) Upload the Zookeeper compressed package to the server
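
One common way to do the upload is scp from your workstation; the archive name and target path below are just an example, assuming the 3.5.6 binary release and the server IP used later in this tutorial:

scp apache-zookeeper-3.5.6-bin.tar.gz root@192.168.149.135:/root/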

(3) Decompress ZooKeeper, create the directory /usr/local/zookeeper-cluster, and copy the decompressed ZooKeeper into the following three directories (the commands follow the list):

/usr/local/zookeeper-cluster/zookeeper-1

/usr/local/zookeeper-cluster/zookeeper-2

/usr/local/zookeeper-cluster/zookeeper-3
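
First unpack the uploaded archive (the file name assumes the 3.5.6 binary release mentioned above):

[root@localhost ~]# tar -zxvf apache-zookeeper-3.5.6-bin.tar.gz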

[root@localhost ~]# mkdir /usr/local/zookeeper-cluster
[root@localhost ~]# cp -r  apache-zookeeper-3.5.6-bin /usr/local/zookeeper-cluster/zookeeper-1
[root@localhost ~]# cp -r  apache-zookeeper-3.5.6-bin /usr/local/zookeeper-cluster/zookeeper-2
[root@localhost ~]# cp -r  apache-zookeeper-3.5.6-bin /usr/local/zookeeper-cluster/zookeeper-3

(4) Create a data directory under each instance, and rename the zoo_sample.cfg file under conf to zoo.cfg

mkdir /usr/local/zookeeper-cluster/zookeeper-1/data
mkdir /usr/local/zookeeper-cluster/zookeeper-2/data
mkdir /usr/local/zookeeper-cluster/zookeeper-3/data

mv  /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo_sample.cfg  /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
mv  /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo_sample.cfg  /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
mv  /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo_sample.cfg  /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg

(5) Configure dataDir for each instance, and set clientPort to 2181, 2182, and 2183 respectively

Modify /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg

vim /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg

clientPort=2181
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data

Modify /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg

vim /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg

clientPort=2182
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data

Modify /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg

vim /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg

clientPort=2183
dataDir=/usr/local/zookeeper-cluster/zookeeper-3/data
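
Equivalently, because each zoo.cfg was copied from zoo_sample.cfg and therefore already contains dataDir and clientPort lines, the three manual edits can be scripted. A minimal sketch (assuming GNU sed on the server):

for i in 1 2 3; do
  cfg=/usr/local/zookeeper-cluster/zookeeper-$i/conf/zoo.cfg
  # point dataDir at this instance's data directory
  sed -i "s|^dataDir=.*|dataDir=/usr/local/zookeeper-cluster/zookeeper-$i/data|" "$cfg"
  # clientPort becomes 2181, 2182, 2183 for instances 1, 2, 3
  sed -i "s|^clientPort=.*|clientPort=218$i|" "$cfg"
done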

1.3 Configure the cluster

(1) Create a myid file in the data directory of each ZooKeeper instance with contents 1, 2, and 3 respectively. This file records the ID of each server.

echo 1 >/usr/local/zookeeper-cluster/zookeeper-1/data/myid
echo 2 >/usr/local/zookeeper-cluster/zookeeper-2/data/myid
echo 3 >/usr/local/zookeeper-cluster/zookeeper-3/data/myid
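
A quick check that each file holds the expected ID:

cat /usr/local/zookeeper-cluster/zookeeper-*/data/myid
# should print 1, 2 and 3, one per line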

(2) Configure the client access port (clientPort) and the cluster server list in each instance's zoo.cfg.

The cluster server list is as follows:

vim /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
vim /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
vim /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg

server.1=192.168.149.135:2881:3881
server.2=192.168.149.135:2882:3882
server.3=192.168.149.135:2883:3883

Explanation: server.<server ID>=<server IP address>:<port for communication between servers>:<port for leader election (voting) between servers>

Default port for client connections: 2181 (this tutorial uses 2181, 2182, 2183)

Port for communication between servers: 2881-2883 here (2888 is the conventional choice in ZooKeeper examples)

Port for leader election (voting) between servers: 3881-3883 here (3888 is the conventional choice)

The embedded AdminServer in ZooKeeper listens on port 8080 by default.

In a freshly started cluster, the election is decided by the size of the server ID (myid): the server with the larger ID wins.
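
Putting the pieces together, the zoo.cfg of the first instance ends up looking roughly like this (tickTime, initLimit and syncLimit keep the values inherited from zoo_sample.cfg; the admin.serverPort line is optional and is shown only as one way to keep the three embedded AdminServers from all competing for port 8080 on the same host):

# /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data
clientPort=2181
# optional: give each instance its own AdminServer port (e.g. 8081/8082/8083)
admin.serverPort=8081
server.1=192.168.149.135:2881:3881
server.2=192.168.149.135:2882:3882
server.3=192.168.149.135:2883:3883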

1.4 Start the cluster

Starting the cluster means starting each instance separately:

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh start

After startup, we check the running status of each instance

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status

Query the first instance first: Mode: follower means it is a follower (slave).

Then query the second instance: Mode: leader means it is the leader (master).

**Reason:** when server 2 started, two servers were running, which satisfies the quorum requirement that more than half of the cluster's servers are up. Since server 2 had the largest ID at that moment, server 1 voted for server 2 and server 2 voted for itself, so server 2 became the leader.

Querying the third instance shows that it is a follower (slave).

Reason: a newly added server does not affect the existing leader.
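
Before simulating failures, it is also worth confirming that each instance accepts client connections, for example with the CLI shipped in each instance's bin directory (the port matches the clientPort configured earlier; use 2182 or 2183 for the other instances):

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkCli.sh -server 127.0.0.1:2181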

1.5 Simulate cluster failures

(1) First, test what happens when a follower (slave) goes down. Stop server 3 and check servers 1 and 2: their status has not changed.

/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh stop

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status

Conclusion: in a 3-node cluster, when one follower is down, the cluster still works normally.

(2) Next, stop server 1 (another follower) as well and check the status of server 2 (the leader): it has stopped serving (the status command can no longer contact it).

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh stop

/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status

Conclusion: in a 3-node cluster, when 2 followers are down, the leader cannot serve either, because the number of running machines no longer exceeds half of the cluster: only 1 of 3 servers is up, and 1 is not greater than 3/2.

(3) Start server 1 again: server 2 goes back to working normally, and it is still the leader.

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh start

/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status

(4) Now start server 3 as well, then stop server 2 (the leader) and observe the status of servers 1 and 3.

/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh stop

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status

We find that server 3 has become the new leader.

Conclusion: when the leader in the cluster goes down, the remaining servers automatically hold an election and produce a new leader.

(5) If we restart server 2, will it become the leader again?

/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh start

/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status

We find that after server 2 starts it is only a follower (slave), and server 3 remains the leader (master); server 2's return does not shake server 3's leadership.

Conclusion: once a leader has been elected, adding a server back into the cluster does not affect the current leader.
