Mac Pro M1: build a zookeeper cluster and set it to start automatically

0 Preface

We have covered building a single zookeeper node before, but in production, cluster mode is normally used to keep the service highly available. This time we will walk through building the cluster mode.

1. zk cluster mode

zk can serve as both a registry and a configuration center, and is widely used for multi-node service governance across microservice components. A single-node zk, however, has no backup node to fall back on when a failure occurs.

zk's cluster mode is a master-slave design: the master node handles writes, the slave nodes handle reads, and slave data is synchronized from the master. The nodes communicate with each other over port 2888.

Since the cluster is master-slave, it naturally has two roles: master (Leader) and slave (Follower). In addition, there is an Observer role:

| Role | Description |
| --- | --- |
| Leader (master node) | Provides read and write services to clients; takes part in leader-election voting |
| Follower (slave node) | Provides read services to clients; takes part in leader-election voting |
| Observer | Provides read services to clients; does not take part in leader-election voting |

2. Build

0. Because zookeeper is built on Java, a Java environment must be installed first; this has been covered before and is not repeated here

1. Download the zookeeper installation package from the official zookeeper download page

Here I choose version 3.8.0


2. Unzip the installation package; here I upload the zk tarball to the /data directory of the virtual machine

cd /data
tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz

3. In the zk installation directory, create a tmp directory and, inside it, a myid file that declares the cluster node id. For the first node, we set the file content to 1

cd /data/apache-zookeeper-3.8.0-bin
mkdir tmp
vim tmp/myid
# file content
1
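The myid step can be sketched as a tiny script. ZK_DATA defaults to a scratch directory here purely for illustration; on a real node it would be /data/apache-zookeeper-3.8.0-bin/tmp as above:

```shell
# Write the cluster id for this node. ZK_DATA defaults to a scratch dir for
# this demo; on the real node set it to /data/apache-zookeeper-3.8.0-bin/tmp.
ZK_DATA="${ZK_DATA:-/tmp/zk-myid-demo}"
MYID="${MYID:-1}"   # use 2 and 3 on the other two nodes

mkdir -p "$ZK_DATA"
echo "$MYID" > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"   # → 1
```

The id written here must match the server.N entry for this node in zoo.cfg, configured in step 5 below.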

4. Activate the zk configuration by copying the sample file zoo_sample.cfg to zoo.cfg

cp conf/zoo_sample.cfg conf/zoo.cfg

5. Modify the content in the configuration file

vim conf/zoo.cfg

The content is:

# point the data directory at the tmp directory created above
dataDir=/data/apache-zookeeper-3.8.0-bin/tmp
# add the cluster nodes: 2888 is the peer-communication port, 3888 is the leader-election port
server.1=192.168.244.42:2888:3888
server.2=192.168.244.43:2888:3888
server.3=192.168.244.44:2888:3888
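For reference, the resulting zoo.cfg would look roughly like this; the tickTime/initLimit/syncLimit/clientPort lines are the defaults carried over from zoo_sample.cfg:

```properties
# defaults from zoo_sample.cfg
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
# data directory created above
dataDir=/data/apache-zookeeper-3.8.0-bin/tmp
# cluster nodes: 2888 = peer communication, 3888 = leader election
server.1=192.168.244.42:2888:3888
server.2=192.168.244.43:2888:3888
server.3=192.168.244.44:2888:3888
```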


6. Copy the zookeeper installation directory to the other 2 zk nodes

scp -r /data/apache-zookeeper-3.8.0-bin [email protected]:/data/
scp -r /data/apache-zookeeper-3.8.0-bin [email protected]:/data/

7. Modify the myid content on the other two nodes to 2 and 3 respectively; make sure each value stays consistent with the server.N entries configured in step 5

8. Open the zk-related ports. If the firewall is not running, this step can be skipped.

firewall-cmd --add-port=2181/tcp --permanent
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --add-port=2888/tcp --permanent
firewall-cmd --add-port=3888/tcp --permanent
firewall-cmd --reload
# list the ports now open in the firewall
firewall-cmd --list-ports
# check listening sockets (only populated once zk is running)
netstat -anp
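The zk ports only show up as listening once the service is started. A small filter over netstat/ss-style output (demoed here with canned input, so no running cluster is needed; assumes GNU grep for \b) can pick out which zk ports are up:

```shell
# Print each zk port (2181/2888/3888) that appears as a listening address
# in netstat/ss output read from stdin.
zk_ports_listening() {
  grep -oE ':(2181|2888|3888)\b' | tr -d ':' | sort -u
}

# Demo with canned output — prints 2181 then 3888:
printf 'tcp LISTEN 0 50 0.0.0.0:2181 0.0.0.0:*\ntcp LISTEN 0 50 0.0.0.0:3888 0.0.0.0:*\n' | zk_ports_listening
```

On a live node you would pipe the real thing instead: netstat -anp | zk_ports_listening.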

9. Start zk on all 3 nodes

/data/apache-zookeeper-3.8.0-bin/bin/zkServer.sh start


10. Check the node cluster status

 /data/apache-zookeeper-3.8.0-bin/bin/zkServer.sh status


As you can see, the zk cluster has started successfully: querying the status shows 2 follower nodes and 1 leader node.

So zk's cluster mode really is master-slave, and here we have 1 master and 2 slaves.
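The per-node status check can be scripted. A minimal sketch: roles_summary extracts the Mode line from zkServer.sh status output; the commented loop (IPs from this walkthrough, root ssh access assumed as in the scp step) shows how it would run against the live cluster, while the demo feeds canned output:

```shell
# Extract the role (leader/follower/observer) from `zkServer.sh status` output.
roles_summary() {
  grep '^Mode:' | awk '{print $2}'
}

# On the live cluster:
#   for h in 192.168.244.42 192.168.244.43 192.168.244.44; do
#     echo "$h: $(ssh root@"$h" /data/apache-zookeeper-3.8.0-bin/bin/zkServer.sh status 2>/dev/null | roles_summary)"
#   done

# Demo with canned status output — prints: follower
printf 'Client port found: 2181.\nMode: follower\n' | roles_summary
```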

3. Set the service to start automatically on boot

1. Write the startup script

cd /etc/init.d
vim zookeeper

content:

#!/bin/bash
# chkconfig: 2345 54 26
# processname: zookeeper
# description: zk server
prog=/data/apache-zookeeper-3.8.0-bin/bin/zkServer.sh
start(){
        $prog start
        echo "zookeeper started"
}
stop(){
        $prog stop
        echo "zookeeper stopped"
}
status(){
        $prog status
}
restart(){
        stop
        start
}
case "$1" in
"start")
        start
        ;;
"stop")
        stop
        ;;
"status")
        status
        ;;
"restart")
        restart
        ;;
*)
        echo "Usage: $0 start|stop|restart|status"
        ;;
esac

Here, "chkconfig: 2345 54 26" declares the run levels the service is enabled in (2, 3, 4 and 5), the start priority (54), and the stop priority (26) used for automatic startup on boot.

2. Make the script executable

chmod +x /etc/init.d/zookeeper

3. You also need to configure the Java path, otherwise execution fails with: Error: JAVA_HOME is not set and java could not be found in PATH

In the bin directory of the zk installation, edit zkEnv.sh and add the java path

vim /data/apache-zookeeper-3.8.0-bin/bin/zkEnv.sh
# add this line
JAVA_HOME="/var/local/zulu8.58.0.13-ca-jdk8.0.312-linux_aarch64"
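What zkEnv.sh does with this value is roughly the lookup below; the sketch reproduces the error message from the step above. The Zulu path in the zkEnv.sh edit is this machine's JDK and is an assumption for your setup, so substitute your own:

```shell
# Rough sketch of zkEnv.sh's java lookup: prefer $JAVA_HOME/bin/java, fall
# back to java on PATH, otherwise report the error quoted above.
resolve_java() {
  if [ -n "${JAVA_HOME:-}" ] && [ -x "$JAVA_HOME/bin/java" ]; then
    echo "$JAVA_HOME/bin/java"
  elif command -v java >/dev/null 2>&1; then
    command -v java
  else
    echo "Error: JAVA_HOME is not set and java could not be found in PATH" >&2
    return 1
  fi
}

# Demo: with neither lookup available, the error branch fires
JAVA_HOME=/nonexistent PATH=/nonexistent resolve_java || echo "lookup failed as expected"
```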

4. Execute the script and verify it

service zookeeper status
service zookeeper stop
service zookeeper start


5. Add to boot list

# register the service for automatic start at boot
chkconfig --add zookeeper
# switch the service on
chkconfig zookeeper on

6. Restart the virtual machine and check the zookeeper status; it comes up automatically


7. Apply the same auto-start setup on the other two nodes

With that, our zk cluster installation is complete!


Origin blog.csdn.net/qq_24950043/article/details/131502141