CentOS 7: Building a ZooKeeper Cluster & Troubleshooting

1. Preparation

  • Install JDK 1.8. First check whether your system already has a JDK installed (mine did); otherwise you can refer to a guide on installing JDK 1.8 on CentOS 7.
java -version
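If `java -version` fails, one option is the OpenJDK 1.8 packages from the CentOS 7 base repositories. A minimal sketch (the package name is the standard CentOS 7 one; adjust if you prefer the Oracle JDK):

```shell
# Install OpenJDK 1.8 only if no JDK is on PATH yet
if command -v java >/dev/null 2>&1; then
    java -version                                  # a JDK is already installed
else
    sudo yum install -y java-1.8.0-openjdk-devel   # CentOS 7 base-repo package
fi
```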
  • Download zookeeper
  1. At first I downloaded zookeeper-3.5.8 with the wget command. After the installation was complete, I started the ZooKeeper service and found that none of the nodes would start. The error was as follows:
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.5.6/bin/../conf/zoo1.cfg
Starting zookeeper ... FAILED TO START
  1. Reference blog: Zookeeper always reports: Starting zookeeper… FAILED TO START
  2. I then downloaded an older version instead, choosing zookeeper-3.4.14.tar.gz from the Tsinghua mirror.
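Since 3.4.x releases eventually disappear from active mirrors, the Apache archive URL is a safer fallback. A sketch (the archive path below is the standard Apache layout; swap in the Tsinghua mirror URL if it still carries this release):

```shell
# Build the download URL from the version so switching releases is one edit
ZK_VERSION=3.4.14
ZK_URL="https://archive.apache.org/dist/zookeeper/zookeeper-${ZK_VERSION}/zookeeper-${ZK_VERSION}.tar.gz"
wget "$ZK_URL"                                         # fetch the release tarball
tar -tzf "zookeeper-${ZK_VERSION}.tar.gz" | head -3    # peek inside to verify the download
```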

2. Install zookeeper (on one node)

  • Unpack the archive and move it to the installation directory (I generally do everything as the root user)
tar -zxvf zookeeper-3.4.14.tar.gz        # unpack the archive
cp -rf zookeeper-3.4.14 /usr/local       # copy to the installation directory
chmod -R 777 /usr/local/zookeeper-3.4.14 # loosen permissions
  • Copy the sample configuration zoo_sample.cfg to zoo.cfg
cd /usr/local/zookeeper-3.4.14/conf # enter the conf directory
cp zoo_sample.cfg zoo.cfg
  • Create a data folder to use as dataDir
mkdir /usr/local/zookeeper-3.4.14/data
  • My entire zoo.cfg configuration is as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# dataLogDir is not set; by default transaction logs are written to dataDir
dataDir=/usr/local/zookeeper-3.4.14/data
# the port at which the clients will connect; the default is 2181
clientPort=2330
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

# server.id=ip:port1:port2, where id numbers the server;
# ip is the server's IP address; port1 is the leader/follower communication port (default 2888);
# port2 is the leader-election port (default 3888)
server.1=10.101.30.63:2888:3888
server.2=10.101.39.48:2888:3888
server.3=10.101.29.157:2888:3888
  • ZooKeeper's default client port is 2181; here clientPort is changed to 2330.
  • Configure environment variables for ZooKeeper; I personally like to edit both /etc/profile and ~/.bashrc
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin
  • Make the environment variables take effect with the source command
source /etc/profile
source ~/.bashrc
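A quick sanity check that the variables took effect. This sketch repeats the two exports inline so it is self-contained; the path matches the install location used above:

```shell
# Verify ZOOKEEPER_HOME is set and that its bin directory is on PATH
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin
echo "$ZOOKEEPER_HOME"                      # expect /usr/local/zookeeper-3.4.14
case ":$PATH:" in
    *":$ZOOKEEPER_HOME/bin:"*) echo "PATH ok" ;;
    *) echo "PATH is missing $ZOOKEEPER_HOME/bin" ;;
esac
```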

3. Cluster configuration

  • Copy the configured zookeeper-3.4.14 to the other nodes with the scp command
# run scp twice, once for each remaining node: 10.101.39.48 and 10.101.29.157
scp -r zookeeper-3.4.14 username@IP:/usr/local
  • Create a myid file on each node. At startup, ZooKeeper reads this id and matches it against the server.id entries in zoo.cfg; that is how myid is associated with a server id.
# on node 10.101.30.63, set myid to 1
echo 1 > /usr/local/zookeeper-3.4.14/data/myid
# on node 10.101.39.48, set myid to 2
echo 2 > /usr/local/zookeeper-3.4.14/data/myid
# on node 10.101.29.157, set myid to 3
echo 3 > /usr/local/zookeeper-3.4.14/data/myid
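To avoid echoing the wrong id on the wrong machine, the three echo commands can be folded into one script keyed on the node's own IP. A sketch, assuming the same three IPs as in zoo.cfg and that `hostname -I` lists the primary address first:

```shell
# Write this node's myid by matching its primary IP against the server list
DATA_DIR=/usr/local/zookeeper-3.4.14/data
MY_IP=$(hostname -I | awk '{print $1}')
case "$MY_IP" in
    10.101.30.63)  echo 1 > "$DATA_DIR/myid" ;;
    10.101.39.48)  echo 2 > "$DATA_DIR/myid" ;;
    10.101.29.157) echo 3 > "$DATA_DIR/myid" ;;
    *) echo "unknown host $MY_IP: set myid manually" >&2 ;;
esac
```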
  • Start zookeeper on each node
zkServer.sh start


  • Check the status of ZooKeeper on each node. With all three nodes running, you should see one leader and two followers.
zkServer.sh status

Installation references:
CentOS7 installation configuration zookeeper cluster
CentOS installation ZooKeeper cluster

4. Status check reports: Error contacting service. It is probably not running.

  • After startup succeeded, I used zkServer.sh status to check each node, and every one of them reported the same error:
JMX enabled by default
Using config: /usr/local/zk/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
  • First suspicion: the dataDir specified in zoo.cfg does not match the directory actually created. I fixed the mismatch myself, but the error persisted after a restart.
  • I checked many other details:
  1. Whether the id written in the myid file under dataDir matches the server.id configuration in zoo.cfg. — consistent
  2. Whether the server IPs written in zoo.cfg are correct. — correct
  • Check whether the clientPort in zoo.cfg is occupied. — it was indeed occupied
  1. I had first configured 2181 as the port, and found a process was indeed holding it:
lsof -i:2181
  2. Foolishly, seeing the port was occupied, I simply changed the configuration file to another port (2330) without first thinking to kill the occupying process.
  3. After a restart the error remained, so I killed the processes holding ports 2181 and 2330 on every machine.
  4. Restarting once more, status finally returned the state.
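The "kill the processes on both ports" step can be sketched as a short loop; `lsof -t` prints bare PIDs, so its output can be passed straight to `kill` (ports 2181 and 2330 are the two used in this walkthrough):

```shell
# Free the ports ZooKeeper needs before restarting
for PORT in 2181 2330; do
    PIDS=$(lsof -t -i:"$PORT" 2>/dev/null || true)
    if [ -n "$PIDS" ]; then
        kill $PIDS    # SIGTERM first; escalate to kill -9 only if needed
    fi
done
```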
  • Tip: once status works, there is also a telltale sign under dataDir: a zookeeper_server.pid file appears there.
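That pid file can double as a quick liveness check. A sketch, assuming the dataDir configured above; `kill -0` tests whether a process is alive without actually signalling it:

```shell
# A running server records its PID in zookeeper_server.pid under dataDir
PID_FILE=${PID_FILE:-/usr/local/zookeeper-3.4.14/data/zookeeper_server.pid}
if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
    echo "zookeeper appears to be running (pid $(cat "$PID_FILE"))"
else
    echo "no live zookeeper process recorded in $PID_FILE"
fi
```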

5. zkCli reports 127.0.0.1:2181: Connection refused

  • Connect with the zk client using the following command
zkCli.sh
  • After it started, the following error appeared:
2020-08-17 15:14:34,655 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1025] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-08-17 15:14:34,656 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1162] - Socket error occurred: localhost/127.0.0.1:2181: Connection refused
  • The strange thing is that clientPort is configured as 2330 everywhere, so why does the client still try the default port 2181?
  • After checking around, I found that others pass the clientPort explicitly with the -server parameter.
zkCli.sh -server localhost:2181
  • After restarting, I could likewise reach a node by specifying its IP and port myself. Access was fine:
zkCli.sh -server 10.101.30.63:2330



Origin blog.csdn.net/u014454538/article/details/108050520