ZooKeeper Series (1): Building a ZooKeeper Standalone and Pseudo-Cluster Environment

There are three main ZooKeeper installation modes:

  1. Standalone mode: a single ZooKeeper service

  2. Pseudo-cluster mode: multiple ZooKeeper services on a single machine

  3. Cluster mode: ZooKeeper services spread across multiple machines

1 Standalone Mode Installation

Download ZooKeeper from the official site: http://zookeeper.apache.org/releases.html#download

On the download page, note that unless you enjoy surprises, you should pick a stable release; installing a non-stable version can lead to all kinds of unknown exceptions.

This tutorial uses version 3.4.14 as an example, installed on a CentOS system. Before I wrote this guide, a reader asked that the installation steps be as detailed as possible, including which installation paths to use, so the tutorial can be followed step by step. A small request, and happy to oblige!

 

1.1 Download the installation package

Enter the following command:

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz


1.2 Extract the installation package

tar -zxvf zookeeper-3.4.14.tar.gz

After extraction completes, move the extracted directory into /usr:

 mv zookeeper-3.4.14 /usr/

(If your extracted directory is named apache-zookeeper-3.4.14, rename it to zookeeper-3.4.14 so the paths below match.)

At this point you can see the ZooKeeper directory structure:

[root@instance-e5cf5719 zookeeper-3.4.14]# ls
bin        data             ivy.xml      logs        README.md             zookeeper-3.4.14.jar      zookeeper-3.4.14.jar.sha1  zookeeper-docs  zookeeper-recipes
build.xml  dist-maven       lib          NOTICE.txt  README_packaging.txt  zookeeper-3.4.14.jar.asc  zookeeper-client           zookeeper-it    zookeeper-server
conf       ivysettings.xml  LICENSE.txt  pom.xml     src                   zookeeper-3.4.14.jar.md5  zookeeper-contrib          zookeeper-jute
  • bin directory: zk executable scripts, including scripts for the zk server process, zk clients, and more. The .sh scripts are for Linux environments; the .cmd scripts are for Windows.

  • conf directory: configuration files. zoo_sample.cfg is the sample configuration file; you need to copy and rename it, usually to zoo.cfg. log4j.properties is the logging configuration file.

1.3 Configure zoo.cfg

Enter the /usr/zookeeper-3.4.14/conf directory, where you will find zoo_sample.cfg. This is the sample configuration file; copy it for your own use, conventionally naming it zoo.cfg:

cp zoo_sample.cfg zoo.cfg

You can then view the contents of zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

It looks intimidating at first, but after removing the comments only a few lines remain:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
    • tickTime=2000: the "tick time", i.e. the heartbeat interval, in milliseconds. The default is 2000, i.e. one heartbeat every two seconds.

    • tickTime is the basic time unit for heartbeats between a client and a server, or between servers; a heartbeat is sent every tickTime.

    • The heartbeat serves to monitor each machine's working state and to pace the communication between followers and the leader; by default the session heartbeat interval between leader and follower is twice the tick time, i.e. 2 * tickTime.

  • initLimit=10: during startup, a follower synchronizes all the latest data from the leader before it can determine its initial state and serve external requests; the leader allows each follower to complete this work within initLimit ticks. The default is 10, i.e. 10 * tickTime. Normally this default need not be changed, but as the amount of data managed by the ZooKeeper cluster grows, the time a starting follower needs to synchronize from the leader grows correspondingly; if synchronization cannot complete within the window, increase this parameter appropriately.

  • syncLimit=5: the maximum heartbeat delay between the leader and a follower. In a ZooKeeper cluster, the leader uses heartbeats to check that every follower node is alive. The default is 5, i.e. 5 * tickTime.

  • dataDir=/tmp/zookeeper: the default directory where the ZooKeeper server stores snapshot files. Files under /tmp may be deleted automatically and are easily lost, so change this to a dedicated directory.

  • clientPort=2181: the ZooKeeper server port that clients connect to. ZooKeeper listens on this port and accepts client requests.
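To make the tick arithmetic concrete, here is a small shell sketch that computes the derived time windows from the default zoo.cfg values above:

```shell
# Derived timing windows for the default zoo.cfg values above (milliseconds).
tickTime=2000
initLimit=10
syncLimit=5

min_session=$((2 * tickTime))          # minimum client session timeout: 2 * tickTime
init_window=$((initLimit * tickTime))  # time a starting follower has to sync from the leader
sync_window=$((syncLimit * tickTime))  # max leader<->follower heartbeat delay

echo "min_session=${min_session}ms init_window=${init_window}ms sync_window=${sync_window}ms"
```

With the defaults this prints min_session=4000ms init_window=20000ms sync_window=10000ms, which is why a follower that needs more than 20 seconds to catch up requires a larger initLimit.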

Tip: learn to read the official documentation and get first-hand information. Although it is in English, the vocabulary and grammar are relatively simple and easy to follow.
The official site describes these settings as follows:

  • tickTime : the basic time unit in milliseconds used by ZooKeeper. It is used to do heartbeats and the minimum session timeout will be twice the tickTime.

  • dataDir : the location to store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.

  • clientPort : the port to listen for client connections

Create data and logs directories under the zookeeper-3.4.14 directory:

[root@instance-e5cf5719 zookeeper-3.4.14]# mkdir data
[root@instance-e5cf5719 zookeeper-3.4.14]# mkdir logs

The official documentation notes this as well: in production, ZooKeeper runs for a long time, so its storage (dataDir and logs) needs dedicated locations. The data folder stores the in-memory database snapshots; in a cluster, the myid file is also kept in this folder.

For long running production systems ZooKeeper storage must be managed externally (dataDir and logs).

The modified zoo.cfg looks as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# dataDir=/tmp/zookeeper
# data directory
dataDir=/usr/zookeeper-3.4.14/data
# log directory
dataLogDir=/usr/zookeeper-3.4.14/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
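The edit above can also be scripted. The sketch below works against a scratch copy of the config so it can run anywhere; for a real install, point ZK_HOME at /usr/zookeeper-3.4.14 and skip the scratch setup (the sample file contents here are abbreviated):

```shell
# Scratch stand-in for /usr/zookeeper-3.4.14 so this sketch is self-contained.
ZK_HOME=$(mktemp -d)
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data" "$ZK_HOME/logs"
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' \
  > "$ZK_HOME/conf/zoo_sample.cfg"

# The actual edit: copy the sample, repoint dataDir, add dataLogDir.
cp "$ZK_HOME/conf/zoo_sample.cfg" "$ZK_HOME/conf/zoo.cfg"
sed -i "s|^dataDir=.*|dataDir=$ZK_HOME/data|" "$ZK_HOME/conf/zoo.cfg"
echo "dataLogDir=$ZK_HOME/logs" >> "$ZK_HOME/conf/zoo.cfg"

grep -E '^(dataDir|dataLogDir)=' "$ZK_HOME/conf/zoo.cfg"
```

Note that `sed -i` as written is the GNU sed form, which is what CentOS ships.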

1.4 Start ZooKeeper

Enter ZooKeeper's bin directory:

[root@instance-e5cf5719 zookeeper-3.4.14]# cd bin/
[root@instance-e5cf5719 bin]# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh  zkTxnLogToolkit.cmd  zkTxnLogToolkit.sh  zookeeper.out
  • zkCleanup.sh: cleans up ZooKeeper's historical data, including transaction log files and snapshot files

  • zkCli.sh: command-line client for connecting to a ZooKeeper server

  • zkEnv.sh: sets environment variables

  • zkServer.sh: starts and stops the ZooKeeper server

Start ZooKeeper:

./zkServer.sh start

If startup succeeds, you will see output ending in "STARTED".

You can then view ZooKeeper's status:

 

./zkServer.sh status

In standalone mode, the status output reports Mode: standalone.

You can also view the help for ./zkServer.sh:

 

[root@instance-e5cf5719 bin]# ./zkServer.sh help
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
  • start: start the ZooKeeper server in the background

  • start-foreground: start the server in the foreground

  • stop: stop the server

  • restart: restart the server

  • status: get the server's status

  • upgrade: upgrade the server

  • print-cmd: print the ZooKeeper startup command line and related parameters

1.5 ZooKeeper client connection

Connect:

./zkCli.sh -server 127.0.0.1:2181

That is:

./zkCli.sh -server <ip>:<port>

After connecting you get an interactive prompt. Use help to list the available commands:

 

[zk: 127.0.0.1:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
    stat path [watch]
    set path data [version]
    ls path [watch]
    delquota [-n|-b] path
    ls2 path [watch]
    setAcl path acl
    setquota -n|-b val path
    history 
    redo cmdno
    printwatches on|off
    delete path [version]
    sync path
    listquota path
    rmr path
    get path [watch]
    create [-s] [-e] path data acl
    addauth scheme auth
    quit 
    getAcl path
    close 
    connect host:port

Commonly used commands:

  • help: list all available commands

  • stat: check a node's status, i.e. whether the node exists

  • set: update a node's data

  • get: read a node's data

  • ls path [watch]: list the children of the given znode

  • create: create a znode; -s makes it sequential, -e makes it ephemeral (it disappears when the creating session ends)

  • delete: delete a node

  • rmr: recursively delete a node

Let's run some simple tests with these commands. Create a new znode (run create /zk_test my_data), which associates the string "my_data" with the node:

[zk: 127.0.0.1:2181(CONNECTED) 1] create /zk_test my_data
Created /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zookeeper, zk_test]

We can see zk_test was created successfully. You can view the data in the zk_test node with the get command:

[zk: 127.0.0.1:2181(CONNECTED) 3] get /zk_test
my_data
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x7
mtime = Thu Dec 05 16:32:20 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0

The set command modifies the data stored in zk_test:

[zk: 127.0.0.1:2181(CONNECTED) 4] set /zk_test junk
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x8
mtime = Thu Dec 05 16:37:03 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
[zk: 127.0.0.1:2181(CONNECTED) 5] get /zk_test
junk
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x8
mtime = Thu Dec 05 16:37:03 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0

The delete command removes a node:

[zk: 127.0.0.1:2181(CONNECTED) 6] delete /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 7] ls /
[zookeeper]

2 Pseudo-Cluster Setup

We will set up three ZooKeeper instances to build a pseudo-cluster. We already have zookeeper-3.4.14; copy it twice, naming the copies zookeeper-3.4.14-1 and zookeeper-3.4.14-2.

[root@instance-e5cf5719 usr]# cp -r zookeeper-3.4.14 zookeeper-3.4.14-1
[root@instance-e5cf5719 usr]# cp -r zookeeper-3.4.14 zookeeper-3.4.14-2

At this point the three ZooKeeper directories are exactly the same. To build a pseudo-cluster, each ZooKeeper's configuration file needs a few modifications.

Modify /conf/zoo.cfg in each of the three ZooKeeper directories. Three things must change: the client port, the data/log paths, and the cluster configuration.

In each zoo.cfg, add a group of server entries declaring that the ZooKeeper cluster has three nodes. A server entry has the following format:

server.<myid>=<IP>:<Port1>:<Port2>
  • myid: the node's number, an integer in the range 1-255, which must be unique within the cluster.

  • IP: the IP address of the node, e.g. 127.0.0.1 or localhost in a local environment.

  • Port1: the port the follower uses to synchronize data and exchange heartbeats with the leader.

  • Port2: the port used for voting communication during leader election.

In a pseudo-cluster configuration the IP is the same for every node, so different ZooKeeper instances cannot share communication ports; each must be assigned its own port numbers.

In each ZooKeeper directory, create a myid file under /data containing only that server's number (1, 2, or 3 respectively).
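The per-instance changes can be sketched as a loop. The exact ports from the original screenshots are not reproduced here, so the values below are assumptions: client ports 2181-2183 and sync/election ports 2888:3888, 2889:3889, 2890:3890; for brevity the three copies are numbered -1 to -3. The sketch writes into a scratch directory so it can run anywhere; for real use, set BASE to /usr and reuse your existing copies:

```shell
# BASE stands in for /usr; a scratch dir keeps this sketch self-contained.
BASE=$(mktemp -d)

for i in 1 2 3; do
  dir="$BASE/zookeeper-3.4.14-$i"
  mkdir -p "$dir/conf" "$dir/data" "$dir/logs"
  # Each instance gets its own client port and data/log paths,
  # but an identical three-node cluster section.
  cat > "$dir/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$dir/data
dataLogDir=$dir/logs
clientPort=$((2180 + i))
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF
  # myid must match this instance's server.<myid> entry in zoo.cfg.
  echo "$i" > "$dir/data/myid"
done

grep clientPort "$BASE"/zookeeper-3.4.14-*/conf/zoo.cfg
```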

Start the three ZooKeeper services separately (for example, in three terminal windows).

The results are as follows:

  • zookeeper-3.4.14

[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
  • zookeeper-3.4.14-1

[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Mode: leader
  • zookeeper-3.4.14-2

[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Mode: follower

Looking at the status output, zookeeper-3.4.14-1 is the leader, while zookeeper-3.4.14 and zookeeper-3.4.14-2 are followers.

You can refer to the official website architecture diagram to aid understanding.


Stop zookeeper-3.4.14-1 to observe the re-election of the leader.

[root@instance-e5cf5719 bin]# ./zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED

View the status of zookeeper-3.4.14 and zookeeper-3.4.14-2, respectively:

[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Mode: leader

We can see that zookeeper-3.4.14-2 has become the leader.

3 Cluster Setup

Setting up a real cluster is very similar to setting up a pseudo-cluster; the difference is that a cluster deploys ZooKeeper on different machines, while a pseudo-cluster deploys everything on one machine. When modifying /conf/zoo.cfg, the machines (and hence IPs) differ, so the port numbers do not need to be changed. Apart from this, the setup is exactly the same as the pseudo-cluster, so it will not be repeated here.
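For reference, in a real cluster the server entries use each machine's own address with identical ports; a sketch (the IPs below are placeholders, not from the original tutorial):

```
server.1=192.168.0.101:2888:3888
server.2=192.168.0.102:2888:3888
server.3=192.168.0.103:2888:3888
```

Each machine still needs its own myid file under dataDir matching its server.<myid> entry.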

4 Summary

So far we have built ZooKeeper standalone and pseudo-cluster environments. In production, to ensure ZooKeeper's high availability, be sure to set up a real cluster.

     


    Origin www.cnblogs.com/AllIhave/p/12048026.html