Zookeeper distributed cluster deployment

After the Hadoop/HBase cluster deployment was complete, I started testing Sleuthkit and found that tpkickoff.sh kept reporting a ZooKeeper connection error at runtime: "Session 0x0 for server ...". Most of the information online says the problem lies in DNS resolution and that the /etc/hosts file needs to be modified, but that had already been set up before the distributed deployment, so the problem should not be there. So could this strange problem be caused by the ZooKeeper instance that ships with HBase? While I am not entirely clear on the internal principle, I decided to install a standalone ZooKeeper and try.

My platform:
--hadoop-1.1.2
--hbase-0.90.0
--zookeeper-3.4.5
--jdk-1.6

Now, let's install zookeeper-3.4.5!

First, download ZooKeeper 3.4.5 from the official website: http://www.apache.org/dyn/closer.cgi/zookeeper/

Second, put the downloaded zookeeper-3.4.5.tar.gz into the unified platform directory /home/hadoop/platform/ and unpack it with tar.

Third, set the environment variables. To make it convenient to run the zkServer.sh script, add ZooKeeper's bin path to /etc/profile so it is exported into PATH as a global variable. Remember to run "source /etc/profile" after editing for the changes to take effect:

ZOOKEEPER_HOME=/home/hadoop/platform/zookeeper-3.4.5
export ZOOKEEPER_HOME
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

Then use scp to copy the modified /etc/profile to each node.

Fourth, create the ZooKeeper configuration file (you can configure it on one node first and copy it directly to the other nodes later). Go into zookeeper/conf and copy zoo_sample.cfg to zoo.cfg:
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home/hadoop/platform/zookeeper-data
# the port at which the clients will connect
clientPort=2181
# the nodes participating in the ensemble
server.1=master:2888:3888
server.2=node1:2888:3888
server.3=node2:2888:3888
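A quick sanity check on the timing values above: initLimit and syncLimit are counted in ticks of tickTime milliseconds, so this config gives followers 20 seconds to connect and sync with the leader at startup, and allows at most 10 seconds of lag afterwards. A minimal sketch of the arithmetic (the variable names here are mine, not ZooKeeper's):

```shell
# Limits in zoo.cfg are counted in ticks of tickTime milliseconds.
TICK_MS=2000     # tickTime
INIT_LIMIT=10    # initLimit: window for followers to connect and sync at startup
SYNC_LIMIT=5     # syncLimit: max allowed lag between a follower and the leader
echo "init window: $((TICK_MS * INIT_LIMIT)) ms"   # 20000 ms
echo "sync window: $((TICK_MS * SYNC_LIMIT)) ms"   # 10000 ms
```

If followers sit behind slow links or disks, raising initLimit is usually the first knob to turn.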
Port 2181 is the client port; dataDir is set to the directory where ZooKeeper stores its coordination data; and the server.X lines at the end list the nodes in the cluster. The next step is to create a myid file under dataDir on each node, containing nothing but that node's "X" number from its server.X entry. For example, the myid file on the master node contains only the number 1.

Fifth, copy zookeeper-3.4.5 to each of the other nodes. Here I used "scp -r zookeeper-3.4.5/ hadoop@node1:" to do it. Note that the myid file under dataDir on each node must be changed to that node's own server.X number.

Sixth, run zkServer.sh. Execute the "zkServer.sh start" command on all nodes.

Note that when the first node is started, ZooKeeper on the other nodes in the cluster is not up yet, so using the "zkServer.sh status" command to view the current state will report an error. But as ZooKeeper is started on the subsequent nodes, the status check shows each node's current role; on my cluster the master ended up as the leader.

With everything configured I reran tpkickoff.sh, and at last it ran without the ZooKeeper error reported before.
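The server.X / myid bookkeeping above is easy to get wrong when copying the installation from node to node. Below is a hypothetical helper sketch (the hostnames follow the config above, the local paths are stand-ins, and the script itself is not part of ZooKeeper) that derives a host's myid from the server.X lines of zoo.cfg and writes it into dataDir:

```shell
# Hypothetical sketch: derive this host's myid from zoo.cfg and write it
# into dataDir. Paths are local stand-ins for the cluster paths used above.
DATA=./zookeeper-data          # stand-in for /home/hadoop/platform/zookeeper-data
ZOO_CFG=./zoo.cfg

# For the sketch, recreate the server list from the zoo.cfg above.
cat > "$ZOO_CFG" <<'EOF'
server.1=master:2888:3888
server.2=node1:2888:3888
server.3=node2:2888:3888
EOF

host=node1                      # in practice: host=$(hostname)
# "server.2=node1:2888:3888" -> take "2" between the first "." and "=".
myid=$(grep "=${host}:" "$ZOO_CFG" | cut -d. -f2 | cut -d= -f1)

mkdir -p "$DATA"
echo "$myid" > "$DATA/myid"     # server.2=node1, so myid contains 2
cat "$DATA/myid"                # prints: 2
```

Running this on each node (with host set from hostname) keeps myid consistent with zoo.cfg automatically instead of editing it by hand after every scp.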

Reproduced from: https://my.oschina.net/766/blog/211149

Origin: blog.csdn.net/weixin_34375233/article/details/91547548