One article to take you through it: building a Hadoop + HBase + Zookeeper cluster on CentOS 7


One, prepare the basic environment

1. Download the installation packages (all use the latest stable versions, current as of May 24, 2017)

-- Download jdk-8u131
# wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz

-- Download hadoop-2.7.3
# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

-- Download hbase-1.2.5
# wget http://mirror.bit.edu.cn/apache/hbase/1.2.5/hbase-1.2.5-bin.tar.gz

-- Download zookeeper-3.4.10
# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz

2. Modify the hosts file (the three cluster hosts use the IP addresses 192.168.0.100, 192.168.0.101, and 192.168.0.102)

# vim /etc/hosts

-- Add the following entries (required on master, slave1, and slave2)

192.168.0.100 master
192.168.0.101 slave1
192.168.0.102 slave2
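
A quick, optional sanity check (an addition to the original steps): confirm that each hostname now resolves and is reachable before continuing.

-- Verify name resolution from master (repeat from slave1 and slave2)
# ping -c 1 master
# ping -c 1 slave1
# ping -c 1 slave2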

3. Install the JDK

-- Extract the JDK installation package
# mkdir /usr/java
# tar -zxvf jdk-8u131-linux-x64.tar.gz -C /usr/java

-- Copy the JDK to slave1 and slave2
# scp -r /usr/java slave1:/usr
# scp -r /usr/java slave2:/usr

-- Set the JDK environment variables (required on master, slave1, and slave2)
# vim /etc/environment
JAVA_HOME=/usr/java/jdk1.8.0_131
JRE_HOME=/usr/java/jdk1.8.0_131/jre

# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
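
After reloading the profile, a quick check (not part of the original steps) confirms the JDK is actually picked up; the output should report version 1.8.0_131.

-- Reload the profile and verify the JDK
# source /etc/profile
# java -version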

4. Set up passwordless SSH login

slave1

# ssh-keygen -t rsa
# cp ~/.ssh/id_rsa.pub ~/.ssh/slave1_id_rsa.pub
# scp ~/.ssh/slave1_id_rsa.pub master:~/.ssh/

slave2

# ssh-keygen -t rsa
# cp ~/.ssh/id_rsa.pub ~/.ssh/slave2_id_rsa.pub
# scp ~/.ssh/slave2_id_rsa.pub master:~/.ssh/

master

# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# cat ~/.ssh/slave1_id_rsa.pub >> ~/.ssh/authorized_keys
# cat ~/.ssh/slave2_id_rsa.pub >> ~/.ssh/authorized_keys

-- Copy the file to slave1 and slave2
# scp ~/.ssh/authorized_keys slave1:~/.ssh
# scp ~/.ssh/authorized_keys slave2:~/.ssh
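
With the combined authorized_keys distributed, it is worth verifying (an extra check, not in the original) that logins now work without a password prompt:

-- From master, these should return the remote hostname without asking for a password
# ssh slave1 hostname
# ssh slave2 hostname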

5. Turn off the firewall and SELinux (required on master, slave1, and slave2)

-- Turn off the firewall
# systemctl stop firewalld.service
# systemctl disable firewalld.service

-- Disable SELinux
# vim /etc/selinux/config
-- Comment out these lines
#SELINUX=enforcing
#SELINUXTYPE=targeted
-- Add this line
SELINUX=disabled
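
Editing /etc/selinux/config only takes effect after a reboot. To relax SELinux in the running session as well (an extra step, not shown in the original), switch it to permissive mode and confirm:

-- Apply immediately without a reboot, then check the current mode
# setenforce 0
# getenforce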

 

Two, set up the Hadoop environment

1. Extract the installation package and create the basic directories

# tar -zxvf hadoop-2.7.3.tar.gz -C /usr
# cd /usr/hadoop-2.7.3
# mkdir tmp logs hdf hdf/data hdf/name

2. Modify the Hadoop configuration files

-- Edit the slaves file
# vim /usr/hadoop-2.7.3/etc/hadoop/slaves
-- Delete localhost and add
slave1
slave2


-- Edit the core-site.xml file
# vim /usr/hadoop-2.7.3/etc/hadoop/core-site.xml
-- Add the following inside the configuration element
<property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/hadoop-2.7.3/tmp</value>
</property>


-- Edit the hdfs-site.xml file
# vim /usr/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
-- Add the following inside the configuration element
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/hadoop-2.7.3/hdf/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/hadoop-2.7.3/hdf/name</value>
    <final>true</final>
</property>
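
One optional addition not present in the original hdfs-site.xml: the default block replication factor is 3, but this cluster has only two DataNodes (slave1 and slave2), so blocks will be reported as under-replicated. If that matters to you, dfs.replication can be lowered to match:

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>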


-- Edit the mapred-site.xml file
# cp /usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template /usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml
# vim /usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml
-- Add the following inside the configuration element
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>
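
Note that the two jobhistory addresses above only take effect if the MapReduce JobHistory server is actually running; start-all.sh does not start it. If you want the history UI on master:19888, it can be started separately on master (a step not included in the original):

-- Start the MapReduce JobHistory server on master
# /usr/hadoop-2.7.3/sbin/mr-jobhistory-daemon.sh start historyserver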

-- Edit the yarn-site.xml file
# vim /usr/hadoop-2.7.3/etc/hadoop/yarn-site.xml
-- Add the following inside the configuration element
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>
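
The snippet above configures the shuffle handler class but never declares the auxiliary service itself, and without that declaration MapReduce jobs will fail during the shuffle phase. In Hadoop 2.7 the service is normally enabled with the following additional property (an addition to the configuration shown in the original):

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>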

3. Copy Hadoop to the slave nodes

# scp -r /usr/hadoop-2.7.3 slave1:/usr
# scp -r /usr/hadoop-2.7.3 slave2:/usr

4. Configure the Hadoop environment variables on master and the slaves

# vim /etc/profile
-- Add the following
export HADOOP_HOME=/usr/hadoop-2.7.3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_LOG_DIR=/usr/hadoop-2.7.3/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR

-- After saving, run
# source /etc/profile

# vim ~/.bashrc
-- Add the following
export HADOOP_PREFIX=/usr/hadoop-2.7.3/

-- After saving, run
# source ~/.bashrc
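
A quick way to confirm the variables are picked up on each node (an optional check, not in the original) is to ask for the Hadoop version:

-- Verify that the hadoop command is on the PATH
# hadoop version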

5. Format the NameNode

# /usr/hadoop-2.7.3/bin/hdfs namenode -format

6. Start Hadoop (run on the master node only)

# ssh master
# /usr/hadoop-2.7.3/sbin/start-all.sh
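
To confirm the daemons actually came up (an extra check, not in the original post), run jps on each node; with this configuration master typically shows NameNode, SecondaryNameNode and ResourceManager, while the slaves show DataNode and NodeManager.

-- Check running Java processes on master
# jps
-- Check the slaves from master
# ssh slave1 jps
# ssh slave2 jps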

At this point the Hadoop environment has been set up successfully.

Three, set up the Zookeeper environment

1. Extract the Zookeeper installation package on master and create the basic directory

# tar -zxvf zookeeper-3.4.10.tar.gz -C /usr
# mkdir /usr/zookeeper-3.4.10/data

2. Modify the configuration file on master

-- Copy the configuration file template
# cp /usr/zookeeper-3.4.10/conf/zoo_sample.cfg /usr/zookeeper-3.4.10/conf/zoo.cfg

-- Modify the configuration file
# vim /usr/zookeeper-3.4.10/conf/zoo.cfg
-- Add or update the following entries
dataDir=/usr/zookeeper-3.4.10/data
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
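
For reference (not spelled out in the original), zoo_sample.cfg already supplies the other required settings, so only dataDir needs to be changed and the server list appended; the defaults it carries are:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181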

3. Copy Zookeeper to the slave nodes

# scp -r /usr/zookeeper-3.4.10 slave1:/usr
# scp -r /usr/zookeeper-3.4.10 slave2:/usr

4. Create the myid files

-- Add the myid file on the master node
# ssh master
# touch /usr/zookeeper-3.4.10/data/myid
# echo 1 > /usr/zookeeper-3.4.10/data/myid

-- Add the myid file on the slave1 node
# ssh slave1
# touch /usr/zookeeper-3.4.10/data/myid
# echo 2 > /usr/zookeeper-3.4.10/data/myid

-- Add the myid file on the slave2 node
# ssh slave2
# touch /usr/zookeeper-3.4.10/data/myid
# echo 3 > /usr/zookeeper-3.4.10/data/myid

5. Start Zookeeper (must be run on master, slave1, and slave2)

-- Start on master
# ssh master
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh start

-- Start on slave1
# ssh slave1
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh start

-- Start on slave2
# ssh slave2
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh start
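
Once all three nodes are up, the ensemble can be verified (an optional check, not in the original) with the status command on each node; one node should report itself as leader and the other two as followers.

-- Check the role of the local node
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh status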

With this step, the Zookeeper environment setup is complete.

Four, set up the HBase environment

1. Extract the HBase installation package

# tar -zxvf hbase-1.2.5-bin.tar.gz -C /usr
# mkdir /usr/hbase-1.2.5/logs

2. Modify the environment variables HBase uses at startup (hbase-env.sh)

-- Open the environment variable configuration file
# vim /usr/hbase-1.2.5/conf/hbase-env.sh

-- Add the following
-- 1. Set the Java installation path
export JAVA_HOME=/usr/java/jdk1.8.0_131
-- 2. Set the HBase log directory
export HBASE_LOG_DIR=${HBASE_HOME}/logs
-- 3. Set whether HBase manages its own Zookeeper (we use the external Zookeeper ensemble set up above, so this is false)
export HBASE_MANAGES_ZK=false
-- 4. Set the directory where HBase stores its pid files
export HBASE_PID_DIR=/var/hadoop/pids
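
The pid directory configured above does not necessarily exist yet; depending on the HBase version the startup scripts may not create it, so it is safest to create it on every node beforehand (a step implied but not shown in the original):

-- Create the pid directory on master, slave1, and slave2
# mkdir -p /var/hadoop/pids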

3. Add all region servers to the regionservers file

-- Open the regionservers configuration file
# vim /usr/hbase-1.2.5/conf/regionservers

-- Delete localhost and add the following
master
slave1
slave2

Note: when HBase starts or stops, it iterates over each line of this file and starts or stops the corresponding region server process.

4. Modify the HBase cluster configuration (hbase-site.xml); these settings override HBase's defaults

-- Open the configuration file
# vim /usr/hbase-1.2.5/conf/hbase-site.xml


-- Add the following inside the configuration element

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/zookeeper-3.4.10/data</value>
</property>
<property>
    <name>hbase.master</name>
    <value>master:60000</value>
</property>

5. Copy HBase to the slave nodes

# scp -r /usr/hbase-1.2.5 slave1:/usr
# scp -r /usr/hbase-1.2.5 slave2:/usr

6. Start HBase (run on the master node only)

# ssh master
# /usr/hbase-1.2.5/bin/start-hbase.sh
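
To confirm the cluster is healthy (an additional check, not in the original), jps should now show an HMaster process on master and an HRegionServer on every host listed in regionservers, and the status command in the HBase shell should report the same number of live servers:

-- Check the running processes
# jps

-- Check cluster status from the HBase shell
# /usr/hbase-1.2.5/bin/hbase shell
hbase(main):001:0> status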

At this point the HBase environment setup is complete.


