Building a Hadoop + HBase + ZooKeeper cluster on CentOS 7

 

I. Basic environment preparation

1. Download the installation packages (all the latest stable versions as of May 24, 2017)

-- Download jdk-8u131
# wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz

-- Download hadoop-2.7.3
# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

-- Download hbase-1.2.5
# wget http://mirror.bit.edu.cn/apache/hbase/1.2.5/hbase-1.2.5-bin.tar.gz

-- Download zookeeper-3.4.10
# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz

2. Modify the hosts file (the three cluster hosts use the IPs 192.168.0.100, 192.168.0.101, and 192.168.0.102)

# vim /etc/hosts

-- Add the following entries (required on master, slave1, and slave2)

192.168.0.100 master
192.168.0.101 slave1
192.168.0.102 slave2
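
A quick sanity check that the hosts file took effect: from each node, the other two should resolve by name (shown here from master). If this fails, SSH and the cluster services below will fail too.

# ping -c 1 slave1
# ping -c 1 slave2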

3. Install JDK

-- Extract the JDK archive
# mkdir /usr/java
# tar -zxvf jdk-8u131-linux-x64.tar.gz -C /usr/java

-- Copy the JDK to slave1 and slave2
# scp -r /usr/java slave1:/usr
# scp -r /usr/java slave2:/usr

-- Set the JDK environment variables (required on master, slave1, and slave2)
# vim /etc/environment
JAVA_HOME=/usr/java/jdk1.8.0_131
JRE_HOME=/usr/java/jdk1.8.0_131/jre

# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
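
After reloading the profile, each node should pick up the new JDK; a quick check:

# source /etc/profile
# java -version
-- should report java version "1.8.0_131"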

4. Set up password-free SSH login

-- On slave1

# ssh-keygen -t rsa
# cp ~/.ssh/id_rsa.pub ~/.ssh/slave1_id_rsa.pub
# scp ~/.ssh/slave1_id_rsa.pub master:~/.ssh/

-- On slave2

# ssh-keygen -t rsa
# cp ~/.ssh/id_rsa.pub ~/.ssh/slave2_id_rsa.pub
# scp ~/.ssh/slave2_id_rsa.pub master:~/.ssh/

-- On master

# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# cat ~/.ssh/slave1_id_rsa.pub >> ~/.ssh/authorized_keys
# cat ~/.ssh/slave2_id_rsa.pub >> ~/.ssh/authorized_keys

-- Copy the merged authorized_keys file to slave1 and slave2
# scp ~/.ssh/authorized_keys slave1:~/.ssh
# scp ~/.ssh/authorized_keys slave2:~/.ssh
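
With the merged key file in place on all three nodes, any node should now reach any other without a password prompt; a quick test from master:

# ssh slave1 hostname
# ssh slave2 hostname
-- both should print the remote hostname without asking for a password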

5. Turn off the firewall and SELinux (required on master, slave1, and slave2)

-- Stop and disable the firewall
# systemctl stop firewalld.service
# systemctl disable firewalld.service

-- Disable SELinux
# vim /etc/selinux/config
-- Comment out
#SELINUX=enforcing
#SELINUXTYPE=targeted
-- Add
SELINUX=disabled
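
Note that /etc/selinux/config is only read at boot. To stop SELinux enforcement in the current session as well, without waiting for a reboot:

# setenforce 0
# getenforce
-- should report Permissive (or Disabled after a reboot)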

 

II. Hadoop environment setup

1. Unzip the installation package and create the base directories

# tar -zxvf hadoop-2.7.3.tar.gz -C /usr
# cd /usr/hadoop-2.7.3
# mkdir tmp logs hdf hdf/data hdf/name

2. Modify the hadoop configuration files

-- Edit the slaves file
# vim /usr/hadoop-2.7.3/etc/hadoop/slaves
-- Remove localhost and add
slave1
slave2


-- Edit the core-site.xml file
# vim /usr/hadoop-2.7.3/etc/hadoop/core-site.xml
-- Add the following inside the configuration element
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/hadoop-2.7.3/tmp</value>
</property>


-- Edit the hdfs-site.xml file
# vim /usr/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
-- Add the following inside the configuration element
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/hadoop-2.7.3/hdf/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/hadoop-2.7.3/hdf/name</value>
    <final>true</final>
</property>


-- Create mapred-site.xml from the template and edit it
# cp /usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template /usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml
# vim /usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml
-- Add the following inside the configuration element
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>

-- Edit the yarn-site.xml file
# vim /usr/hadoop-2.7.3/etc/hadoop/yarn-site.xml
-- Add the following inside the configuration element
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>

3. Copy hadoop to the slave nodes

# scp -r /usr/hadoop-2.7.3 slave1:/usr
# scp -r /usr/hadoop-2.7.3 slave2:/usr

4. Configure hadoop environment variables on master and slaves

# vim /etc/profile
-- Add the following
export HADOOP_HOME=/usr/hadoop-2.7.3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_LOG_DIR=/usr/hadoop-2.7.3/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR

-- Save, then apply the changes
# source /etc/profile

# vim ~/.bashrc
-- Add the following
export HADOOP_PREFIX=/usr/hadoop-2.7.3/

-- Save, then apply the changes
# source ~/.bashrc
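
With the variables loaded, the hadoop command should resolve from any directory; a quick check on each node:

# hadoop version
-- should print Hadoop 2.7.3 plus build details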

5. Format the namenode (on the master node)

# /usr/hadoop-2.7.3/bin/hdfs namenode -format

6. Start hadoop (execute on the master node only)

# ssh master
# /usr/hadoop-2.7.3/sbin/start-all.sh
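
If startup succeeded, jps shows the daemons distributed across the cluster, and the web UIs come up on the Hadoop 2.7 default ports:

# jps
-- on master: NameNode, SecondaryNameNode, ResourceManager
# ssh slave1 jps
-- on each slave: DataNode, NodeManager

-- NameNode web UI: http://master:50070, YARN web UI: http://master:8088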

At this point, the Hadoop environment setup is complete.

III. ZooKeeper environment setup

1. Unzip the zookeeper installation package on the master and create the data directory

# tar -zxvf zookeeper-3.4.10.tar.gz -C /usr
# mkdir /usr/zookeeper-3.4.10/data

2. Modify the configuration file on the master

-- Copy the sample configuration file
# cp /usr/zookeeper-3.4.10/conf/zoo_sample.cfg /usr/zookeeper-3.4.10/conf/zoo.cfg

-- Edit the configuration file
# vim /usr/zookeeper-3.4.10/conf/zoo.cfg
-- Set dataDir and add the server list
dataDir=/usr/zookeeper-3.4.10/data
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

3. Copy zookeeper to the slave nodes

# scp -r /usr/zookeeper-3.4.10 slave1:/usr
# scp -r /usr/zookeeper-3.4.10 slave2:/usr

4. Create the myid file (the number must match the server.N id in zoo.cfg)

-- Create the myid file on master
# ssh master
# touch /usr/zookeeper-3.4.10/data/myid
# echo 1 > /usr/zookeeper-3.4.10/data/myid

-- Create the myid file on slave1
# ssh slave1
# touch /usr/zookeeper-3.4.10/data/myid
# echo 2 > /usr/zookeeper-3.4.10/data/myid

-- Create the myid file on slave2
# ssh slave2
# touch /usr/zookeeper-3.4.10/data/myid
# echo 3 > /usr/zookeeper-3.4.10/data/myid
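
Since password-free SSH is already in place (step I.4), the same three files can also be written in one pass from master; a small equivalent sketch:

# id=1
# for h in master slave1 slave2; do ssh $h "echo $id > /usr/zookeeper-3.4.10/data/myid"; id=$((id+1)); done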

5. Start zookeeper (must be executed on master, slave1, and slave2)

-- Start on master
# ssh master
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh start

-- Start on slave1
# ssh slave1
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh start

-- Start on slave2
# ssh slave2
# cd /usr/zookeeper-3.4.10/bin
# ./zkServer.sh start
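
Once all three have started, the ensemble should have elected a leader; zkServer.sh status on each node confirms it:

# /usr/zookeeper-3.4.10/bin/zkServer.sh status
-- one node reports Mode: leader, the other two Mode: follower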

At this point, the ZooKeeper environment setup is complete.

IV. HBase environment setup

1. Unzip the hbase installation package

# tar -zxvf hbase-1.2.5-bin.tar.gz -C /usr
# mkdir /usr/hbase-1.2.5/logs

2. Set the environment variables HBase uses at startup (hbase-env.sh)

-- Open the environment configuration file
# vim /usr/hbase-1.2.5/conf/hbase-env.sh

-- Add the following
-- 1. Set the java installation path
export JAVA_HOME=/usr/java/jdk1.8.0_131
-- 2. Set the hbase log directory
export HBASE_LOG_DIR=${HBASE_HOME}/logs
-- 3. Whether hbase manages its own zookeeper (we use the standalone zookeeper ensemble built above, so this must be false)
export HBASE_MANAGES_ZK=false
-- 4. Set the directory where hbase stores its pid files
export HBASE_PID_DIR=/var/hadoop/pids
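
The pid directory configured above does not exist by default (it is our choice here, not an HBase default), so create it on every node before starting HBase:

# mkdir -p /var/hadoop/pids
# ssh slave1 mkdir -p /var/hadoop/pids
# ssh slave2 mkdir -p /var/hadoop/pids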

3. Add all region servers to the regionservers file

-- Open the regionservers configuration file
# vim /usr/hbase-1.2.5/conf/regionservers

-- Remove localhost and add the following
master
slave1
slave2

Note: when starting or stopping the cluster, HBase walks through this file line by line, starting or stopping a region server process on each listed host.

4. Modify the basic HBase cluster configuration (hbase-site.xml); these settings override HBase's defaults

-- Open the configuration file
# vim /usr/hbase-1.2.5/conf/hbase-site.xml

-- Add the following inside the configuration element

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/zookeeper-3.4.10/data</value>
</property>
<property>
    <name>hbase.master</name>
    <value>master:60000</value>
</property>

5. Copy hbase to the slave nodes

# scp -r /usr/hbase-1.2.5 slave1:/usr
# scp -r /usr/hbase-1.2.5 slave2:/usr

6. Start hbase (execute on the master node only)

# ssh master
# /usr/hbase-1.2.5/bin/start-hbase.sh
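
If everything is wired up correctly, jps now additionally shows HMaster on master and an HRegionServer on every host listed in regionservers, and the master web UI is served on the HBase 1.x default port:

# jps
-- on master: HMaster, HRegionServer, plus the Hadoop and ZooKeeper (QuorumPeerMain) processes
# ssh slave1 jps
-- on each slave: HRegionServer, DataNode, NodeManager, QuorumPeerMain

-- HBase master web UI: http://master:16010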

At this point, the HBase environment setup is complete.

 

That's all for today; in a follow-up post I'll cover HBase shell operations and basic operations against HBase through the Java API.
