Hadoop 3.1.1 HA (High Availability) Distributed Cluster Installation and Deployment

1. Environment Overview

  Servers: CentOS 6.8, with 2 namenode machines and 3 datanode machines

  JDK: jdk-8u191-linux-x64.tar.gz

  Hadoop: hadoop-3.1.1.tar.gz

Node roles:

Node       IP              namenode  datanode  resourcemanager  journalnode
namenode1  192.168.67.101  √                   √
namenode2  192.168.67.102  √                   √
datanode1  192.168.67.103            √                          √
datanode2  192.168.67.104            √                          √
datanode3  192.168.67.105            √                          √

2. Configure passwordless SSH login

  2.1 Run ssh-keygen -t rsa on every machine.

  2.2 Collect the contents of ~/.ssh/id_rsa.pub from all machines into a single authorized_keys file, then distribute that file to every machine (see the sketch after this list).

  2.3 Set permissions: chmod 600 ~/.ssh/authorized_keys
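
A minimal sketch of step 2.2, run as root on namenode1 with the cluster IPs from the node table (the first loop still prompts for each node's password, since trust is not yet in place):

#gather every node's public key into one authorized_keys file on namenode1
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
for ip in 192.168.67.102 192.168.67.103 192.168.67.104 192.168.67.105; do
    ssh $ip "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
done
#push the combined file back out to every node
for ip in 192.168.67.102 192.168.67.103 192.168.67.104 192.168.67.105; do
    scp ~/.ssh/authorized_keys $ip:~/.ssh/
done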

3. Configure hosts

vim /etc/hosts

#Add the following entries
192.168.67.101 namenode1
192.168.67.102 namenode2
192.168.67.103 datanode1
192.168.67.104 datanode2
192.168.67.105 datanode3
#Distribute the hosts file to the other machines
scp /etc/hosts namenode2:/etc/hosts
scp /etc/hosts datanode1:/etc/hosts
scp /etc/hosts datanode2:/etc/hosts
scp /etc/hosts datanode3:/etc/hosts
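
Each node should now resolve every hostname; a quick check from any machine:

ping -c 1 datanode1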

4. Disable the firewall

service iptables stop
chkconfig iptables off
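
To confirm iptables is stopped and will not start on boot (CentOS 6):

service iptables status
chkconfig --list iptables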

5. Install the JDK

tar -zxvf /usr/local/soft/jdk-8u191-linux-x64.tar.gz -C /usr/local/

vim /etc/profile

#Add the JDK environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_191
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Apply the changes: source /etc/profile
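
Verify the JDK is picked up from the new PATH:

java -version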

6. Install Hadoop

tar -zxvf /usr/local/soft/hadoop-3.1.1.tar.gz -C /usr/local/
vim /etc/profile

#Add the Hadoop environment variables
export HADOOP_HOME=/usr/local/hadoop-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
Apply the changes: source /etc/profile
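
Verify the Hadoop binaries are on the PATH:

hadoop version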
#Edit start-dfs.sh and stop-dfs.sh, adding the settings below
vim /usr/local/hadoop-3.1.1/sbin/start-dfs.sh
vim /usr/local/hadoop-3.1.1/sbin/stop-dfs.sh

#Add the users the HDFS daemons run as
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
#Edit start-yarn.sh and stop-yarn.sh, adding the settings below
vim /usr/local/hadoop-3.1.1/sbin/start-yarn.sh
vim /usr/local/hadoop-3.1.1/sbin/stop-yarn.sh

#Add the users the YARN daemons run as
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=root
YARN_NODEMANAGER_USER=root
vim /usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh

#Add the following
export JAVA_HOME=/usr/local/jdk1.8.0_191
export HADOOP_HOME=/usr/local/hadoop-3.1.1
#Edit the workers file
vim /usr/local/hadoop-3.1.1/etc/hadoop/workers

#Replace its contents with the datanode hostnames, one per line:
datanode1
datanode2
datanode3
vim /usr/local/hadoop-3.1.1/etc/hadoop/core-site.xml

#Change the configuration to the following
<configuration>
    <!-- Set the HDFS nameservice to mycluster -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster/</value>
    </property>

    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-3.1.1/hdfs/temp</value>
    </property>

    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.67.1:2181</value>
    </property>
</configuration>
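
The JDK, the Hadoop directory, and the /etc/profile changes must be present on every node. A sketch of pushing them out from namenode1, assuming identical install paths on all machines:

for host in namenode2 datanode1 datanode2 datanode3; do
    scp -r /usr/local/hadoop-3.1.1 $host:/usr/local/
    scp -r /usr/local/jdk1.8.0_191 $host:/usr/local/
    scp /etc/profile $host:/etc/profile
done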


Reposted from www.cnblogs.com/lufan2008/p/10312085.html