Hadoop 3.1.3 Fully Distributed Mode Installation

Prerequisites: three CentOS 7 machines

	   JDK 1.8
	   Hadoop 3.1.3

Set a static IP on each of the three machines:

master:192.168.152.100
node1:192.168.152.101
node2:192.168.152.102

Edit the network interface configuration (this example is for master):

vi /etc/sysconfig/network-scripts/ifcfg-ens32

TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.152.100
GATEWAY=192.168.152.2
NETMASK=255.255.255.0
DNS1=8.8.8.8

systemctl restart network
systemctl stop firewalld
setenforce 0
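These commands take effect immediately but do not persist across a reboot. If you also want the firewall to stay off and SELinux to remain permissive after a restart, a sketch (standard CentOS 7 commands, assuming /etc/selinux/config currently contains SELINUX=enforcing):

systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config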

Configure passwordless SSH login

vi /etc/hosts

192.168.152.100 master
192.168.152.101 node1
192.168.152.102 node2

Copy it to the other two nodes:

scp /etc/hosts node1:/etc/hosts
scp /etc/hosts node2:/etc/hosts
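Optionally, verify from master that the hostnames now resolve before continuing:

ping -c 1 node1
ping -c 1 node2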
ssh-keygen -t rsa    (run on all three machines)
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
Copy authorized_keys to node1, where you append node1's id_rsa.pub to it and copy it on to node2.
On node2, append node2's id_rsa.pub as well, then copy the resulting authorized_keys back to the other two machines.
At this point authorized_keys on every machine contains the public keys of all three nodes.
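An alternative sketch that achieves the same result with less manual copying, assuming root password login is still enabled at this point; run the following on each of the three machines:

ssh-copy-id master
ssh-copy-id node1
ssh-copy-id node2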

Configure the JDK

Place the JDK archive under /opt and extract it
vi /etc/profile

#java environment
export JAVA_HOME=/opt/jdk1.8.0_231
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin

source /etc/profile
java -version
If the Java version is printed, the JDK is installed correctly.
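The output should look roughly like the following (exact build numbers depend on the JDK build you installed):

java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)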

Configure Hadoop

Place the Hadoop archive under /opt and extract it
vi /etc/profile

#hadoop environment
export HADOOP_HOME=/opt/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

source /etc/profile

mkdir -p /usr/local/hadoop/data/tmp
mkdir -p /usr/local/hadoop/dfs/data
mkdir -p /usr/local/hadoop/dfs/name
mkdir -p /usr/local/hadoop/tmp
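The dfs.datanode.data.dir and hadoop.tmp.dir paths configured below are also used on the worker nodes, so the same directories need to exist there. A sketch, assuming passwordless SSH to node1 and node2 is already working:

for h in node1 node2; do
    ssh $h "mkdir -p /usr/local/hadoop/data/tmp /usr/local/hadoop/dfs/data /usr/local/hadoop/dfs/name /usr/local/hadoop/tmp"
done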

cd /opt/hadoop-3.1.3/etc/hadoop/
vi hadoop-env.sh

export JAVA_HOME=/opt/jdk1.8.0_231

vi core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

vi hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
    <property><!--Local filesystem path where the NameNode persistently stores the namespace and transaction logs-->
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
     </property>
     <property><!--Local filesystem path where the DataNode stores block data-->
         <name>dfs.datanode.data.dir</name>
         <value>/usr/local/hadoop/dfs/data</value>
     </property>
     <property><!--Number of block replicas; must not exceed the number of DataNodes in the cluster. Default is 3-->
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

vi yarn-site.xml

<configuration>
    <property><!--Auxiliary service run by the NodeManager, required for the MapReduce shuffle-->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>

vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

vi workers
Delete localhost and replace it with:

node1
node2

vi /opt/hadoop-3.1.3/sbin/start-yarn.sh
vi /opt/hadoop-3.1.3/sbin/stop-yarn.sh
Add to both files:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

vi /opt/hadoop-3.1.3/sbin/start-dfs.sh
vi /opt/hadoop-3.1.3/sbin/stop-dfs.sh
Add to both files:

HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Distribute all of the modified files to the worker nodes:

scp /opt/hadoop-3.1.3/etc/hadoop/* node1:/opt/hadoop-3.1.3/etc/hadoop/
scp /opt/hadoop-3.1.3/etc/hadoop/* node2:/opt/hadoop-3.1.3/etc/hadoop/
scp /opt/hadoop-3.1.3/sbin/* node1:/opt/hadoop-3.1.3/sbin/
scp /opt/hadoop-3.1.3/sbin/* node2:/opt/hadoop-3.1.3/sbin/
(Copying the files rather than the directory avoids creating a nested hadoop/ directory on the nodes.)
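An optional sanity check that the configuration actually landed on both nodes (the file names are just examples from the steps above):

for h in node1 node2; do
    ssh $h "ls -l /opt/hadoop-3.1.3/etc/hadoop/core-site.xml /opt/hadoop-3.1.3/etc/hadoop/workers"
done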

hadoop classpath
Copy the printed classpath,
then vi /opt/hadoop-3.1.3/etc/hadoop/yarn-site.xml
and add the following inside <configuration> (the value is the classpath you just copied):

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/opt/hadoop-3.1.3/etc/hadoop:/opt/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/hadoop-3.1.3/share/hadoop/common/*:/opt/hadoop-3.1.3/share/hadoop/hdfs:/opt/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/hadoop-3.1.3/share/hadoop/yarn:/opt/hadoop-3.1.3/share/hadoop/yarn/lib/*:/opt/hadoop-3.1.3/share/hadoop/yarn/*</value>
    </property>

Distribute again:
scp /opt/hadoop-3.1.3/etc/hadoop/* node1:/opt/hadoop-3.1.3/etc/hadoop/
scp /opt/hadoop-3.1.3/etc/hadoop/* node2:/opt/hadoop-3.1.3/etc/hadoop/

Initialize Hadoop (format the NameNode)

cd /opt/hadoop-3.1.3/bin
./hdfs namenode -format
(Format the NameNode only once; the output should report that the storage directory was successfully formatted.)

Start Hadoop

cd /opt/hadoop-3.1.3/sbin
./start-all.sh
Then visit http://192.168.152.100:8088 (YARN ResourceManager) and
http://192.168.152.100:50070 (HDFS NameNode) in a browser.
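Before (or instead of) opening the web UIs, you can confirm the daemons are running with jps on each machine; with the configuration above the expected processes are roughly:

# on master
jps    # NameNode, SecondaryNameNode, ResourceManager
# on node1 and node2
jps    # DataNode, NodeManager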

Run the built-in WordCount example

hdfs dfs -ls /
hdfs dfs -mkdir /input
Upload a file to HDFS to use as the WordCount input:
hdfs dfs -put /etc/httpd/conf/httpd.conf /input
cd /opt/hadoop-3.1.3/share/hadoop/mapreduce/
hadoop jar hadoop-mapreduce-examples-3.1.3.jar wordcount /input/httpd.conf /output
hdfs dfs -cat /output/part-r-00000
Note: adjust the jar file name to match your Hadoop version.
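Optionally, to see only the most frequent words instead of the full output (the part file name may differ on your cluster):

hdfs dfs -cat /output/part-r-00000 | sort -k2,2nr | head -n 10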

If a Hadoop job fails because HDFS is in safe mode,
run the following command to leave safe mode:

hdfs dfsadmin -safemode leave
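You can also check the current safe mode state first:

hdfs dfsadmin -safemode get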

Run a filesystem health check and delete corrupted blocks:

hdfs fsck / -delete

Reposted from blog.csdn.net/qq_43519542/article/details/103407286