Hadoop installation, configuration, and download address

Installation package version used:
hadoop-2.6.0.tar.gz
Download address: https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.10.0/hadoop-2.10.0-src.tar.gz/ (note: this link points to the 2.10.0 source tarball, while the steps below actually use the hadoop-2.6.0.tar.gz binary package)
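
If downloading from the command line, the 2.6.0 binary tarball is normally available from the Apache release archive (this archive URL is an assumption, not taken from the original post):

[root@master tmp]# wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz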

Host/IP mapping

[root@master conf]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.176.41 master
192.168.176.42 slave1
192.168.176.43 slave2
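
A quick ping from the master confirms that the name resolution works (sample check, not in the original post):

[root@master conf]# ping -c 1 slave1
[root@master conf]# ping -c 1 slave2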

Environment variables: ~/.bash_profile

[root@master hadoop]# vi ~/.bash_profile
JAVA_HOME=/usr/local/src/jdk1.8.0_221
HADOOP_HOME=/usr/local/src/hadoop-2.6.0
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH CLASSPATH JAVA_HOME HADOOP_HOME
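
The profile then has to be re-read in the current shell; echoing a variable confirms it took effect (verification commands assumed, not shown in the original):

[root@master hadoop]# source ~/.bash_profile
[root@master hadoop]# echo $HADOOP_HOME
/usr/local/src/hadoop-2.6.0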

I. Hadoop configuration

1. Extract the Hadoop tarball to /usr/local/src

 [root@master tmp]# tar -zxvf hadoop-2.6.0.tar.gz -C /usr/local/src

2. Enter Hadoop's etc/hadoop directory

 cd /usr/local/src/hadoop-2.6.0/etc/hadoop/

3. The modified hadoop-env.sh file

[root@master hadoop]# vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/local/src/jdk1.8.0_221
export HADOOP_CONF_DIR=/usr/local/src/hadoop-2.6.0/etc/hadoop
[root@master hadoop]#
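
JAVA_HOME is set explicitly here because the Hadoop start scripts launch daemons over SSH, where the login environment may not carry the variable, so relying on ${JAVA_HOME} from the shell is not dependable. A quick grep verifies the value (check command assumed, not in the original):

[root@master hadoop]# grep '^export JAVA_HOME' hadoop-env.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_221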

4. Configure core-site.xml

[root@master hadoop]# vi core-site.xml
<configuration>
        <!-- Address of the HDFS master, i.e. the NameNode -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
        </property>
        <!-- Directory for files Hadoop generates at runtime -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/src/hadoop-2.6.0/hdfs/tmp</value>
        </property>
</configuration>
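
Once the file is saved, a quick sanity check that Hadoop resolves the NameNode address is (verification command, not part of the original steps):

[root@master hadoop]# hdfs getconf -confKey fs.defaultFS
hdfs://master:9000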

5. Modify mapred-site.xml

[root@master hadoop]# vi mapred-site.xml
<configuration>
        <!-- Run MapReduce on YARN -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>
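
Note that the 2.6.0 tarball only ships mapred-site.xml.template; if mapred-site.xml does not exist yet, it is normally created from the template first:

[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml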

6. Configure yarn-site.xml

[root@master hadoop]# vi yarn-site.xml
<configuration>

        <!-- Site specific YARN configuration properties -->
        <!-- Address of the YARN master, the ResourceManager -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>master</value>
        </property>
        <!-- How the NodeManager obtains data (the MapReduce shuffle service) -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        
        <!-- Ignore the virtual-memory check; friendlier for virtual machines -->
        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>

</configuration>  

7. Create a journal directory under hadoop-2.6.0, and namenode and datanode directories under tmp

[root@master hadoop-2.6.0]# pwd
/usr/local/src/hadoop-2.6.0
[root@master hadoop-2.6.0]# mkdir journal

[root@master hadoop-2.6.0]# cd tmp/
[root@master tmp]# mkdir namenode datanode
[root@master tmp]# ls
datanode  namenode  nm-local-dir
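
On a fresh extract the tmp directory may not exist yet; mkdir -p creates the whole layout in one go (equivalent sketch, not in the original). Note also that core-site.xml above points hadoop.tmp.dir at hdfs/tmp, while these directories live directly under tmp; whichever path is intended should be kept consistent in both places.

[root@master hadoop-2.6.0]# mkdir -p journal tmp/namenode tmp/datanode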

8. Specify the DataNode nodes in the slaves file

[root@master hadoop]# vi slaves
master
slave1
slave2

9. Copy Hadoop to the other nodes

[root@master tmp]# scp -r /usr/local/src/hadoop-2.6.0/ slave1:/usr/local/src/

[root@master tmp]# scp -r /usr/local/src/hadoop-2.6.0/ slave2:/usr/local/src/
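
Both scp here and start-all.sh later rely on SSH from master to the slaves; the original post assumes passwordless SSH is already in place. If it is not, it can be set up roughly like this:

[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id master
[root@master ~]# ssh-copy-id slave1
[root@master ~]# ssh-copy-id slave2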

10. Run source hadoop-env.sh so the configuration takes effect (on all 3 nodes)

[root@master hadoop]# pwd
/usr/local/src/hadoop-2.6.0/etc/hadoop
[root@master hadoop]# source hadoop-env.sh

II. Verification

1. Check the version

[root@master hadoop]# hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /usr/local/src/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
[root@master hadoop]#

2. Start Hadoop
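
On a brand-new cluster the NameNode normally has to be formatted once before the first start (a step the original post does not show; do not re-run it on a cluster that already holds data, as it wipes the HDFS metadata):

[root@master hadoop]# hdfs namenode -format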

[root@master hadoop]# start-all.sh
[root@master hadoop]# jps
2113 JournalNode
10513 Jps
1906 DataNode
1603 QuorumPeerMain
1797 NameNode
2310 DFSZKFailoverController
2758 JobHistoryServer
9862 RunJar
2426 ResourceManager
[root@master hadoop]#

As long as the master shows a NameNode process and the slave nodes show a DataNode process, the cluster is up.
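
As an extra check (not in the original), the HDFS report and the YARN node list show whether the DataNodes and NodeManagers on the slave nodes have registered:

[root@master hadoop]# hdfs dfsadmin -report
[root@master hadoop]# yarn node -list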

3. Open the web UI at master:50070

The host mapping also needs to be configured on the local machine running the browser, so that the name master resolves.
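
From the master itself, curl can confirm the web UI is actually listening before troubleshooting the browser-side hosts mapping (sample check, not in the original):

[root@master hadoop]# curl -s http://master:50070 | head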
