Installing Hadoop on Linux

1. Environment
Linux CentOS 7
JDK 1.8
Hadoop 3.1.1
2. Steps
2.1 Install JDK 1.8 (download from the official site)
Extract the archive (command: tar -zxvf jdk-8u191-linux-x64.tar.gz).
Configure the environment variables (command: vi /etc/profile) and append the following at the end:
export JAVA_HOME=/home/tools/jdk1.8.0_191
export JRE_HOME=/home/tools/jdk1.8.0_191/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Reload the profile (command: source /etc/profile).
Verify the installation (command: java -version).
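For reference, the whole of step 2.1 can be run as the short sequence below; the tarball location and the /home/tools install directory are assumptions taken from the paths used in this guide, so adjust them to your machine.

# assumes the JDK tarball was downloaded to the current directory
mkdir -p /home/tools
tar -zxvf jdk-8u191-linux-x64.tar.gz -C /home/tools
# append the four export lines shown above to /etc/profile, then:
source /etc/profile
java -version    # should report version 1.8.0_191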
2.2 Install Hadoop 3.1.1 (download from the official site)
Extract the archive (command: tar -zxvf hadoop-3.1.1.tar.gz -C hadoop).
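Note that tar -C only extracts into a directory that already exists, and the archive unpacks into a hadoop-3.1.1 subdirectory. A minimal sketch that ends up with Hadoop at /home/hadoop/hadoop, matching the HADOOP_HOME used below (the layout is an assumption; adjust as needed):

mkdir -p /home/hadoop
tar -zxvf hadoop-3.1.1.tar.gz -C /home/hadoop
mv /home/hadoop/hadoop-3.1.1 /home/hadoop/hadoop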
Configure the environment variables (command: vi /etc/profile) and append the following at the end:
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Reload the profile (command: source /etc/profile).
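After reloading the profile, a quick way to confirm that the Hadoop binaries are on the PATH:

hadoop version    # should print "Hadoop 3.1.1" followed by build details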
Modify the Hadoop configuration files (location: $HADOOP_HOME/etc/hadoop):
core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
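Note that fs.default.name is the deprecated alias of fs.defaultFS; Hadoop 3.x still honors it but logs a deprecation warning. Once the file is saved, the effective value can be checked with:

hdfs getconf -confKey fs.defaultFS    # should print hdfs://localhost:9000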
hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
    </property>
</configuration>
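dfs.name.dir and dfs.data.dir are likewise the deprecated aliases of dfs.namenode.name.dir and dfs.datanode.data.dir, and both spellings are honored. Hadoop creates the storage directories on demand, but creating them up front avoids permission surprises if the parent paths are owned by another user:

mkdir -p /home/hadoop/hadoopdata/hdfs/namenode
mkdir -p /home/hadoop/hadoopdata/hdfs/datanode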
mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
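Besides the four XML files, the start scripts read $HADOOP_HOME/etc/hadoop/hadoop-env.sh. If a later step complains that JAVA_HOME is not set, set it there explicitly (the path below is the JDK location from step 2.1):

echo 'export JAVA_HOME=/home/tools/jdk1.8.0_191' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh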
Format the NameNode (command: hdfs namenode -format).
[hadoopuser@iz9lk7cs77ry8dz:~]$ hdfs namenode -format
WARNING: /home/hadoop/hadoop/logs does not exist. Creating.
2018-11-19 23:54:29,187 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = iz9lk7cs77ry8dz/172.18.13.92
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.1
……
……
……
2018-11-19 23:54:32,353 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/hadoopdata/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
2018-11-19 23:54:32,483 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/hadoopdata/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 389 bytes saved in 0 seconds .
2018-11-19 23:54:32,508 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2018-11-19 23:54:32,514 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at iz9lk7cs77ry8dz/172.18.13.92
************************************************************/
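The SHUTDOWN_MSG at the end is expected: the format command starts a NameNode only long enough to write the initial fsimage. A quick sanity check against the dfs.name.dir configured above:

ls /home/hadoop/hadoopdata/hdfs/namenode/current/    # should contain an fsimage_* file and a VERSION file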

Change into the sbin directory (command: cd $HADOOP_HOME/sbin).
Run the start-dfs.sh script (command: ./start-dfs.sh).
[hadoopuser@iz9lk7cs77ry8dz:~/hadoop/sbin] $./start-dfs.sh 
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [iz9lk7cs77ry8dz]
2018-11-24 21:28:19,715 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
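The NativeCodeLoader warning is harmless; Hadoop simply falls back to its built-in Java implementations. If start-dfs.sh instead stops to ask for a password, passwordless SSH to localhost has to be set up first for the user running the scripts, for example:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys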
Run the start-yarn.sh script (command: ./start-yarn.sh).
[hadoopuser@iz9lk7cs77ry8dz:~/hadoop/sbin] $./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
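With both scripts run, the JDK's jps tool can confirm that all five daemons are up (the process IDs will differ):

jps
# expected entries: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager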

2.3 Monitoring
After the steps above complete successfully, the NameNode web UI can be reached on its default port 9870 (http://127.0.0.1:9870).
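The YARN ResourceManager web UI is served on port 8088 by default (http://127.0.0.1:8088). As a final smoke test, write something into HDFS; the /user/hadoopuser path below assumes hadoopuser is the login user shown in the prompts above:

hdfs dfs -mkdir -p /user/hadoopuser
hdfs dfs -ls /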

At this point the Hadoop installation is complete.
---------------------
Author: 弓虽长
Source: CSDN
Original: https://blog.csdn.net/zhongbangxing/article/details/84257752
