Hadoop 2.7.2 Standalone Installation

Installation

  • Download Hadoop and install it under the /app directory (a download sketch follows this list)
  • Configure the Hadoop-related environment variables
    Prerequisites: JAVA_HOME is configured and the hostname is hadoop101
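The steps below assume the prebuilt Hadoop 2.7.2 binary tarball rather than a source build; the Apache archive URL is an assumption, so adjust it to your mirror and environment:

cd /app
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
tar -zxvf hadoop-2.7.2.tar.gz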
Configure HADOOP_HOME:
vim /etc/profile
export JAVA_HOME=/app/java/jdk1.8.0_131
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/app/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
  • source /etc/profile
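A quick sanity check that the variables took effect:

 java -version
 hadoop version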
  • Go to the Hadoop root directory and create an input folder to test the MapReduce example programs
 cd  /app/hadoop-2.7.2
 mkdir input
 cp etc/hadoop/*.xml input
 bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input output 'dfs[a-z.]+'
 cat output/*
 mkdir wcinput
 cd wcinput
 touch wc.input   # add a few words to this file before running wordcount
 cd ..
 hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount wcinput wcoutput
 cat wcoutput/part-r-00000
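For example, if wc.input contains the two illustrative lines "hadoop yarn" and "hadoop mapreduce", part-r-00000 will read:

 hadoop	2
 mapreduce	1
 yarn	1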
  • Edit the Hadoop configuration files (under etc/hadoop)
 vim hadoop-env.sh
 export JAVA_HOME=/app/java/jdk1.8.0_131
vim core-site.xml
<configuration>
<!-- NameNode address (default filesystem) -->
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:9000</value>
</property>

<!-- Directory where Hadoop stores files generated at runtime -->
<property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop-2.7.2/data/tmp</value>
</property>
</configuration>
vim hdfs-site.xml
<configuration>
<!-- Number of HDFS block replicas -->
<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>
</configuration>
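To confirm that the settings are picked up, they can be queried from the Hadoop root:

bin/hdfs getconf -confKey fs.defaultFS
bin/hdfs getconf -confKey dfs.replication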

Disable the firewall before starting the daemons.
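A sketch assuming CentOS 7 with firewalld (adapt to your distribution). Note also that the NameNode must be formatted once before the very first start; re-formatting later destroys existing HDFS metadata:

systemctl stop firewalld
systemctl disable firewalld

cd /app/hadoop-2.7.2
bin/hdfs namenode -format   # first start only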

  • Start the NameNode
 cd /app/hadoop-2.7.2  
 sbin/hadoop-daemon.sh start namenode
  • Start the DataNode
cd /app/hadoop-2.7.2  
sbin/hadoop-daemon.sh start datanode
  • Check the daemons with jps
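With both daemons running, jps output looks roughly like this (PIDs will differ):

3456 NameNode
3521 DataNode
3610 Jps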

  • Check via the web UI
    http://hadoop101:50070/dfshealth.html#tab-overview

  • Upload files to HDFS

bin/hdfs dfs -mkdir -p /user/atguigu/input
bin/hdfs dfs -put wcinput/wc.input /user/atguigu/input/
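To verify the upload:

bin/hdfs dfs -ls /user/atguigu/input
bin/hdfs dfs -cat /user/atguigu/input/wc.input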
  • Run the MapReduce job
cd /app/hadoop-2.7.2 
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /user/atguigu/input/ /user/atguigu/output
# View the output
bin/hdfs dfs -cat /user/atguigu/output/*
# Download the output
hdfs dfs -get /user/atguigu/output/part-r-00000 ./wcoutput/
# Delete the output
hdfs dfs -rm -r /user/atguigu/output
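When finished, the daemons can be stopped the same way they were started:

cd /app/hadoop-2.7.2
sbin/hadoop-daemon.sh stop datanode
sbin/hadoop-daemon.sh stop namenode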

Reposted from blog.csdn.net/abc8125/article/details/109583604