Ops Project Training — Hadoop Installation and Deployment


Hadoop Installation and Deployment

1. Download the hadoop and jdk packages into the hadoop user's home directory
[root@server1 ~]# ls
hadoop-2.7.3.tar.gz  jdk-7u79-linux-x64.tar.gz
[root@server1 ~]# useradd -u 800 hadoop
[root@server1 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server1 ~]# mv * /home/hadoop/
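Note that mv run as root preserves the original ownership, so the tarballs may land in /home/hadoop still owned by root. The hadoop user can normally still read them, but if extraction in the next step fails with a permission error, a quick fix looks like the sketch below (paths assumed from the commands above):
[root@server1 ~]# chown hadoop:hadoop /home/hadoop/*.tar.gz
[root@server1 ~]# ls -l /home/hadoop/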
2. Extract the archives and create symlinks (this makes future hadoop/jdk upgrades easier: only the symlink needs to be repointed)
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ls
hadoop-2.7.3.tar.gz  jdk-7u79-linux-x64.tar.gz
[hadoop@server1 ~]$ tar zxf jdk-7u79-linux-x64.tar.gz 
[hadoop@server1 ~]$ tar zxf hadoop-2.7.3.tar.gz 
[hadoop@server1 ~]$ ls
hadoop-2.7.3  hadoop-2.7.3.tar.gz  jdk1.7.0_79  jdk-7u79-linux-x64.tar.gz
[hadoop@server1 ~]$ ln -s jdk1.7.0_79/ java
[hadoop@server1 ~]$ ln -s hadoop-2.7.3 hadoop
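Optionally, the java and hadoop commands can be put on the hadoop user's PATH so they run without full paths. This is not required for the steps below, which use relative paths; a minimal sketch, assuming the symlinks created above:
[hadoop@server1 ~]$ vim ~/.bash_profile
export JAVA_HOME=/home/hadoop/java
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
[hadoop@server1 ~]$ source ~/.bash_profile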
3. Configure the Java environment variable
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim hadoop-env.sh 
 25 export JAVA_HOME=/home/hadoop/java
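A quick sanity check that JAVA_HOME points at a working JDK and that hadoop picks it up (commands assume the symlinks created earlier):
[hadoop@server1 ~]$ ~/java/bin/java -version
[hadoop@server1 ~]$ ~/hadoop/bin/hadoop version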
4. Verify the installation by running one of Hadoop's bundled example jobs
The grep example below extracts strings matching the pattern dfs[a-z.]+ from the files in the input directory and writes the results to the output directory. Do not create output in advance: the job creates it itself and will fail if it already exists.
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ mkdir input
[hadoop@server1 hadoop]$ cp etc/hadoop/* input/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'   
[hadoop@server1 hadoop]$ cat output/*
Figure: the output directory contains the matched dfs* strings, which confirms the installation succeeded
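Besides cat, listing the output directory is another useful check. A standard MapReduce job writes its results to part-* files and drops an empty _SUCCESS marker when it finishes (the file names below are the MapReduce defaults, not taken from the original screenshot):
[hadoop@server1 hadoop]$ ls output/
part-r-00000  _SUCCESS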

