Steps
Environment: VMware; image: Ubuntu
Configure the JDK
1. Extract the JDK: tar -zxvf (JDK archive path and name) -C .
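The tar flags used above can be seen end to end with a throwaway archive (everything below uses a temporary directory and a made-up file, not your real JDK tarball):

```shell
# Demo of `tar -zxvf ... -C <dir>`: -z gunzip, -x extract,
# -v verbose, -f archive file, -C switch to the target dir before extracting.
rm -rf /tmp/tar-demo
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dest
echo "hello" > /tmp/tar-demo/src/file.txt
tar -zcf /tmp/tar-demo/demo.tar.gz -C /tmp/tar-demo/src file.txt   # build a tiny archive
tar -zxvf /tmp/tar-demo/demo.tar.gz -C /tmp/tar-demo/dest          # extract into dest/
cat /tmp/tar-demo/dest/file.txt
```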
2. Configure the JDK environment variables (use java -version to check whether the configuration succeeded)
Commands:
1. vi ~/.bashrc
2. Shift+G jumps to the last line; add:
export JAVA_HOME=<the JDK path in your Linux environment>
export JAVA_BIN=$JAVA_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin
Then run:
source ~/.bashrc (reloads the environment variables you just modified)
---------Important: finally, run java -version to check that the configuration succeeded
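Put together, the lines appended to ~/.bashrc might look like this (the JDK path here is an assumed example; point it at wherever you actually extracted the JDK):

```shell
# --- appended to the end of ~/.bashrc ---
# Assumed example path; replace with your actual extracted JDK directory.
export JAVA_HOME=/home/ll/software/jdk1.8.0_202
export JAVA_BIN=$JAVA_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin
```

After `source ~/.bashrc`, `java -version` should print the installed JDK version.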
Configure Hadoop
3. Extract Hadoop
tar -zxvf (Hadoop archive path and name) -C .
4. Create a symbolic link
ln -s (Hadoop installation path) hadoop
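What the symlink buys you can be illustrated with throwaway directories (the names below are examples, not your real install path): scripts and $HADOOP_HOME can refer to the stable name `hadoop` even when the versioned directory changes.

```shell
# Stand-in for the real install dir; everything lives under /tmp for the demo.
rm -rf /tmp/ln-demo
mkdir -p /tmp/ln-demo/hadoop-3.1.0
cd /tmp/ln-demo
ln -s hadoop-3.1.0 hadoop    # 'hadoop' is now a symbolic link to the versioned directory
ls -l hadoop                 # shows: hadoop -> hadoop-3.1.0
```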
5. Configure the environment variables
vi ~/.bashrc
1. Insert on the second-to-last line:
export HADOOP_HOME=<path of the hadoop symlink>
2. Append to the end of PATH:
:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Then run:
source ~/.bashrc (reloads the environment variables you just modified)
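The combined ~/.bashrc additions for Hadoop might then read as follows (the path is an assumed example; use wherever your symlink actually lives):

```shell
# Assumed example: HADOOP_HOME points at the symlink created in step 4.
export HADOOP_HOME=/home/ll/software/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```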
6. Edit the Hadoop configuration files
core-site.xml:
<configuration>
<property>
<!-- NameNode address; hadoopPD is the local hostname -->
<name>fs.defaultFS</name>
<value>hdfs://hadoopPD:9000</value>
</property>
<property>
<!-- directory where files generated by Hadoop are stored -->
<name>hadoop.tmp.dir</name>
<value>file:/....(path)</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.nameservices</name>
<value>hadoop-cluster</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoopPD:50090</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>32m</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/ll/software/data/hadoop/hdfs/nn</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>file:/home/ll/software/data/hadoop/hdfs/snn</value>
</property>
<property>
<name>fs.checkpoint.edits.dir</name>
<value>file:/home/ll/software/data/hadoop/hdfs/snn</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/ll/software/data/hadoop/hdfs/dn</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>/home/ll/software/hadoop-3.1.0/share/hadoop/mapreduce/*,/home/ll/software/hadoop-3.1.0/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoopPD</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/home/ll/software/data/hadoop/yarn/nm</value>
</property>
</configuration>
7. Format (initialize) the filesystem
hdfs namenode -format
8. Start the cluster
hdfs --daemon start namenode
hdfs --daemon start datanode
Note the distinction between the hadoop fs, hadoop dfs, and hdfs dfs commands:
hadoop fs: general-purpose; works with any filesystem Hadoop supports (local, HDFS, etc.)
hdfs dfs: operates only on the HDFS filesystem (hadoop dfs is a deprecated equivalent)