Three machines
master: hadoop01
slave: hadoop02
slave: hadoop03
1. Unpack the Hadoop tarball, then edit the configuration files
- Modify core-site.xml
cd hadoop-2.7.4/etc/hadoop/
vim core-site.xml
<configuration>
<!-- Address of the NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<!-- Directory where Hadoop stores its data -->
<property>
<name>hadoop.tmp.dir</name>
<value>/root/export/servers/hadoop-2.7.4/tmp</value>
</property>
</configuration>
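After editing, the value can be sanity-checked from the shell with plain grep/sed. A minimal sketch; the file contents are inlined to /tmp here for illustration, while the real file sits at hadoop-2.7.4/etc/hadoop/core-site.xml:

```shell
# Recreate the relevant fragment of core-site.xml (illustration only).
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
</configuration>
EOF
# Print the <value> that follows the fs.defaultFS <name> line.
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site.xml |
  sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

The same one-liner works for any property in any of the *-site.xml files, as long as `<name>` and `<value>` are on adjacent lines.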
- Modify hdfs-site.xml
vim hdfs-site.xml
<configuration>
<!-- Number of replicas -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop02:50090</value>
</property>
</configuration>
- Modify mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
<!-- Run MapReduce jobs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
- Modify yarn-site.xml
vim yarn-site.xml
<configuration>
<!-- Enable the MapReduce shuffle auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Address of the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
</configuration>
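All four files are edited on hadoop01 only, so the configured installation directory then has to be copied to the other two machines. A hedged sketch, assuming the installation path from hadoop.tmp.dir above and passwordless SSH as root between the nodes; `echo` is left in so the commands are only printed — delete it to actually run the copy:

```shell
# Distribute the configured Hadoop directory from hadoop01 to the slaves.
sync_hadoop() {
  for host in hadoop02 hadoop03; do
    # Remove 'echo' to perform the real copy over SSH.
    echo scp -r /root/export/servers/hadoop-2.7.4 root@"$host":/root/export/servers/
  done
}
sync_hadoop
```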
Precautions
- Add each node's hostname to the slaves file (in the same hadoop-2.7.4/etc/hadoop/ directory)
vim slaves
hadoop01
hadoop02
hadoop03
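The slaves file is simply one hostname per line, so it can also be generated instead of edited by hand. A small sketch, written to /tmp for illustration; the real file lives in hadoop-2.7.4/etc/hadoop/:

```shell
# Write the three hostnames, one per line, then show the result.
printf '%s\n' hadoop01 hadoop02 hadoop03 > /tmp/slaves
cat /tmp/slaves
```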
- Before starting the cluster for the first time, format the NameNode (on hadoop01 only)
hdfs namenode -format
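The full first-start sequence on hadoop01 can be sketched as below. It assumes Hadoop's bin/ and sbin/ directories are on PATH; the commands are wrapped in a function here so that the destructive format step is not run by accident when pasting:

```shell
# First-start sequence for the cluster (run on hadoop01).
first_start() {
  hdfs namenode -format   # only the very first time -- reformatting wipes HDFS metadata
  start-dfs.sh            # NameNode on hadoop01, DataNodes per the slaves file
  start-yarn.sh           # ResourceManager on hadoop01, per yarn-site.xml
}
# Call first_start to launch; afterwards, 'jps' on each node lists the daemons.
```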
Downloading and installing Hadoop is covered in more Hadoop articles, which are continuously being updated.