Installing Hadoop 3.1.2 on a Linux Cluster (7) - Cluster Configuration

Chapter 8: Cluster Configuration

8.1 Cluster Configuration

  1. Cluster deployment plan

            hadoop104             hadoop105                      hadoop106
HDFS        NameNode, DataNode    DataNode                       SecondaryNameNode, DataNode
YARN        NodeManager           ResourceManager, NodeManager   NodeManager
  2. Configure the cluster
    (1) Core configuration file
    Configure core-site.xml:
[zhangyong@hadoop104 hadoop]$ vi core-site.xml

Add the following to the configuration file:

<!-- Address of the NameNode in HDFS -->
<property>
	 <name>fs.defaultFS</name>
     <value>hdfs://hadoop104:9000</value>
</property>
<!-- Storage directory for files generated at Hadoop runtime -->
<property>
	<name>hadoop.tmp.dir</name>
	<value>/opt/module/hadoop-3.1.2/data/tmp</value>
</property>
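A quick way to sanity-check an edit like this before distributing it is to pull the property value back out with grep/sed. The snippet below is a minimal sketch (the `get_prop` helper is ours, not part of Hadoop); it runs against a sample fragment here, and on a real node you would point it at etc/hadoop/core-site.xml:

```shell
# get_prop NAME FILE: print the <value> on the line after <name>NAME</name>.
# Assumes the one-property-per-line layout used in this chapter.
get_prop() {
  grep -A1 "<name>$1</name>" "$2" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Demo against a sample fragment; on hadoop104 use the real core-site.xml.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop104:9000</value>
</property>
EOF

get_prop fs.defaultFS "$FILE"   # prints hdfs://hadoop104:9000
```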
(2) HDFS configuration files

Configure hadoop-env.sh:

[zhangyong@hadoop104 hadoop]$ vi hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_181

Configure hdfs-site.xml:

[zhangyong@hadoop104 hadoop]$ vi hdfs-site.xml

Add the following to the configuration file:

<!-- Number of HDFS block replicas -->
<property>
		<name>dfs.replication</name>
		<value>3</value>
</property>
<!-- Host and port of the Secondary NameNode -->
<property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>hadoop106:50090</value>
</property>
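dfs.replication=3 means every HDFS block is stored on three DataNodes, so with the three-node plan above each node holds a full copy of the data, and raw disk usage is three times the logical data size. A back-of-the-envelope check (the 100 GB figure is purely illustrative):

```shell
# Raw capacity consumed = logical data size * replication factor.
REPLICATION=3
LOGICAL_GB=100                       # illustrative: 100 GB of logical data
RAW_GB=$((LOGICAL_GB * REPLICATION))
echo "${RAW_GB} GB of raw disk used across the cluster"   # 300 GB
```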

(3) YARN configuration files
Configure yarn-env.sh:

[zhangyong@hadoop104 hadoop]$ vi yarn-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_181

Configure yarn-site.xml:

[zhangyong@hadoop104 hadoop]$ vi yarn-site.xml

Add the following to the configuration file:

<!-- How Reducers fetch data (the shuffle service) -->
<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
</property>
<!-- Hostname of the YARN ResourceManager -->
<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>hadoop105</value>
</property>

(4) MapReduce configuration files
Configure mapred-env.sh:

[zhangyong@hadoop104 hadoop]$ vi mapred-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_181

Configure mapred-site.xml:

[zhangyong@hadoop104 hadoop]$ vi mapred-site.xml

Add the following to the configuration file:

<!-- Run MapReduce on YARN -->
<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
</property>

3. Distribute the finished Hadoop configuration to the rest of the cluster

[zhangyong@hadoop104 hadoop]$ xsync etc/
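xsync here is the rsync wrapper script set up earlier in this series, not a standard tool. As a rough sketch of what such a script does (the target hostnames and the echo-only dry run are assumptions for illustration, not the script's actual contents):

```shell
# Dry-run sketch of an xsync-style distributor: print the rsync command
# that would push a path to each of the other nodes (drop `echo` to run it).
xsync_sketch() {
  for host in hadoop105 hadoop106; do
    echo rsync -av "$1" "zhangyong@${host}:${PWD}/"
  done
}

xsync_sketch etc/
```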

4. Verify that the configuration arrived on the other nodes

[zhangyong@hadoop105 hadoop]$ cat /opt/module/hadoop-3.1.2/etc/hadoop/core-site.xml
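Eyeballing cat output across three nodes is error-prone; comparing checksums is quicker. On the cluster this would be `ssh <host> md5sum /opt/module/hadoop-3.1.2/etc/hadoop/core-site.xml` for each host, then comparing the hashes. The runnable demo below just shows the principle on two local copies:

```shell
# Identical files produce identical md5 hashes, so seeing exactly one
# distinct hash across all nodes means the distributed configs match.
A=$(mktemp); B=$(mktemp)
printf '<value>hdfs://hadoop104:9000</value>\n' > "$A"
cp "$A" "$B"
DISTINCT=$(md5sum "$A" "$B" | awk '{print $1}' | sort -u | wc -l)
echo "$DISTINCT"   # 1 => the copies are identical
```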

8.2 Starting the Cluster One Daemon at a Time

(1) If this is the first time the cluster is started, the NameNode must be formatted first:

[zhangyong@hadoop104 hadoop-3.1.2]$ hdfs namenode -format
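One caution: `hdfs namenode -format` assigns the NameNode a new clusterID, so rerunning it on a cluster that already holds data leaves the existing DataNodes registered against the old clusterID and unable to rejoin. A hedged guard against accidental reformatting (the `should_format` helper and the dfs/name/current/VERSION layout under hadoop.tmp.dir reflect the defaults, not a Hadoop command):

```shell
# should_format DIR: print "yes" unless DIR already contains NameNode
# metadata (a VERSION file under dfs/name/current).
should_format() {
  if [ -e "$1/dfs/name/current/VERSION" ]; then echo no; else echo yes; fi
}

# On hadoop104, with hadoop.tmp.dir=/opt/module/hadoop-3.1.2/data/tmp:
#   [ "$(should_format /opt/module/hadoop-3.1.2/data/tmp)" = yes ] \
#     && hdfs namenode -format
```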

(2) Start the NameNode on hadoop104

[zhangyong@hadoop104 hadoop-3.1.2]$ hadoop-daemon.sh start namenode
[zhangyong@hadoop104 hadoop-3.1.2]$ jps
3461 NameNode

(3) Start a DataNode on each of hadoop104, hadoop105, and hadoop106

[zhangyong@hadoop104 hadoop-3.1.2]$ hadoop-daemon.sh start datanode
[zhangyong@hadoop104 hadoop-3.1.2]$ jps
3461 NameNode
3608 Jps
3561 DataNode
[zhangyong@hadoop105 hadoop-3.1.2]$ hadoop-daemon.sh start datanode
[zhangyong@hadoop105 hadoop-3.1.2]$ jps
3190 DataNode
3279 Jps
[zhangyong@hadoop106 hadoop-3.1.2]$ hadoop-daemon.sh start datanode
[zhangyong@hadoop106 hadoop-3.1.2]$ jps
3237 Jps
3163 DataNode

Open http://hadoop104:9870/dfshealth.html#tab-datanode in a browser; if all three DataNodes are listed on the Datanodes tab, the startup succeeded.
Food for thought: we started each daemon on each node by hand. What if the cluster grows to 1,000 nodes? The next chapter answers this.


Origin blog.csdn.net/zy13765287861/article/details/104602508