Lesson 5 (Hadoop pseudo-distributed mode): history server configuration, log aggregation, and configuration file notes

First, history server configuration:

1. Configure mapred-site.xml:

<!-- History server RPC address -->
<property> 
	<name>mapreduce.jobhistory.address</name>
	<value>node1:10020</value> 
</property>

<!-- History server web UI address -->
<property> 
	<name>mapreduce.jobhistory.webapp.address</name>
	<value>192.168.122.1:19888</value> 
</property>

Change the host name to node1 (use your own node's host name).

Change 192.168.122.1 to your machine's LAN IP address; run ifconfig to look it up.

Once both properties have been added, save the configuration file.
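Both properties sit inside the file's top-level <configuration> element. A minimal sketch of the resulting mapred-site.xml (the mapreduce.framework.name entry is an assumption carried over from a typical YARN setup, not something this lesson configures):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Assumed from a typical YARN setup; not part of this lesson -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- History server RPC address -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
  </property>
  <!-- History server web UI address -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.122.1:19888</value>
  </property>
</configuration>
```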

2. Start the history server: enter the following command to start the history server

sbin/mr-jobhistory-daemon.sh start historyserver

  Run jps to confirm that the JobHistoryServer process is running.

  

3. View the history server: open http://node1:19888/jobhistory in a browser

 For a job that has finished running, you can click History to view its details.

 

Second, log aggregation:

1. Introduction:

Log aggregation concept: after an application finishes running, its log files are uploaded to HDFS.

Log aggregation benefits: you can easily review the details of a run, which makes development and debugging easier.

Note: to enable log aggregation, you need to restart NodeManager, ResourceManager, and the JobHistoryServer.

2. Configure yarn-site.xml: Add the following

<!-- Enable log aggregation -->
<property> 
	<name>yarn.log-aggregation-enable</name>
	<value>true</value> 
</property>

<!-- Keep logs for seven days (604800 seconds) -->
<property> 
	<name>yarn.log-aggregation.retain-seconds</name>
	<value>604800</value> 
</property>
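The retention value is simply seven days expressed in seconds; a quick shell check of the arithmetic:

```shell
# 7 days x 24 hours x 3600 seconds = 604800, the value used above
seconds_per_day=$((24 * 3600))
retention=$((7 * seconds_per_day))
echo "$retention"
```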

3. Restart the services:

 Stop the services:

sbin/mr-jobhistory-daemon.sh stop historyserver

sbin/yarn-daemon.sh stop nodemanager

sbin/yarn-daemon.sh stop resourcemanager


 Start the services:

sbin/yarn-daemon.sh start nodemanager

sbin/yarn-daemon.sh start resourcemanager

sbin/mr-jobhistory-daemon.sh start historyserver
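The six commands above can be collected into one restart helper. A sketch in dry-run form (RUN=echo prints each command instead of executing it, so it is safe to run outside a cluster; on a real node set RUN= empty and run it from $HADOOP_HOME):

```shell
#!/bin/sh
# Dry-run restart helper: prints each daemon command in the required order.
# Set RUN= (empty) on a real cluster node to actually execute them.
RUN=echo
restart_log=$(
  $RUN sbin/mr-jobhistory-daemon.sh stop historyserver
  $RUN sbin/yarn-daemon.sh stop nodemanager
  $RUN sbin/yarn-daemon.sh stop resourcemanager
  $RUN sbin/yarn-daemon.sh start nodemanager
  $RUN sbin/yarn-daemon.sh start resourcemanager
  $RUN sbin/mr-jobhistory-daemon.sh start historyserver
)
echo "$restart_log"
```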

Re-run the WordCount example (deleting the old output directory first):

hdfs dfs -rm -r /user/root/output

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /user/root/input /user/root/output
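The job's map/shuffle/reduce behavior can be previewed locally with standard shell tools. This is only a stand-in for sanity-checking expected counts, not the Hadoop job itself, and the two sample input lines are made up:

```shell
# Local stand-in for WordCount: map (split into words), shuffle (sort),
# reduce (count each group), then format as "word count".
wc_out=$(printf 'hello hadoop\nhello yarn\n' |
  tr -s ' ' '\n' |   # map: emit one word per line
  sort |             # shuffle: group identical words together
  uniq -c |          # reduce: count each group
  awk '{print $2, $1}')
echo "$wc_out"
```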

4. View the logs: for a job that has run to completion, click History and then logs to view the aggregated log

 

 Third, configuration file notes:

Hadoop configuration files fall into two categories: default configuration files and custom configuration files. When a user wants to override one of the default values, they only need to edit the corresponding custom configuration file and change the relevant property.

1. Default configuration files:

Default file          Location inside Hadoop's jar packages
core-default.xml      hadoop-common-2.7.2.jar/core-default.xml
hdfs-default.xml      hadoop-hdfs-2.7.2.jar/hdfs-default.xml
yarn-default.xml      hadoop-yarn-common-2.7.2.jar/yarn-default.xml
mapred-default.xml    hadoop-mapreduce-client-core-2.7.2.jar/mapred-default.xml

2. Custom configuration files:

core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml

These four configuration files live under $HADOOP_HOME/etc/hadoop; users can reconfigure them according to project requirements.
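To double-check what a custom site file actually sets for a given property, you can grep the value out. A sketch that uses a temporary stand-in file rather than a real $HADOOP_HOME/etc/hadoop/mapred-site.xml:

```shell
# Create a stand-in site file, then pull one property's value out of it.
site_file=$(mktemp)
cat > "$site_file" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
  </property>
</configuration>
EOF
# Print the <value> on the line that follows the requested <name>.
value=$(grep -A1 '<name>mapreduce.jobhistory.address</name>' "$site_file" |
  sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$value"
rm -f "$site_file"
```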

 

 


Origin blog.csdn.net/sinat_40471574/article/details/104928876