Configure the JobHistory server
vi mapred-site.xml
Add the following properties:
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop100:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop100:19888</value>
</property>
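For context, these properties must sit inside the file's <configuration> root element. A minimal mapred-site.xml sketch (the hostname hadoop100 matches the cluster used in this tutorial; adjust it to your own):

```xml
<?xml version="1.0"?>
<configuration>
    <!-- Run MapReduce jobs on YARN instead of the local runner -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- RPC address of the JobHistory server -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop100:10020</value>
    </property>
    <!-- Web UI address of the JobHistory server -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop100:19888</value>
    </property>
</configuration>
```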
Start the JobHistory server
sbin/mr-jobhistory-daemon.sh start historyserver
To shut it down later, replace start with stop:
sbin/mr-jobhistory-daemon.sh stop historyserver
Run jps to check whether it started; if you see the JobHistoryServer process, the startup succeeded. Once it is running, let's test it.
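As a sketch of what jps output might look like on a single-node setup (the PIDs and the other process names are illustrative and will differ on your machine):

```shell
$ jps
3201 NameNode
3345 DataNode
3602 ResourceManager
3710 NodeManager
3950 JobHistoryServer   # this line confirms the history server is up
4102 Jps
```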
First, create a test folder on HDFS
hdfs dfs -mkdir /test
Upload README.txt (substitute the local path to the file)
hdfs dfs -put <local-path>/README.txt /test
Open the NameNode web UI (port 50070) to confirm the file was uploaded.
After that succeeds, come back and run the wordcount example
hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar wordcount /test/README.txt /wcOutput
Go back and check HDFS (port 50070), then Yarn (port 8088), and finally JobHistory (port 19888). In JobHistory you can see every submitted task (Job); all Job records appear here.
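Besides the web UIs, you can inspect the wordcount result directly from the command line; a quick sketch (part-r-00000 is the conventional reducer output filename, assuming a single reducer):

```shell
# List the job output directory; an empty _SUCCESS file marks a completed job
hdfs dfs -ls /wcOutput
# Print the word counts written by the reducer
hdfs dfs -cat /wcOutput/part-r-00000
```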