Hadoop cluster startup and shutdown command steps

The Hadoop cluster startup and shutdown command steps can be summarized as follows:

1. On the Master, start the NameNode with hadoop-daemon.sh start namenode.
2. On each Slave, start a DataNode with hadoop-daemon.sh start datanode.
3. Check the results with the jps command.
4. Observe the cluster configuration with hdfs dfsadmin -report.
5. Observe cluster operation through the web interface at http://npfdev1:50070 (if you have trouble viewing it, see https://www.cnblogs.com/zlslch/p/6604189.html).
6. Bring the cluster down with hadoop-daemon.sh stop datanode on the Slaves and hadoop-daemon.sh stop namenode on the Master (see the combined example below).
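Taken together, one pass through these steps might look like the sketch below. The prompt style follows the examples later in this post: hadoop101 stands in for the Master, and the Slave hostname is only a placeholder to replace with your own nodes.

# On the Master: start the NameNode
[atguigu@hadoop101 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode

# On each Slave (hostname is a placeholder): start a DataNode
[atguigu@slave01 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode

# Verify the running Java daemons and the HDFS report
[atguigu@hadoop101 hadoop-2.7.2]$ jps
[atguigu@hadoop101 hadoop-2.7.2]$ bin/hdfs dfsadmin -report

# Shut the cluster down again
[atguigu@slave01 hadoop-2.7.2]$ sbin/hadoop-daemon.sh stop datanode
[atguigu@hadoop101 hadoop-2.7.2]$ sbin/hadoop-daemon.sh stop namenode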

1. Stop the NodeManager, ResourceManager, and HistoryServer

[atguigu@hadoop101 hadoop-2.7.2]$ sbin/yarn-daemon.sh stop resourcemanager

[atguigu@hadoop101 hadoop-2.7.2]$ sbin/yarn-daemon.sh stop nodemanager

[atguigu@hadoop101 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh stop historyserver
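To confirm the three daemons have actually stopped, jps can be run on the same node; ResourceManager, NodeManager, and JobHistoryServer should no longer appear in the list of JVM processes it prints (each line of jps output is just a process ID followed by the main class name).

[atguigu@hadoop101 hadoop-2.7.2]$ jps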

2. Start the NodeManager, ResourceManager, and HistoryServer

[atguigu@hadoop101 hadoop-2.7.2]$ sbin/yarn-daemon.sh start resourcemanager

[atguigu@hadoop101 hadoop-2.7.2]$ sbin/yarn-daemon.sh start nodemanager

[atguigu@hadoop101 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh start historyserver
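After starting, jps on hadoop101 should list the three daemons again. The output below is only illustrative: the process IDs will differ, and NameNode/DataNode entries will also appear if HDFS daemons run on the same machine.

[atguigu@hadoop101 hadoop-2.7.2]$ jps
4931 ResourceManager
5062 NodeManager
5301 JobHistoryServer
5378 Jps

By default the ResourceManager web UI is served on port 8088 and the JobHistoryServer UI on port 19888 of their respective hosts.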

3. Delete the existing output directory on HDFS

[atguigu@hadoop101 hadoop-2.7.2]$ bin/hdfs dfs -rm -R /user/atguigu/output
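MapReduce will refuse to start a job whose output directory already exists, which is why the old /user/atguigu/output is removed first; the -R flag deletes the directory and its contents recursively. A quick way to confirm it is gone is to list the parent directory afterwards:

[atguigu@hadoop101 hadoop-2.7.2]$ bin/hdfs dfs -ls /user/atguigu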
