Hadoop study notes (2)

1. Hadoop component dependencies

2. Hadoop log format:

  There are two kinds of logs, with file names ending in .log and .out respectively:

  1. Logs ending in .log: written in the log4j format, with files named using the daily-rolling-file (date suffix) strategy; their content is relatively complete.

  2. Logs ending in .out: capture the daemon's standard output and standard error, so they usually contain little. By default, the latest 5 files are kept.

  This can be configured in /etc/hadoop/hadoop-env.sh:

# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
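
  To store the logs somewhere else, uncomment that line and point it at a writable directory; a minimal sketch, where /var/log/hadoop is a made-up path assumed to exist and be writable by the user running the daemons:

# in hadoop-env.sh
export HADOOP_LOG_DIR=/var/log/hadoop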

  Explanation of log names:

  Each log file name encodes the user who started the daemon, the daemon name, and the host name, in the pattern hadoop-<user>-<daemon>-<hostname>.log (with a matching .out file for stdout/stderr).
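
  For example, a listing of $HADOOP_HOME/logs might look like this (the user name hadoop and host name master are made-up for illustration):

$ ls $HADOOP_HOME/logs
hadoop-hadoop-namenode-master.log
hadoop-hadoop-namenode-master.out
hadoop-hadoop-datanode-master.log
hadoop-hadoop-datanode-master.out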

3. Hadoop start and stop:

  The first way:

  Start:

start-dfs.sh
start-mapred.sh    (in Hadoop 2.x: start-yarn.sh)

  Stop:

stop-dfs.sh
stop-mapred.sh    (in Hadoop 2.x: stop-yarn.sh)

  Start everything:

start-all.sh

    Startup sequence: NameNode --> DataNode --> Secondary NameNode --> JobTracker --> TaskTracker
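
  After starting, you can check which daemons came up with jps (ships with the JDK); the PIDs below are made-up, and the exact list depends on which daemons run on the node:

$ jps
2481 NameNode
2634 DataNode
2799 SecondaryNameNode
2895 JobTracker
3052 TaskTracker
3210 Jps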

  Stop everything:

stop-all.sh

    Stop sequence: JobTracker --> TaskTracker --> NameNode --> DataNode --> Secondary NameNode

  The second way (the daemons are started and shut down one by one):

  Start:

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
hadoop-daemon.sh start secondarynamenode
hadoop-daemon.sh start jobtracker
hadoop-daemon.sh start tasktracker
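
  Note that hadoop-daemon.sh only acts on the local machine; to run the same command on every host listed in conf/slaves at once, the plural hadoop-daemons.sh can be used, for example:

hadoop-daemons.sh start datanode
hadoop-daemons.sh start tasktracker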

    The startup sequence is the same as above: NameNode --> DataNode --> Secondary NameNode --> JobTracker --> TaskTracker

  Stop:

hadoop-daemon.sh stop jobtracker
hadoop-daemon.sh stop tasktracker
hadoop-daemon.sh stop namenode
hadoop-daemon.sh stop datanode
hadoop-daemon.sh stop secondarynamenode
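
  After stopping everything, jps should report nothing but itself; again, the PID is made-up:

$ jps
3688 Jps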

 
