Three ways to start Hadoop

 

  • The first way

    Start HDFS and MapReduce separately.

    Start commands: start-dfs.sh  start-mapred.sh

    Stop commands: stop-dfs.sh  stop-mapred.sh
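
    As a minimal sketch of the first way (assuming a Hadoop 1.x installation with its bin directory on the PATH), a full start/stop cycle looks roughly like this:

    # start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
    start-dfs.sh
    # start the MapReduce daemons (JobTracker, TaskTracker)
    start-mapred.sh
    # list the running Java daemons to confirm everything came up
    jps
    # stop MapReduce first, then HDFS
    stop-mapred.sh
    stop-dfs.sh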

  • The second way

    Start or stop all daemons at once.

    start up:

    Command: start-all.sh

    Startup order: NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker

     

    stop:

    Command: stop-all.sh

    Shutdown order: JobTracker, TaskTracker, NameNode, DataNode, SecondaryNameNode
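
    As a rough sketch of the second way (same Hadoop 1.x assumptions as above; the PIDs shown are purely illustrative):

    # bring up all five daemons in one step
    start-all.sh
    # jps should now report something like:
    #   2101 NameNode
    #   2243 DataNode
    #   2398 SecondaryNameNode
    #   2476 JobTracker
    #   2603 TaskTracker
    jps
    # take everything down again
    stop-all.sh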

     

  • The third way

    Each daemon is started individually, in the following order:

    NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker

    The command is as follows:

    start up:

    hadoop-daemon.sh start (daemon name)

     

    hadoop-daemon.sh start namenode

    hadoop-daemon.sh start datanode

     

    hadoop-daemon.sh start secondarynamenode

    hadoop-daemon.sh start jobtracker

    hadoop-daemon.sh start tasktracker
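
    The same startup sequence can also be written as a short shell loop (a sketch, assuming hadoop-daemon.sh is on the PATH):

    # start each daemon in the required order
    for daemon in namenode datanode secondarynamenode jobtracker tasktracker; do
        hadoop-daemon.sh start "$daemon"
    done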

     

    shutdown command:

    hadoop-daemon.sh stop tasktracker
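
    To take the whole cluster down this way, each daemon gets its own stop command; a sketch of the full sequence, following the shutdown order listed for stop-all.sh above (MapReduce daemons first, then HDFS):

    hadoop-daemon.sh stop jobtracker
    hadoop-daemon.sh stop tasktracker
    hadoop-daemon.sh stop namenode
    hadoop-daemon.sh stop datanode
    hadoop-daemon.sh stop secondarynamenode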

     

    hadoop-daemons.sh starts the same daemon on several machines at once.

    The datanode and tasktracker daemons are typically spread across many slave machines; to start them all from the master node in one step, use hadoop-daemons.sh.
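
    For example (a sketch, assuming passwordless SSH from the master to every host listed in conf/slaves), the worker daemons can be started across the whole cluster from the master node:

    # run on the master; hadoop-daemons.sh runs hadoop-daemon.sh on each host in conf/slaves
    hadoop-daemons.sh start datanode
    hadoop-daemons.sh start tasktracker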
