Recording a pitfall: the ResourceManager fails to start in a distributed YARN deployment

The Hadoop cluster is deployed across three machines: hadoop.node.server01, hadoop.node.server02, and hadoop.node.server03. The NameNode runs on server01 and the YARN ResourceManager runs on server02. The configuration files, such as core-site.xml, hdfs-site.xml, mapred-site.xml, and slaves, were prepared on server01.
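For context, here is a minimal yarn-site.xml sketch of how the ResourceManager is typically pinned to a specific host. The original post only says the configuration files were prepared, so the values below are assumptions based on the hostnames above:

```xml
<?xml version="1.0"?>
<!-- yarn-site.xml: minimal sketch; values assumed from the hostnames above -->
<configuration>
  <!-- Tells every node which host runs the ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop.node.server02</value>
  </property>
  <!-- Auxiliary shuffle service required by MapReduce jobs -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```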

Configure passwordless SSH login, scp the configuration files to the other machines, and run bin/hdfs namenode -format.
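A sketch of those preparation steps as shell commands, run from server01; $HADOOP_HOME and the target paths are assumptions, not taken from the original post:

```bash
# Passwordless SSH: generate a key and push it to the other nodes
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id hadoop.node.server02
ssh-copy-id hadoop.node.server03

# Copy the finished configuration files to the other machines
scp $HADOOP_HOME/etc/hadoop/*.xml hadoop.node.server02:$HADOOP_HOME/etc/hadoop/
scp $HADOOP_HOME/etc/hadoop/*.xml hadoop.node.server03:$HADOOP_HOME/etc/hadoop/

# Format the NameNode (run once, on server01 only)
bin/hdfs namenode -format
```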

I ran the start commands sbin/start-dfs.sh and sbin/start-yarn.sh on server01, then used the jps command on each machine to check whether the services were up. The ResourceManager did not appear. I immediately suspected the configuration files and went back over every configuration item and hostname in the files above, but found nothing wrong. I then tried starting the daemons machine by machine and found that the ResourceManager came up normally on server02. Only then did I realize that a batch script like start-yarn.sh cannot bring up the ResourceManager when run on the NameNode's machine; it has to be run on server02, the machine where the ResourceManager is deployed.
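The startup sequence that finally worked, as a sketch; the rule of thumb is that each start script should be run on the node that hosts the master daemon it controls:

```bash
# On server01 (NameNode host): start HDFS
sbin/start-dfs.sh

# On server02 (ResourceManager host): start YARN;
# running start-yarn.sh here is what actually brings up the ResourceManager
sbin/start-yarn.sh

# On each machine, verify the daemons with jps;
# the ResourceManager should now show up in the output on server02
jps
```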
