Problems encountered in hadoop learning: hadoop refused to connect

After installing Hadoop, I entered the address http://192.168.29.134:9870 in my local browser and got a "connection refused" error. I found a lot of related information online, and there are many possible causes: the firewall is still running, the namenode did not start, the journalnode was not started after the namenode came up, the clusterIDs of the namenode and datanode are inconsistent, Hadoop's temporary files were not deleted, and so on.
I worked through them one by one based on what I found online.

  • 1. At first, the firewall had not been turned off. I checked the open ports with:
 firewall-cmd --list-ports

If the command prints nothing, the required ports are not open, so turn off the firewall:

systemctl stop firewalld.service
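For reference, a fuller sequence to check and turn off the firewall on CentOS looks roughly like this (the disable step is optional; it keeps the firewall off after a reboot, which is only sensible on a learning machine):

 systemctl status firewalld.service     # check whether firewalld is running
 systemctl stop firewalld.service       # stop it for the current session
 systemctl disable firewalld.service    # optional: keep it off after reboot
 firewall-cmd --state                   # should now report "not running"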

Then reformat the namenode and restart Hadoop:

 ./stop-all.sh
 ./start-all.sh
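The reformat step itself is not shown above; the standard command, run while the cluster is stopped, is the following (note it wipes the existing HDFS metadata, so only do this on a throwaway learning cluster):

 hdfs namenode -format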

It still didn't work.

  • 2. The second suggestion was to delete the files in the tmp directory, format again, and restart.
Check the running processes:
jps
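For illustration (the process IDs here are just placeholders), the output looked something like this:

 3041 SecondaryNameNode
 2876 DataNode
 3320 Jps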

jps showed a SecondaryNameNode and a DataNode, but no NameNode. According to what I read online, that means the namenode did not start successfully.
The namenode may have failed to start because the old temporary files were not deleted, so I went to take a look at my core-site.xml to see where the temporary files are stored.
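The screenshot is not reproduced here, but judging from the path used below, the relevant part of core-site.xml should look something like this (a sketch; the exact value depends on your own setup):

 <property>
     <name>hadoop.tmp.dir</name>
     <value>/root/hadoop/tmp</value>
 </property>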

cd /root/hadoop/tmp

Go into the directory, take a look at what is inside, and delete it:

rm -rf dfs

Restart Hadoop and check the processes with jps again. This time the DataNode was gone too, and visiting Hadoop in the browser still did not work! So off I went to find out how to start the datanode.
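For reference, a single datanode can be started on its own; on Hadoop 3.x the command is:

 hdfs --daemon start datanode

(The older hadoop-daemon.sh start datanode script still works but is deprecated in 3.x.) The root cause here turned out to be something else, though, as the next step shows.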

  • 3. I really did not know where the problem was. Some netizens suggested checking the logs.
cd /usr/hadoop/hadoop-3.1.2/logs

I looked through the log files there; the datanode log turned out to be the relevant one:

 tail -200 hadoop-root-datanode-master.log

The datanode log reported an error. What went wrong?

Call From master/192.168.128.135 to master:8485 failed on connection exception: java.net.ConnectException: Connection refused
So I continued searching on Baidu for how to solve this problem.

  • 4. My persistence finally paid off, and I found the cause of the problem! The journalnode had not yet started when the namenode came up, which produces the error above.

https://www.cnblogs.com/tibit/p/7447190.html

In that article, the author explains that when the cluster is started with start-all.sh, the journalnode (port 8485) starts after the namenode. By default the namenode only retries the journalnode connection for about 10 seconds (maxRetries=10, sleepTime=1000), so if the journalnode is not up within that window, the error above is thrown. I applied the fix described there, and in the end I could finally access Hadoop from the browser. Hooray!
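I will not restate the whole linked post, but as I understand it the gist of the fix is to give the namenode more time to reach the journalnode by raising the IPC retry settings in core-site.xml, roughly like this:

 <property>
     <name>ipc.client.connect.max.retries</name>
     <value>100</value>
 </property>
 <property>
     <name>ipc.client.connect.retry.interval</name>
     <value>10000</value>
 </property>

After changing these, restart the cluster. An alternative workaround for the same timing issue is to start the journalnodes by hand (hdfs --daemon start journalnode) before starting the namenode.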


Origin blog.csdn.net/sinat_35803474/article/details/103795158