Call From s0/192.168.56.140 to s0:8020 failed on connection exception

ubuntu@s0:~$ hadoop fs -ls /

ls: Call From s0/192.168.56.140 to s0:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

At first I suspected that port 8020 on s0 was not open, but after checking I found that the firewall was not even enabled.
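For reference, these are typical ways on Ubuntu to check the firewall state and whether anything is listening on the NameNode RPC port (ufw and netstat are assumed to be available on the node):

# Check whether the ufw firewall is active (it was disabled in my case)
sudo ufw status

# Check whether any process is listening on the NameNode RPC port 8020
sudo netstat -tlnp | grep 8020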

Later, the core-site.xml configuration on all cluster nodes was changed from hdfs://s0/ to hdfs://s0:8020/, and the machines were rebooted.
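A minimal sketch of pushing the updated file to the other nodes (the node names s1 and s2 and the conf/ path are assumptions; adjust them to your cluster layout):

# Copy the modified core-site.xml to every other node in the cluster
for node in s1 s2; do
    scp $HADOOP_HOME/conf/core-site.xml $node:$HADOOP_HOME/conf/
done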

Then I executed hadoop namenode -format on s0.

Running hadoop fs -ls / again reported the same error.

Running the jps command showed that the NameNode process was not there.

Then I executed start-all.sh, and the NameNode came up.

Finally, hadoop fs -ls / worked.

 

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://s0:8020/</value>
	</property>
</configuration>

In addition, use the jps command to check whether the NameNode has actually started. If start-all.sh fails to bring it up, run the hadoop namenode command on its own, as sketched below.
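These are the standard commands for that check (jps ships with the JDK; on Hadoop 1.x, hadoop namenode runs the daemon in the foreground so its log output is visible directly):

# List running Java processes; a healthy HDFS master shows a NameNode entry
jps

# Start the NameNode in the foreground to see why it refuses to come up
hadoop namenode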

When I ran it by hand, I got this error:

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-ubuntu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

 

The directory does not exist? But it was there right after installation. Just to see what would happen, I ran the format again:

hadoop namenode -format

After that, the NameNode started successfully, which shows that the missing directory is rebuilt when the NameNode is formatted. But having to format the NameNode after every restart is not workable: it is not only tedious, the bigger problem is that formatting wipes out the data every time. The root cause had to be found.

Directory /tmp/hadoop-javoft/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. The directory lives under /tmp, and files under /tmp are temporary and are cleaned up regularly, which is where the bug comes from. To confirm, I checked before rebooting: the directories created by the earlier namenode formats were still there. After a reboot they were all gone.

Running start-dfs.sh once recreated some directories under /tmp, but dfs/name was still missing. So some directories and files are created by start-dfs.sh, while dfs/name is only created by hadoop namenode -format. The problem was now clear.
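A quick way to watch this behaviour (the path assumes the default hadoop.tmp.dir of /tmp/hadoop-${user.name}):

# Before and after a reboot: does the NameNode storage directory still exist?
ls -ld /tmp/hadoop-$USER/dfs/name

# start-dfs.sh recreates some directories under /tmp, but not dfs/name
start-dfs.sh
ls /tmp/hadoop-$USER/dfs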

The solution is simple: the location of these directories is derived from hadoop.tmp.dir, so just override the default value of hadoop.tmp.dir in conf/core-site.xml with a path that is not cleaned on reboot:

...
<property>
   <name>hadoop.tmp.dir</name>
   <value>/home/ubuntu/Documents/hadoop/hadoop-${user.name}</value>
   <description>A base for other temporary directories.</description>
</property>
...
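With the new hadoop.tmp.dir in place, one last format and restart is needed; a sketch, with the base directory matching the value configured above:

# Create the new base directory so Hadoop can write under it
mkdir -p /home/ubuntu/Documents/hadoop

# Re-format once so the NameNode metadata lands in the new location
hadoop namenode -format

# Restart HDFS; from now on the metadata survives reboots
start-dfs.sh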

Problem solved.

 

 

 

 
