Summary of errors encountered while running Hadoop

1. After starting Hadoop, port 9000 used by HDFS does not come up

     

The error reported is:

Directory /tmp/hadoop-javoft/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

The directory in question sits under /tmp, and files under /tmp are temporary: they are cleaned up periodically, which is how this bug surfaces. To confirm, check /tmp before rebooting; after several runs of hadoop namenode -format the expected directories are all there. Reboot, and they have all been deleted. Running start-dfs.sh once recreates some directories under the tmp directory, but dfs/name is still missing: start-dfs.sh creates only some of the directories and files, while dfs/name is created only by hadoop namenode -format. The cause is now clear: the formatted NameNode storage directory was wiped along with the rest of /tmp.
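One way to verify this step by step (a sketch assuming the default hadoop.tmp.dir of /tmp/hadoop-${user.name}, a user named javoft as in the error above, and an old-style Hadoop layout where the scripts live under bin/):

# After a reboot, the NameNode storage directory is gone along with the rest of /tmp
ls -ld /tmp/hadoop-javoft/dfs/name

# start-dfs.sh recreates some directories under the tmp directory...
bin/start-dfs.sh
ls /tmp/hadoop-javoft/dfs

# ...but dfs/name only appears after formatting the NameNode
bin/hadoop namenode -format
ls -ld /tmp/hadoop-javoft/dfs/name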

The fix is simple: the location of these directories is derived from hadoop.tmp.dir, so just override the default value of hadoop.tmp.dir in conf/core-site.xml:

...
<property>
   <name>hadoop.tmp.dir</name>
   <value>/home/javoft/Documents/hadoop/hadoop-${user.name}</value>
   <description>A base for other temporary directories.</description>
</property>
...
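Note that changing hadoop.tmp.dir only tells Hadoop where to look; the NameNode still has to be formatted once so that dfs/name is created under the new location. A sketch of the sequence (assuming the path configured above and that HDFS is currently stopped):

# Create dfs/name under the new, persistent hadoop.tmp.dir
bin/hadoop namenode -format

# Start HDFS and check that the NameNode is now listening on port 9000
bin/start-dfs.sh
netstat -an | grep 9000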

With that change in place, the problem is solved.

 
