Problems Encountered During the Official Hadoop Pseudo-Distributed Installation

Following the official Hadoop documentation at http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html, I installed Hadoop 3.2.0 in pseudo-distributed mode and ran into the problems recorded below:

1. ERROR: JAVA_HOME is not set and could not be found

Fix: Hadoop still cannot find JAVA_HOME from the environment; add or edit JAVA_HOME in etc/hadoop/hadoop-env.sh, and be sure to use an absolute path.
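A quick way to derive the absolute path to put into hadoop-env.sh (the JDK path below is an example, not a value from the original post):

```shell
# Example JDK path; on a real machine, find yours with:
#   readlink -f "$(command -v java)"
java_bin="/usr/lib/jvm/java-8-openjdk/bin/java"
# Strip the trailing /bin/java to get the directory for JAVA_HOME:
echo "export JAVA_HOME=${java_bin%/bin/java}"
```

The echoed `export` line is exactly what goes into etc/hadoop/hadoop-env.sh.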

2. bin/hdfs namenode -format fails with:

Unable to determine local hostname -falling back to 'localhost'

java.net.UnknownHostException: zhaochao: zhaochao: Name or service not known

Fix: add the hostname resolution to /etc/hosts manually:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 zhaochao
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
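To confirm the fix took effect, check that the machine's hostname now resolves the same way Hadoop's resolver will (assumes a glibc-based Linux where getent is available):

```shell
# Print the hostname Hadoop will try to resolve:
hostname
# Sanity check: localhost must always resolve from /etc/hosts:
getent hosts localhost
# After editing /etc/hosts this lookup should succeed too:
getent hosts "$(hostname)" || echo "hostname still not resolvable"
```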

3. sbin/start-dfs.sh fails with:

ERROR: Cannot set priority of namenode process 6017

Cause: the word "default" was misspelled in core-site.xml.

Fix: correct the spelling.
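For reference, this is the core-site.xml from the official SingleCluster guide; the misspelling most plausibly occurred in the fs.defaultFS property name, which must be spelled exactly like this (note the capital FS):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```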

4. sbin/start-dfs.sh fails with:

java.io.FileNotFoundException: /etc/hadoop-3.2.0/logs/fairscheduler-statedump.log (Permission denied)

Fix: run sbin/start-dfs.sh with sudo. (As items 5 and 6 below show, running as root brings its own problems; fixing the directory ownership as in item 6 is the cleaner solution.)

5. Errors on startup:

ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
ERROR: JAVA_HOME is not set and could not be found.

Cause: when the start scripts run as root, HDFS needs to be told which user each daemon should run as.

Fix: define the daemon users in etc/hadoop/hadoop-env.sh:

export HDFS_NAMENODE_USER=zhaochao
export HDFS_DATANODE_USER=zhaochao

6. Error: java.io.FileNotFoundException: /etc/hadoop-3.2.0/logs/fairscheduler-statedump.log

Cause: the current user does not have permission to write inside the Hadoop install directory.

Fix: sudo chown zhaochao:zhaochao -R hadoop-3.2.0

7. Error:

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-zhaochao/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible

Cause: HDFS keeps its storage under /tmp by default, and the OS cleans /tmp automatically, wiping the NameNode's metadata.

Fix: set hadoop.tmp.dir in core-site.xml to a persistent directory, then re-initialize HDFS with bin/hdfs namenode -format.
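A minimal sketch of the core-site.xml addition; the directory below is an example path, not from the original post, and should be any persistent location the Hadoop user can write to:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- example path; any persistent, user-writable directory works -->
  <value>/home/zhaochao/hadoop-tmp</value>
</property>
```

Remember that after changing this, bin/hdfs namenode -format must be run again before restarting dfs.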

8. The web UI on port 50070 is unreachable; netstat shows nothing listening on that port at all.

Cause: in Hadoop 3.x the NameNode web UI moved from port 50070 to 9870 by default, so the UI is actually at http://localhost:9870/. To keep using the old 50070 port, add the dfs.http.address property (a deprecated alias of dfs.namenode.http-address) to hdfs-site.xml and restart dfs (sbin/stop-dfs.sh, then sbin/start-dfs.sh):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:50070</value>
  </property>
</configuration>


Reposted from blog.csdn.net/ddsszzy/article/details/86681824