Hadoop/Spark Error Collection

1. Spark error: the REPL fails to initialize the compiler

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Exception in thread "main" java.lang.NullPointerException

Cause: an incompatible JDK version; the Scala REPL that spark-shell uses cannot initialize on that Java version.

Solution: run Spark on Java 1.8, which works without issues.
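
A minimal sketch of switching to JDK 1.8 before launching spark-shell; the installation path below is an assumption and varies by system:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # assumed install path; point at your JDK 1.8
export PATH="$JAVA_HOME/bin:$PATH"
java -version                                      # should report version 1.8.x
spark-shell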

2. Spark error: sparkDriver cannot bind an address

16/06/27 19:36:34 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
(the WARN line above repeats for each retry)
16/06/27 19:36:34 ERROR SparkContext: Error initializing SparkContext.
java.net.BindException: Cannot assign requested address: Service 'sparkDriver' failed after 16 retries!

Cause: in the hosts file, the IP mapped to the master hostname does not match the machine's actual address.

Solution: change the hosts entry to the IP reported by the ifconfig command.
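
For example, on a Linux host (the address below is illustrative only):

ifconfig                    # note the machine's actual IP, e.g. 192.168.1.100
sudo vi /etc/hosts          # make the master entry use that IP:
#   192.168.1.100   master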

3. Hadoop: the DataNode fails to start

        1. Common case: the DataNode used to start, but after the cluster was stopped once it no longer comes up.

            Cause: the cluster start scripts did not start the DataNode successfully.

            Solution: after starting the cluster, start the DataNode by itself with hadoop-daemon.sh start datanode (see the sketch below).
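
            A minimal sketch, run on the DataNode host after the cluster is up:

            hadoop-daemon.sh start datanode   # start only the DataNode daemon
            jps                               # confirm a DataNode process is now listed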

        2. The DataNode log shows a clusterID mismatch:

2016-07-17 21:22:14,616 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/tmp/dfs/data: namenode clusterID = CID-fd069c99-8004-47e1-9f67-a619bf4e9b60; datanode clusterID = CID-9a628355-6954-473b-a66c-d34d7c2b3805

            Cause: when the file system is formatted, a current/VERSION file is written in the NameNode data directory (the local path set by dfs.namenode.name.dir); it records a clusterID identifying that format of the NameNode. If the NameNode is formatted repeatedly, the current/VERSION file kept by the DataNode (under the local path set by dfs.datanode.data.dir) still holds the clusterID from the first format, so the DataNode's and NameNode's IDs no longer match.

            Solution 1: in the current/VERSION file under the DataNode's dfs.datanode.data.dir path, change the clusterID to match the NameNode's, then restart the DataNode (see the sketch after the two solutions).

            Solution 2: delete the Hadoop-related temporary files under /tmp and empty the directory pointed to by hadoop.tmp.dir (note: this discards the HDFS data stored on that node).
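
            A sketch of Solution 1, using the paths and clusterIDs from the log above; the NameNode directory dfs/name is an assumption based on the same hadoop.tmp.dir layout:

            grep clusterID /usr/local/hadoop/tmp/dfs/name/current/VERSION   # on the NameNode: the authoritative ID
            # on the DataNode: swap the stale ID for the NameNode's (IDs taken from the log above)
            sed -i 's/CID-9a628355-6954-473b-a66c-d34d7c2b3805/CID-fd069c99-8004-47e1-9f67-a619bf4e9b60/' \
                /usr/local/hadoop/tmp/dfs/data/current/VERSION
            hadoop-daemon.sh start datanode                                 # restart the DataNode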

4. Hadoop: the NameNode fails to start

        1. Common case: on a freshly installed cluster, the NameNode fails to start.

            Cause: the NameNode data storage directory has not been created.

            Solution: create the NameNode data directory and grant it the required permissions (see the sketch below).
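
            A minimal sketch, assuming the path and account below (substitute your dfs.namenode.name.dir value and the user that runs HDFS):

            mkdir -p /usr/local/hadoop/tmp/dfs/name                  # assumed dfs.namenode.name.dir
            chown -R hadoop:hadoop /usr/local/hadoop/tmp/dfs/name    # 'hadoop' user/group is an assumption
            chmod -R 755 /usr/local/hadoop/tmp/dfs/name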

        2. Related to error 3 above: after the NameNode directory is deleted, the NameNode will not start the next time the cluster is brought up.

            Cause: HDFS has not been formatted.

            Solution: format HDFS with the hadoop namenode -format command (see below).
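
            Run on the NameNode host; this writes a fresh fsimage and a new clusterID (on newer Hadoop releases the equivalent command is hdfs namenode -format):

            hadoop namenode -format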

5. To be continued



Reposted from blog.csdn.net/qq_21467113/article/details/86688276