Developing Spark in IDEA: "Failed to locate the winutils binary in the hadoop binary path"

Reposted from: https://blog.csdn.net/Utopia_1919/article/details/52451952

Today I was tidying up my computer and deleted some things I thought were useless. When I went back to Spark development, Spark reported this error:

16/09/06 17:20:43 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable D:\hadoop-2.6.4\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
    at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:86)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2160)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2160)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2160)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:322)
    at lalalallaal$.main(lalalallaal.scala:9)
    at lalalallaal.main(lalalallaal.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
16/09/06 17:20:44 INFO SecurityManager: Changing view acls to: Utopia
16/09/06 17:20:44 INFO SecurityManager: Changing modify acls to: Utopia

After solving it, I am writing up my experience here.

First, we need to understand that Hadoop's native tooling targets Linux. When we develop Spark in IDEA on Windows, anything that goes through Hadoop's lower layers (the file system, for example) needs the Windows helper binary winutils.exe, which is not shipped with a plain Hadoop download; that is why this error appears.
Even with this error, you can still use the Spark computation framework, but a call like saveAsTextFile will fail, because Spark writes output through Hadoop's HDFS/file-system code, and in a Windows environment without a Hadoop installation that code has nothing to call. So what do we do?
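As a stopgap, some developers point Hadoop at a local install directory programmatically, by setting the `hadoop.home.dir` system property before any Hadoop or Spark class loads. This is only a sketch: the path is the one from the log above, and you would substitute your own; the class name `WinutilsWorkaround` is illustrative, not from any library.

```java
// Sketch: set hadoop.home.dir before the first Hadoop/Spark class loads.
// Hadoop's Shell class reads this system property (falling back to the
// HADOOP_HOME environment variable) to locate bin\winutils.exe.
public class WinutilsWorkaround {
    public static void main(String[] args) {
        // Illustrative path, taken from the error log above.
        System.setProperty("hadoop.home.dir", "D:\\hadoop-2.6.4");
        // ...now it is safe to create the SparkContext / SparkSession.
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

Note this only tells Hadoop where to look; winutils.exe still has to exist at that location, which is what the steps below arrange.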

Solution:
Step 1: Download the matching version of Hadoop from the official site.
Step 2: Extract it to any path you like. The extraction may report errors; ignore them — they come from Linux-specific files that Windows does not support.
Step 3: Set the environment variables: add a system variable HADOOP_HOME pointing to the directory you extracted to, then append %HADOOP_HOME%\bin and %HADOOP_HOME%\sbin to Path.
Step 4: Find a recompiled winutils compatibility package for your Hadoop version. One can be downloaded here:
http://download.csdn.net/detail/utopia_1919/9623357
Step 5: After downloading, replace the corresponding two directories in your Hadoop folder with the ones from the package.
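After step 5, you can sanity-check the layout before going back to IDEA. Hadoop's Shell class looks for exactly %HADOOP_HOME%\bin\winutils.exe; the helper below is a minimal sketch of that check (the class and method names are mine, not part of Hadoop):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal sanity check: does winutils.exe sit where Hadoop expects it?
public class HadoopCheck {
    // The exact file Hadoop's Shell.getQualifiedBinPath tries to resolve.
    public static Path winutilsPath(String hadoopHome) {
        return Paths.get(hadoopHome, "bin", "winutils.exe");
    }

    public static boolean winutilsPresent(String hadoopHome) {
        return Files.isRegularFile(winutilsPath(hadoopHome));
    }

    public static void main(String[] args) {
        // Illustrative fallback path; normally HADOOP_HOME is set in step 3.
        String home = System.getenv().getOrDefault("HADOOP_HOME", "D:\\hadoop-2.6.4");
        System.out.println("winutils present: " + winutilsPresent(home));
    }
}
```

If this prints `false`, re-check steps 3 and 5 before blaming Spark.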

Back in IDEA, you will find the bug is completely solved. (If IDEA was already open, restart it so it picks up the new environment variables.)
