Hadoop error: /bin/bash: /bin/java: No such file or directory

The baffling error log is shown below. It was most likely caused by reinstalling the Hadoop environment.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/liuxunming/MyConfigure/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/liuxunming/MyConfigure/apache-hive-2.3.1-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/01/05 14:17:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/05 14:17:29 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
18/01/05 14:17:30 INFO input.FileInputFormat: Total input paths to process : 1
18/01/05 14:17:30 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
18/01/05 14:17:30 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev f1deea9a313f4017dd5323cb8bbb3732c1aaccc5]
18/01/05 14:17:30 INFO mapreduce.JobSubmitter: number of splits:1
18/01/05 14:17:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1515042904174_0004
18/01/05 14:17:30 INFO impl.YarnClientImpl: Submitted application application_1515042904174_0004
18/01/05 14:17:30 INFO mapreduce.Job: The url to track the job: http://liuxunmingdeMacBook-Pro.local:8088/proxy/application_1515042904174_0004/
18/01/05 14:17:30 INFO mapreduce.Job: Running job: job_1515042904174_0004
18/01/05 14:17:33 INFO mapreduce.Job: Job job_1515042904174_0004 running in uber mode : false
18/01/05 14:17:33 INFO mapreduce.Job:  map 0% reduce 0%
18/01/05 14:17:33 INFO mapreduce.Job: Job job_1515042904174_0004 failed with state FAILED due to: Application application_1515042904174_0004 failed 2 times due to AM Container for appattempt_1515042904174_0004_000002 exited with  exitCode: 127
For more detailed output, check application tracking page:http://liuxunmingdeMacBook-Pro.local:8088/cluster/app/application_1515042904174_0004Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1515042904174_0004_02_000001
Exit code: 127
Stack trace: ExitCodeException exitCode=127: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
    at org.apache.hadoop.util.Shell.run(Shell.java:482)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 127
Failing this attempt. Failing the application.
18/01/05 14:17:33 INFO mapreduce.Job: Counters: 0

That is the full error log. The two key lines are:

Stack trace: ExitCodeException exitCode=127
For more detailed output, check application tracking page:http://liuxunmingdeMacBook-Pro.local:8088/cluster/app/application_1515042904174_0004Then, click on links to logs of each attempt.
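
Exit code 127 is the shell's "command not found" status: the YARN container launch script tried to run /bin/java, which does not exist on a stock macOS install (this matches the /bin/bash: /bin/java: No such file or directory message in the title). A few standard commands confirm the situation; this is just a diagnostic sketch, and /usr/libexec/java_home is macOS-specific:

ls -l /bin/java          # ls: /bin/java: No such file or directory
which java               # usually /usr/bin/java
/usr/libexec/java_home   # prints the real JDK home directory on macOS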

Searching the web for the first line, the answers that come up boil down to the following:
1. Change the JAVA_HOME environment variable in hadoop-env.sh (link 1; a sketch follows this list);
2. Change yarn.application.classpath and similar classpath settings in yarn-site.xml (link 2);
3. Create a java symlink on the Mac (link 3, and link 4).
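
Fix 1, for reference, usually amounts to a single line in hadoop-env.sh. This is only a sketch of the commonly suggested change, assuming a macOS JDK located via /usr/libexec/java_home (on Linux you would hard-code the JDK path instead):

export JAVA_HOME=$(/usr/libexec/java_home)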
After trying all of the above, the problem remained. In the comments under link 4, however, I found another article that claimed to truly solve it. I had hit this problem before and vaguely remembered fixing it back then by rebooting and typing some command; sure enough, when I opened the link I found my own old comment on that very article. So I am writing this post to record the pitfall, to make the fix easy to find if it ever bites again.

The solution, adapted from that article, is as follows:
1. Reboot the Mac into Recovery Mode and open Terminal.
2. Run the command below to disable SIP. SIP (System Integrity Protection) roughly means that even with sudo you cannot modify system-level directories; it is enabled by default.

csrutil disable
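
To confirm the state before and after rebooting, csrutil also has a status subcommand:

csrutil status   # reports whether SIP is currently enabled or disabled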

3. Reboot back into normal mode, open Terminal, and run the command that creates the java symlink:

sudo ln -s /usr/bin/java /bin/java

Enter your system password and that's it; the error is fixed.
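
A quick sanity check that the link now resolves (the exact version output will depend on your installed JDK):

ls -l /bin/java      # should show /bin/java -> /usr/bin/java
/bin/java -version   # should print the same version as java -version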

Follow-up: worried that disabling SIP might have side effects on the system itself, I decided to turn it back on. The test procedure was as follows:
First reboot and run the job again: the first attempt failed with a "safe mode" message (see the note at the end of this post), the second succeeded.
Then reboot into Recovery Mode again and re-enable SIP:

csrutil enable
Successfully enabled System Integrity Protection. Please restart the machine for the changes to take effect.

Reboot once more and run the job again: as before, the first attempt failed with the safe-mode message and the second succeeded.
At this point I concluded that the Hadoop environment works normally, even with SIP re-enabled.
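
A note on the first-attempt "safe mode" failures above (my reading of the symptom, not something the log spells out): after a reboot, the HDFS NameNode starts in safe mode and rejects jobs until enough DataNode block reports have arrived, which is why the first submission fails and a retry succeeds. Instead of waiting, you can check and leave safe mode explicitly:

hdfs dfsadmin -safemode get     # shows whether safe mode is ON or OFF
hdfs dfsadmin -safemode leave   # take the NameNode out of safe mode manually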

Reposted from blog.csdn.net/diyangxia/article/details/78982867