Hadoop error - Exception in thread "main" ExitCodeException exitCode=1: chmod: cannot access '...': No such file or directory

I. Introduction

        When the author ran a simple API test against a newly installed Hadoop cluster, an exception occurred in IDEA: the file path could not be accessed, with "No such file or directory". The same exception had also appeared earlier, when the author imported HDFS data into HBase. At first glance this looks like a simple permission problem, but the causes behind it turned out to be quite different.

II. Exceptions

1. Exception One

        The author used IDEA to write a simple API program that reads files from HDFS and performs simple statistics on them; it failed with the following exception:

23/04/15 01:35:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/04/15 01:35:22 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
23/04/15 01:35:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
23/04/15 01:35:22 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
23/04/15 01:35:22 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/export/servers/hadoop-2.7.4/tmpha/mapred/staging/root2047142323/.staging/job_local2047142323_0001
Exception in thread "main" ExitCodeException exitCode=1: chmod: cannot access '/export/servers/hadoop-2.7.4/tmpha/mapred/staging/root2047142323/.staging/job_local2047142323_0001': No such file or directory

	at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
	at org.apache.hadoop.util.Shell.run(Shell.java:482)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:869)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:852)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
	at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:505)
	at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:486)
	at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:508)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:602)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:95)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:190)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at com.itcast.hadoop.example.dedup.DedupDriver.main(DedupDriver.java:48)

Process finished with exit code 1
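
        For context, the failing program was an ordinary MapReduce driver submitted directly from IDEA. The sketch below shows the general shape of such a driver; the class name and the identity map/reduce defaults are stand-ins for illustration, not the author's actual DedupDriver code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical minimal driver; the real mapper/reducer logic is omitted,
// so the identity Mapper/Reducer defaults are used.
public class MiniDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // loads any *-site.xml found on the classpath
        Job job = Job.getInstance(conf, "mini-driver");
        job.setJarByClass(MiniDriver.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Job submission creates and chmods the staging directory; that is
        // the step where the ExitCodeException in the trace above is thrown.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}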

        Here, the author's preliminary conclusion is that the exception is caused by the configuration files imported under the resources directory of the IDEA project: the HBase configuration file hbase-site.xml and the Hadoop configuration files core-site.xml and log4j.properties. Without these configuration files imported, the program runs normally.
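
        Before digging into permissions, it helps to check which configuration the program actually picks up, since any *-site.xml under resources lands on the classpath and overrides the defaults. Below is a small diagnostic sketch; the property keys are standard Hadoop 2.x keys, and in Hadoop 2.x the local staging path seen in the trace above is, by default, derived from hadoop.tmp.dir.

import org.apache.hadoop.conf.Configuration;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration(); // loads core-site.xml etc. from the classpath
        // "file:///" means the local file system; "hdfs://..." means the cluster config was picked up
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        System.out.println("hadoop.tmp.dir = " + conf.get("hadoop.tmp.dir"));
        System.out.println("mapreduce.framework.name = " + conf.get("mapreduce.framework.name"));
    }
}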

2. Exception Two

        Previously, the same exception occurred when the author wrote a program to read file data from HDFS and import it into HBase, on a cluster running Hadoop 2.10.1 and HBase 2.4.11.

23/04/12 02:51:22 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/export/servers/hadoop-2.10.1/tmpha/mapred/staging/huanganchi1077728205/.staging/job_local1077728205_0001
Exception in thread "main" ExitCodeException exitCode=1: chmod: cannot access '/export/servers/hadoop-2.10.1/tmpha/mapred/staging/huanganchi1077728205/.staging/job_local1077728205_0001': No such file or directory

	at org.apache.hadoop.util.Shell.runCommand(Shell.java:998)
	at org.apache.hadoop.util.Shell.run(Shell.java:884)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1216)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1310)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1292)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:770)
	at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:506)
	at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:487)
	at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:503)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:725)
	at org.apache.hadoop.mapreduce.JobResourceUploader.mkdirs(JobResourceUploader.java:648)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:167)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:128)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:101)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at com.itcast.hbase.hbase_hdfs_data.hdfs_data_Runner.run(hdfs_data_Runner.java:41)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at com.itcast.hbase.hbase_hdfs_data.hdfs_data_Runner.main(hdfs_data_Runner.java:50)
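
        Note the WARN line in Exception One, which recommends implementing the Tool interface and launching through ToolRunner; the trace for Exception Two shows the second program already does so (ToolRunner.run appears in the stack). For reference, a minimal ToolRunner-style entry point looks roughly like this; the class name is hypothetical and the job setup is elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical skeleton of a Tool-based runner, as the WARN message suggests.
public class MiniRunner extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf(); // includes any -D options parsed by ToolRunner
        // ... build and submit the Job here, as in the driver sketch above ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MiniRunner(), args));
    }
}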

        Comparing the two exceptions, both complain that the file path cannot be accessed, yet the stack traces differ considerably. The articles on this exception that the author consulted roughly divide into two approaches: modifying the Hadoop configuration files, or granting permissions to the user, since most of them attribute the problem to insufficient permissions making the file path inaccessible.

        The author then ran three tests in succession, which produced two further, different exceptions.

(1) Test One

        The configuration files that had been modified when setting up the Hadoop cluster were imported into the resources directory of the IDEA project; at runtime the program reported a permission exception:

23/04/13 12:55:22 INFO common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
23/04/13 12:55:22 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop02/192.168.8.202:2181.
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.8.117:37152, server: hadoop02/192.168.8.202:2181
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop02/192.168.8.202:2181, session id = 0x200006cfbc50006, negotiated timeout = 40000
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=huanganchi, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:541)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1705)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1723)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:642)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2966)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1160)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:880)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1675)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1524)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1521)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1521)
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:135)
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:112)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:150)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at com.itcast.hbase.hbase_hdfs_Mapreduce.hdfs_data_Driver.main(hdfs_data_Driver.java:31)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=huanganchi, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:541)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1705)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1723)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:642)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2966)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1160)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:880)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
	at org.apache.hadoop.ipc.Client.call(Client.java:1495)
	at org.apache.hadoop.ipc.Client.call(Client.java:1394)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:800)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1673)
	... 15 more

Process finished with exit code 1

(2) Test Two and Test Three

       

A note here: Test Two and Test Three were both carried out on top of Test One.
Test Two adds the following to the code:
        // Requires: org.apache.hadoop.conf.Configuration and org.apache.hadoop.fs.FileSystem
        Configuration conf = new Configuration();
        // Set the parameter specifying the type of file system to access: HDFS
        conf.set("fs.defaultFS", "hdfs://spark01:9000");
        // Set the client identity: access HDFS as the root user
        System.setProperty("HADOOP_USER_NAME", "root");
        // Obtain a file system client object via FileSystem's static factory method
        FileSystem fs = FileSystem.get(conf);

Test Three instead opens up permissions on the directory to which access was denied:
hadoop fs -chmod 777 /xxxx
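
        One caveat, assuming the /tmp layout shown in the trace from Test One: chmod on a single directory does not change anything beneath it, so if subdirectories under the denied path also block access, the recursive form may be needed:

hadoop fs -chmod -R 777 /tmp

        Opening a directory to 777 is only a diagnostic step; it is not something to leave in place on a shared cluster.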

Both tests reported the same exception, as follows:
23/04/13 19:56:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/04/13 19:56:31 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
23/04/13 19:56:32 WARN mapreduce.JobResourceUploader: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
23/04/13 19:56:32 INFO input.FileInputFormat: Total input files to process : 1
23/04/13 19:56:33 INFO mapreduce.JobSubmitter: number of splits:1
23/04/13 19:56:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1681322847291_0001
23/04/13 19:56:35 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
23/04/13 19:56:35 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1681322847291_0001
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.Apps.setEnvFromInputProperty(Ljava/util/Map;Ljava/lang/String;Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)V
	at org.apache.hadoop.mapreduce.v2.util.MRApps.setEnvFromInputProperty(MRApps.java:716)
	at org.apache.hadoop.mapred.YARNRunner.setupContainerLaunchContextForAM(YARNRunner.java:536)
	at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:583)
	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:324)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:253)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at com.itcast.hadoop.wordcount.WordCountDriver.main(WordCountDriver.java:48)

        Regarding this exception, the articles the author consulted did not provide a good explanation, so for now it can only tentatively be attributed to a version problem.
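
        A NoSuchMethodError on a Hadoop class such as org.apache.hadoop.yarn.util.Apps usually points to mixed Hadoop jar versions on the client classpath (for example, a newer mapreduce-client jar calling into an older yarn jar) rather than a problem on the cluster itself. One way to test the version theory, sketched here as an assumption rather than a verified fix, is to pin every Hadoop artifact in the project to the cluster's version, e.g. in Maven:

<properties>
    <!-- match the Hadoop version installed on the cluster -->
    <hadoop.version>2.10.1</hadoop.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>

        Running mvn dependency:tree and checking that every org.apache.hadoop artifact resolves to the same version is a quick way to confirm or rule out a mixed classpath.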

 
