Submitting jobs from IDEA to run on YARN: a record of the bugs encountered

1. Connecting to ResourceManager at /0.0.0.0:8032

19/09/04 10:54:16 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/09/04 10:54:19 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
19/09/04 10:54:21 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
  • Cause:
    yarn-site.xml defaults the ResourceManager address to 0.0.0.0:8032, so a client running outside the cluster ends up trying to connect to itself. The ResourceManager hostname must be configured manually.
# Set it in the configuration file (yarn-site.xml) ...
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop001</value>
</property>
# ... or set it in Java code
configuration.set("yarn.resourcemanager.hostname", "hadoop001");
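
Setting the hostname alone is usually enough, because the concrete endpoints are derived from it: yarn.resourcemanager.address defaults to ${yarn.resourcemanager.hostname}:8032, the scheduler address to port 8030, and so on. If your cluster listens on a non-default port, set the full address directly (a one-line sketch; hadoop001:8032 is just the default port seen in the log above):

// Only needed when the ResourceManager uses a non-default port
configuration.set("yarn.resourcemanager.address", "hadoop001:8032");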

2. org.apache.hadoop.security.AccessControlException: Permission denied:

19/09/04 10:26:37 WARN security.UserGroupInformation: PriviledgedActionException as:T460 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=T460, access=WRITE, inode="/tmp/hadoop-yarn":hadoop:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3868)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:3850)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6820)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4562)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4532)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4505)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:884)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:328)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
  • Cause:
    The local machine's user (here T460) has no permission to write to HDFS paths owned by the hadoop user, such as /tmp/hadoop-yarn (drwxr-xr-x).

  • Solution:

# Set the submitting user in Java code, or pass -DHADOOP_USER_NAME=hadoop as a JVM option in the IDEA run configuration
System.setProperty("HADOOP_USER_NAME","hadoop");
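
If you prefer not to touch global JVM state, an equivalent approach is to wrap the submission in an explicit remote-user context via UserGroupInformation (a minimal sketch; "hadoop" is simply the user that owns the HDFS directories on this cluster, and the job wiring is left to your driver):

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class SubmitAsClusterUser {
    public static void main(String[] args) throws Exception {
        // Everything inside doAs runs as "hadoop", so the NameNode's
        // permission check sees "hadoop" instead of the local OS user.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            // build the Configuration and Job here, then call job.waitForCompletion(true)
            return null;
        });
    }
}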

3. No job jar file set. User classes may not be found. See Job or Job#setJar(String)

  • Cause:

    The job submitted to YARN cannot find the jar that contains the user classes.

  • Solution:

# Point the job at the jar explicitly (note: this is the output path of the jar artifact you configured in IDEA)
  job.setJar("D:\\IDEAspaces\\hdfs\\out\\artifacts\\hdfs_jar\\hdfs.jar");
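
The distinction from the usual job.setJarByClass(...) call matters here: setJarByClass can only locate a jar if the driver class itself was loaded from one (e.g. via hadoop jar). Launched from IDEA, the driver runs from loose .class files, so the built artifact has to be named explicitly (a sketch; MyDriver is a placeholder for your driver class):

// Enough when the driver already runs from a jar (e.g. `hadoop jar hdfs.jar MyDriver`):
job.setJarByClass(MyDriver.class);
// Required when launching from IDEA, where the classes are not packaged in a jar:
job.setJar("D:\\IDEAspaces\\hdfs\\out\\artifacts\\hdfs_jar\\hdfs.jar");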

4. could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

19/09/04 14:05:51 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/09/04 14:06:12 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:2008)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1715)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1668)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:790)
19/09/04 14:06:12 WARN hdfs.DFSClient: Abandoning BP-744454093-192.168.0.3-1567066072363:blk_1073741843_1019
19/09/04 14:06:12 WARN hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.0.3:50010,DS-955a13a0-1285-465b-a7b2-005df4df159b,DISK]
19/09/04 14:06:12 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/hadoop/.staging/job_1567577007137_0001/job.jar could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3508)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
  • Cause:
    When the local client reaches the NameNode over the external network, the NameNode returns the DataNodes' internal IP addresses, which the client cannot connect to. We can work around this by addressing the DataNodes by hostname instead.

  • Solution:

# Step 1: map the cluster hostname in the local machine's hosts file
xxx.xxxx.xxx.xxx  hadoop001
# Step 2: configure the client to use DataNode hostnames, in Java code
configuration.set("dfs.client.use.datanode.hostname","true");
# or add the same client-side property to hdfs-site.xml
<property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
</property>
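
Putting the four fixes together, a driver submitted straight from IDEA ends up looking roughly like this (a consolidated sketch of the settings above; the NameNode address hdfs://hadoop001:8020 and the commented-out job wiring are assumptions to adapt to your cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RemoteYarnDriver {
    public static void main(String[] args) throws Exception {
        // Fix 2: submit as the cluster user, not the local Windows user.
        System.setProperty("HADOOP_USER_NAME", "hadoop");

        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://hadoop001:8020");       // assumed NameNode address
        configuration.set("mapreduce.framework.name", "yarn");            // run on YARN, not the local runner
        configuration.set("yarn.resourcemanager.hostname", "hadoop001");  // fix 1: real RM host, not 0.0.0.0
        configuration.set("dfs.client.use.datanode.hostname", "true");    // fix 4: reach DataNodes by hostname

        Job job = Job.getInstance(configuration);
        // Fix 3: name the jar built by the IDEA artifact explicitly.
        job.setJar("D:\\IDEAspaces\\hdfs\\out\\artifacts\\hdfs_jar\\hdfs.jar");
        // set mapper, reducer, input and output paths here, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}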