How to handle a Hadoop upload error

2019-03-28 14:12:56,809 INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP-132490593-192.168.231.130-1550804841981:blk_1073741825_1001 took 43889ms (threshold=30000ms); ack: seqno: 288 reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 34765497432 flag: 0 flag: 0, targets: [DatanodeInfoWithStorage[192.168.231.130:9866,DS-26d48414-b938-4682-95ab-09307534189d,DISK], DatanodeInfoWithStorage[192.168.231.132:9866,DS-af096564-e34d-414b-bd1e-9fdde52a2e72,DISK]]
2019-03-28 14:12:59,724 WARN hdfs.DataStreamer: Exception for BP-132490593-192.168.231.130-1550804841981:blk_1073741825_1001
java.io.IOException: Bad response ERROR for BP-132490593-192.168.231.130-1550804841981:blk_1073741825_1001 from datanode DatanodeInfoWithStorage[192.168.231.132:9866,DS-af096564-e34d-414b-bd1e-9fdde52a2e72,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1126)
2019-03-28 14:13:06,022 WARN hdfs.DataStreamer: Error Recovery for BP-132490593-192.168.231.130-1550804841981:blk_1073741825_1001 in pipeline [DatanodeInfoWithStorage[192.168.231.130:9866,DS-26d48414-b938-4682-95ab-09307534189d,DISK], DatanodeInfoWithStorage[192.168.231.132:9866,DS-af096564-e34d-414b-bd1e-9fdde52a2e72,DISK]]: datanode 1(DatanodeInfoWithStorage[192.168.231.132:9866,DS-af096564-e34d-414b-bd1e-9fdde52a2e72,DISK]) is bad.
2019-03-28 14:13:29,494 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.231.130:9866,DS-26d48414-b938-4682-95ab-09307534189d,DISK]], original=[DatanodeInfoWithStorage[192.168.231.130:9866,DS-26d48414-b938-4682-95ab-09307534189d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
put: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.231.130:9866,DS-26d48414-b938-4682-95ab-09307534189d,DISK]], original=[DatanodeInfoWithStorage[192.168.231.130:9866,DS-26d48414-b938-4682-95ab-09307534189d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
Cause:
    The write cannot complete. My environment has 3 datanodes and the replication factor is set to 3, so a write goes through a pipeline of 3 machines. The default replace-datanode-on-failure.policy is DEFAULT: when the cluster has 3 or more datanodes, the client will look for another datanode to replace the failed one in the pipeline and copy to it. Since this cluster has only 3 machines, there is no spare node to substitute, so as soon as one datanode misbehaves the write can never succeed.
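    To confirm how many datanodes the NameNode actually sees (and therefore whether there is any spare node for the replacement policy to pick), the datanode report can be checked with hdfs dfsadmin -report, or programmatically. Below is a minimal sketch using the Hadoop Java API; the fs.defaultFS URI is a placeholder for illustration, not a value taken from this cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class ListDatanodes {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://192.168.231.130:9000"); // placeholder NameNode URI

            try (FileSystem fs = FileSystem.get(conf)) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                // getDataNodeStats() returns every datanode the NameNode knows about,
                // the same information shown by `hdfs dfsadmin -report`.
                for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                    System.out.println(dn.getHostName() + "  " + dn.getXferAddr());
                }
            }
        }
    }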
Solution:
    Edit hdfs-site.xml and add or modify the following two properties:
    <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <value>NEVER</value>
    </property>
    dfs.client.block.write.replace-datanode-on-failure.enable controls whether the client applies a replacement policy at all when a write fails; the default of true is fine and can be left as is.
    dfs.client.block.write.replace-datanode-on-failure.policy controls how a failed datanode is handled. With DEFAULT, when the replication factor is 3 or more the client tries to swap in another datanode before continuing the write, while with 2 replicas it keeps writing to the remaining nodes without a replacement. On a cluster with only 3 datanodes there is no spare node to swap in, so a single unresponsive node fails the whole write; setting the policy to NEVER turns the replacement off.
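    Because both settings are dfs.client.* properties, they take effect on the client side and can also be supplied by the writing application instead of (or in addition to) editing hdfs-site.xml. Below is a minimal sketch using the Hadoop Java API; the NameNode URI and file paths are placeholders for illustration, not values taken from this cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PutWithNeverPolicy {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Client-side settings: keep the replacement feature enabled, but never
            // try to swap in a new datanode when one in the pipeline fails.
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
            conf.set("fs.defaultFS", "hdfs://192.168.231.130:9000"); // placeholder NameNode URI

            try (FileSystem fs = FileSystem.get(conf)) {
                // Roughly equivalent to: hdfs dfs -put /tmp/local.txt /user/hadoop/local.txt
                fs.copyFromLocalFile(new Path("/tmp/local.txt"), new Path("/user/hadoop/local.txt"));
            }
        }
    }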

Reposted from blog.csdn.net/qq_38567039/article/details/88869029