Trafodion Troubleshooting: Failed to replace a bad datanode on the existing pipeline

Symptom

While installing EsgynDB on a single node, database initialization failed with the following error:

Create Library Manager: Started

*** ERROR[8458] Unable to access ExpLOBInterfaceInsert interface after retry. Call to ExpLOBInterfaceInsert returned error LOB_DATA_WRITE_ERROR_RETRY(517). Error detail: 0. Cause: java.io.IOException: IOException flush: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.20.117:50010,DS-7705fe88-0f38-4cf5-afaa-96970779ccac,DISK]], original=[DatanodeInfoWithStorage[192.168.20.117:50010,DS-7705fe88-0f38-4cf5-afaa-96970779ccac,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2532)
org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:2378)
org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
org.trafodion.sql.HDFSClient.hdfsWriteImmediate(HDFSClient.java:567)
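
For context, the stack trace ends in FSDataOutputStream.hflush(), which forces the datanode write pipeline to acknowledge buffered data. A minimal sketch of that write path, assuming a Hadoop client with the cluster's hdfs-site.xml on the classpath (the class name and test file path are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HflushSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads hdfs-site.xml from the classpath
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/hflush_test"))) {
            out.writeBytes("test");
            // With dfs.replication=3 but only one live datanode, pipeline
            // recovery cannot find a replacement node, and this call throws
            // the IOException shown above.
            out.hflush();
        }
    }
}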

Solution

An online search turned up the following suggestion: modify hdfs-site.xml, adding or updating these two properties:

<property>
<name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
<value>true</value>
</property>
<property>
<name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
<value>NEVER</value>
</property>
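
As the error message itself notes, the replace-datanode-on-failure settings are client-side properties, so an HDFS client can also set them programmatically on its Configuration object. A minimal sketch, assuming the Hadoop client libraries are on the classpath (the class and method names are hypothetical):

import org.apache.hadoop.conf.Configuration;

public class ClientPolicyConfig {
    // Hypothetical helper: builds a client Configuration with the same
    // two settings as the hdfs-site.xml snippet above.
    public static Configuration withNeverReplacePolicy() {
        Configuration conf = new Configuration();
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // NEVER means: keep writing on whatever datanodes remain in the
        // pipeline instead of trying to replace a failed one. This avoids
        // the error above, but is only sensible on clusters with very few
        // datanodes, since it silently accepts under-replicated writes.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        return conf;
    }
}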

After adding these parameters and restarting, the error persisted. A further check showed the HDFS parameter dfs.replication was set to 3, i.e., three replicas per block. With only a single node (and hence a single datanode), a failed write pipeline can never find a replacement datanode, so the write keeps failing. Setting dfs.replication=1 and restarting HDFS resolved the problem.
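
For reference, the change that resolved the issue, expressed in the same hdfs-site.xml format as above:

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

Note that dfs.replication only applies to files created after the change; the replication factor of existing files can be lowered separately with the hdfs dfs -setrep command.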

Reposted from blog.csdn.net/Post_Yuan/article/details/103742004