EsgynDB Troubleshooting: Backup already exists

Symptom

When importing a backup set into EsgynDB, the command fails with "Backup full20190702_00212428826064850102 already exists":

SQL>import backup from location 'hdfs://172.31.234.16:8020/tmp/fulldb12parallel',tag 'full20190702_00212428826064850102';

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details. [2019-07-04 11:36:45]
*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: Backup full20190702_00212428826064850102 already exists.
org.apache.hadoop.hbase.pit.BackupRestoreClient.importBackup(BackupRestoreClient.java:4378)
org.apache.hadoop.hbase.pit.BackupRestoreClient.exportOrImportBackup(BackupRestoreClient.java:4517). [2019-07-04 11:36:45]

Solution

Check with "get all backup tags" whether the backup tag is already registered; it is not:

sqlci -> get all backup tags;

This suggests that an earlier import was interrupted partway through and left intermediate directories behind. Running "hadoop fs -ls /user/trafodion/backupsys" shows a leftover backup set under that directory; delete it:

[trafodion@cs02 ~]$ hadoop fs -ls /user/trafodion/backupsys
Found 2 items
drwxrwx---   - trafodion hbase          0 2019-07-03 09:46 /user/trafodion/backupsys/full20190702_00212428826064850102
[trafodion@cs02 ~]$ hadoop fs -rmr /user/trafodion/backupsys/full20190702_00212428826064850102
rmr: DEPRECATED: Please use 'rm -r' instead.
19/07/04 11:42:49 INFO fs.TrashPolicyDefault: Moved: 'hdfs://nameservice1/user/trafodion/backupsys/full20190702_00212428826064850102' to trash at: hdfs://nameservice1/user/trafodion/.Trash/Current/user/trafodion/backupsys/full20190702_00212428826064850102
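A consolidated sketch of the check-and-clean sequence above (assuming sqlci accepts commands on standard input; the tag and paths are the ones from this example, so substitute your own):

# List the registered backup tags; if the failing tag is absent here but a
# directory with its name exists under backupsys, that directory is stale.
# (Assumes sqlci reads commands from stdin; otherwise run it interactively.)
echo "get all backup tags;" | sqlci

# Inspect the staging area used by import/export.
hadoop fs -ls /user/trafodion/backupsys

# Remove the stale directory. 'rm -r' is the non-deprecated form of 'rmr',
# and the files go to the HDFS trash, so the deletion is recoverable.
hadoop fs -rm -r /user/trafodion/backupsys/full20190702_00212428826064850102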

After the deletion, the import fails again with the following error:

>import backup from location 'hdfs://172.31.234.16:8020/tmp/fulldb12parallel',tag 'full20190702_00212428826064850102';
cli: do_get_servers process type TMID err=0, num_servers=1

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details.

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: java.lang.Exception: doImport thread 120 FAILED with error:1 Error Detail: Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
The snapshot 'cf8492b79f8843b494ee491cefe146a0' already exists in the destination: hdfs://nameservice1/hbase/.hbase-snapshot/cf8492b79f8843b494ee491cefe146a0

Based on the error message, delete the leftover snapshot under the hbase directory:

hadoop fs -rmr hdfs://nameservice1/hbase/.hbase-snapshot/cf8492b79f8843b494ee491cefe146a0
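Before deleting, it is worth listing the snapshot directory to confirm that the name from the error message is actually present (the id cf8492b79f8843b494ee491cefe146a0 below is the one reported above):

# Confirm the leftover snapshot exists, then remove it ('rm -r' is the
# non-deprecated form of 'rmr' used above).
hadoop fs -ls hdfs://nameservice1/hbase/.hbase-snapshot
hadoop fs -rm -r hdfs://nameservice1/hbase/.hbase-snapshot/cf8492b79f8843b494ee491cefe146a0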

Note: the same method also applies when an earlier import only ran partway through:

>>import backup from location 'hdfs://172.31.234.16:8020/tmp/full12backup',tag 'full4backup_00212429333584910646';

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details.

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: Backup full4backup_00212429333584910646 already imported.
org.apache.hadoop.hbase.pit.BackupRestoreClient.importBackup(BackupRestoreClient.java:4863)
org.apache.hadoop.hbase.pit.BackupRestoreClient.exportOrImportBackup(BackupRestoreClient.java:5001).

--- SQL operation failed with errors.

>>get all backup tags;

--- SQL operation complete.

[trafodion@cs02 ~]$ hadoop fs -ls /user/trafodion/backupsys 
Found 1 items
drwxrwx---   - trafodion hbase          0 2019-07-09 18:06 /user/trafodion/backupsys/full4backup_00212429333584910646
[trafodion@cs02 ~]$ hadoop fs -rmr /user/trafodion/backupsys/full4backup_00212429333584910646
rmr: DEPRECATED: Please use 'rm -r' instead.
19/07/09 20:48:42 INFO fs.TrashPolicyDefault: Moved: 'hdfs://nameservice1/user/trafodion/backupsys/full4backup_00212429333584910646' to trash at: hdfs://nameservice1/user/trafodion/.Trash/Current/user/trafodion/backupsys/full4backup_00212429333584910646

Also, even after removing the content under hdfs://nameservice1/hbase/.hbase-snapshot and /user/trafodion/backupsys, the import may sometimes still fail as follows:

SQL>import backup from location 'hdfs://10.19.41.29:8020/tmp/chenlong', tag 'fb_test01_00212438653245090402';

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details. [2019-10-24 15:29:33]
*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: java.lang.Exception: doImport thread 943 FAILED with error:1 Error Detail: Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
19/10/24 15:29:32 INFO snapshot.ExportSnapshot: Copy Snapshot Manifest
19/10/24 15:29:32 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /hbase/.hbase-snapshot/.tmp/8a77abfae7f840b932784cbdec540991/.snapshotinfo (inode 9960793): File does not exist. Holder DFSClient_NONMAPREDUCE_418811023_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3755)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3556)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3412)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:688)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apac

Solution: restart YARN! The import/export operations all run as MapReduce jobs, so an unhealthy YARN can produce failures like the LeaseExpiredException above.
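How to restart YARN depends on the deployment. On a Cloudera Manager-managed cluster (which the CDH stack traces above suggest), restart the YARN service from the CM UI. On a package-based install, something like the following should work; the service names are the standard CDH package names and may differ on other distributions:

# On the ResourceManager host:
sudo service hadoop-yarn-resourcemanager restart
# On every NodeManager host:
sudo service hadoop-yarn-nodemanager restart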
