Problems encountered while using the CarbonData DataMap

Exception 1: after creating a DataMap, loading data into the table throws an exception.
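For reference, the statements below are a sketch of the setup that triggers the failure, reconstructed from the table name, DataMap name, and index column that appear in the stack trace (eve_not_partition_tbl, eve_not_partition_tbl_bloomfilter_dm, src_ip); the table schema and the DMPROPERTIES values are my assumptions, not the original DDL.

    -- table created WITHOUT sort columns (this turns out to matter; see the conclusion)
    CREATE TABLE eve_not_partition_tbl (
        src_ip     STRING,
        dst_ip     STRING,
        event_time TIMESTAMP
    )
    STORED AS carbondata;

    -- BloomFilter DataMap on the src_ip column
    CREATE DATAMAP eve_not_partition_tbl_bloomfilter_dm
    ON TABLE eve_not_partition_tbl
    USING 'bloomfilter'
    DMPROPERTIES ('INDEX_COLUMNS' = 'src_ip', 'BLOOM_SIZE' = '640000');

    -- loading data into the table then fails
    LOAD DATA INPATH 'hdfs:///tmp/events.csv' INTO TABLE eve_not_partition_tbl;

The load aborts while writing the src_ip.bloomindex file: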

       at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: org.apache.carbondata.processing.datamap.DataMapWriterException: java.io.IOException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /carbon/spark2.3.2/default/eve_not_partition_tbl_830f7cfd-16b6-4e30-b237-bbc74ea9e1d2/eve_not_partition_tbl_bloomfilter_dm/2/0_batchno0-0-2-1576672548810/src_ip.bloomindex for DFSClient_NONMAPREDUCE_193894307_1 on 10.10.151.15 because DFSClient_NONMAPREDUCE_193894307_1 is already the current lease holder.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3140)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2813)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.execute(CarbonRowDataWriterProcessorStepImpl.java:143)
        ... 12 more
Caused by: org.apache.carbondata.processing.datamap.DataMapWriterException: java.io.IOException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /carbon/spark2.3.2/default/eve_not_partition_tbl_830f7cfd-16b6-4e30-b237-bbc74ea9e1d2/eve_not_partition_tbl_bloomfilter_dm/2/0_batchno0-0-2-1576672548810/src_ip.bloomindex for DFSClient_NONMAPREDUCE_193894307_1 on 10.10.151.15 because DFSClient_NONMAPREDUCE_193894307_1 is already the current lease holder.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3140)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2813)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

        at org.apache.carbondata.processing.datamap.DataMapWriterListener.register(DataMapWriterListener.java:107)
        at org.apache.carbondata.processing.datamap.DataMapWriterListener.registerAllWriter(DataMapWriterListener.java:82)
        at org.apache.carbondata.processing.store.CarbonFactDataHandlerModel.createCarbonFactDataHandlerModel(CarbonFactDataHandlerModel.java:301)
        at org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.doExecute(CarbonRowDataWriterProcessorStepImpl.java:163)
        at org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.access$000(CarbonRowDataWriterProcessorStepImpl.java:57)
        at org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl$DataWriterRunnable.run(CarbonRowDataWriterProcessorStepImpl.java:331)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
Caused by: java.io.IOException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /carbon/spark2.3.2/default/eve_not_partition_tbl_830f7cfd-16b6-4e30-b237-bbc74ea9e1d2/eve_not_partition_tbl_bloomfilter_dm/2/0_batchno0-0-2-1576672548810/src_ip.bloomindex for DFSClient_NONMAPREDUCE_193894307_1 on 10.10.151.15 because DFSClient_NONMAPREDUCE_193894307_1 is already the current lease holder.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3140)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2813)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

        at org.apache.carbondata.datamap.bloom.AbstractBloomDataMapWriter.initDataMapFile(AbstractBloomDataMapWriter.java:172)
        at org.apache.carbondata.datamap.bloom.AbstractBloomDataMapWriter.<init>(AbstractBloomDataMapWriter.java:63)
        at org.apache.carbondata.datamap.bloom.BloomDataMapWriter.<init>(BloomDataMapWriter.java:55)
        at org.apache.carbondata.datamap.bloom.BloomCoarseGrainDataMapFactory.createWriter(BloomCoarseGrainDataMapFactory.java:214)
        at org.apache.carbondata.processing.datamap.DataMapWriterListener.register(DataMapWriterListener.java:104)
        ... 10 more

Exception 2: the DataMap is created successfully, but SQL queries filtering on the indexed column return empty results, even though the matching data actually exists in the table.
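For example (a hypothetical query against the table above; the filter value is made up), the symptom looks like this:

    -- src_ip is the BloomFilter-indexed column; matching rows do exist in the table
    SELECT * FROM eve_not_partition_tbl WHERE src_ip = '192.168.1.100';
    -- result: 0 rows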

Conclusion: my conclusion is that in the current test, both problems were caused by creating the table without specifying sort columns. Note: be sure to add sort columns, be sure to add sort columns, be sure to add sort columns — important things are said three times.
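A minimal sketch of the fix, assuming the same schema as above: declare the indexed column in SORT_COLUMNS through TBLPROPERTIES when creating the table, then recreate the DataMap and reload the data.

    CREATE TABLE eve_not_partition_tbl (
        src_ip     STRING,
        dst_ip     STRING,
        event_time TIMESTAMP
    )
    STORED AS carbondata
    TBLPROPERTIES ('SORT_COLUMNS' = 'src_ip');   -- specify the sort column(s) up front

With the sort column declared, both the load failure and the empty query results no longer occurred in my test.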

If you have questions, please contact me: QQ 394023466


Source: blog.csdn.net/Suubyy/article/details/103686311