Kylin and the HBase coprocessor timeout

While running report queries on Kylin, executing SQL failed with the following error:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Coprocessor passed deadline! Maybe server is overloaded
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.checkDeadline(CubeVisitService.java:226)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.visitCube(CubeVisitService.java:261)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.generated.CubeVisitProtos$CubeVisitService.callMethod(CubeVisitProtos.java:5555)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

        at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
        at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
        at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
        at org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
        at org.apache.kylin.rest.service.QueryService.executeRequest(QueryService.java:938)
        at org.apache.kylin.rest.service.QueryService.queryWithSqlMassage(QueryService.java:641)
        at org.apache.kylin.rest.service.QueryService.query(QueryService.java:208)
        at org.apache.kylin.rest.service.QueryService.queryAndUpdateCache(QueryService.java:468)
        at org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:429)
        at org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:367)
        at org.apache.kylin.rest.controller.QueryController.query(QueryController.java:87)
        at sun.reflect.GeneratedMethodAccessor130.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
        at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
        at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
        at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
        at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
        at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
        at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:215)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:200)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214)
        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at com.thetransactioncompany.cors.CORSFilter.doFilter(CORSFilter.java:209)
        at com.thetransactioncompany.cors.CORSFilter.doFilter(CORSFilter.java:244)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:494)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1137)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
        at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Coprocessor passed deadline! Maybe server is overloaded
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.checkDeadline(CubeVisitService.java:226)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.visitCube(CubeVisitService.java:261)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.generated.CubeVisitProtos$CubeVisitService.callMethod(CubeVisitProtos.java:5555)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.kylin.storage.hbase.cube.v2.ExpectedSizeIterator.checkState(ExpectedSizeIterator.java:101)
        at org.apache.kylin.storage.hbase.cube.v2.ExpectedSizeIterator.next(ExpectedSizeIterator.java:65)
        at org.apache.kylin.storage.hbase.cube.v2.ExpectedSizeIterator.next(ExpectedSizeIterator.java:32)
        at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
        at org.apache.kylin.storage.gtrecord.SortMergedPartitionResultIterator.hasNext(SortMergedPartitionResultIterator.java:70)
        at com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
        at org.apache.kylin.gridtable.GTStreamAggregateScanner$AbstractStreamMergeIterator.hasNext(GTStreamAggregateScanner.java:80)
        at org.apache.kylin.storage.gtrecord.SegmentCubeTupleIterator.hasNext(SegmentCubeTupleIterator.java:159)
        at com.google.common.collect.Iterators$5.hasNext(Iterators.java:596)
        at org.apache.kylin.storage.gtrecord.SequentialCubeTupleIterator.hasNext(SequentialCubeTupleIterator.java:144)
        at org.apache.kylin.query.enumerator.OLAPEnumerator.moveNext(OLAPEnumerator.java:63)
        at Baz$1$1.moveNext(Unknown Source)
        at org.apache.calcite.linq4j.EnumerableDefaults.groupBy_(EnumerableDefaults.java:825)
        at org.apache.calcite.linq4j.EnumerableDefaults.groupBy(EnumerableDefaults.java:761)
        at org.apache.calcite.linq4j.DefaultEnumerable.groupBy(DefaultEnumerable.java:302)
        at Baz.bind(Unknown Source)
        at org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:365)
        at org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:301)
        at org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:559)
        at org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:550)
        at org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:204)
        at org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:67)
        at org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:44)
        at org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:630)
        at org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:619)
        at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
        at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
        ... 83 more
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Coprocessor passed deadline! Maybe server is overloaded
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.checkDeadline(CubeVisitService.java:226)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.visitCube(CubeVisitService.java:261)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.generated.CubeVisitProtos$CubeVisitService.callMethod(CubeVisitProtos.java:5555)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1637)
        at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
        at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
        at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:107)
        at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.generated.CubeVisitProtos$CubeVisitService$Stub.visitCube(CubeVisitProtos.java:5616)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC$2.call(CubeHBaseEndpointRPC.java:297)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC$2.call(CubeHBaseEndpointRPC.java:266)
        at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1807)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Coprocessor passed deadline! Maybe server is overloaded
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.checkDeadline(CubeVisitService.java:226)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService.visitCube(CubeVisitService.java:261)
        at org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.generated.CubeVisitProtos$CubeVisitService.callMethod(CubeVisitProtos.java:5555)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1633)
        ... 13 more
org.apache.kylin.common.exceptions.KylinTimeoutException: coprocessor timeout after scanning 14655900 rows
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC.getCoprocessorException(CubeHBaseEndpointRPC.java:505)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC.access$300(CubeHBaseEndpointRPC.java:77)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC$3.update(CubeHBaseEndpointRPC.java:355)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC$3.update(CubeHBaseEndpointRPC.java:323)
        at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1810)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2019-06-11 15:08:53,509 ERROR [pool-11-thread-6] v2.CubeHBaseEndpointRPC:426 : <sub-thread for Query 60359a02-be1f-90c4-8e02-526f83455daf GTScanRequest 641a3c98>Error when visiting cubes by endpoint
org.apache.kylin.common.exceptions.KylinTimeoutException: coprocessor timeout after scanning 14309900 rows
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC.getCoprocessorException(CubeHBaseEndpointRPC.java:505)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC.access$300(CubeHBaseEndpointRPC.java:77)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC$3.update(CubeHBaseEndpointRPC.java:355)
        at org.apache.kylin.storage.hbase.cube.v2.CubeHBaseEndpointRPC$3.update(CubeHBaseEndpointRPC.java:323)
        at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1810)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2019-06-11 15:08:53,511 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:331 : Need storage aggregation
2019-06-11 15:08:53,512 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:562 : exactAggregation is false because need storage aggregation
2019-06-11 15:08:53,512 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:309 : Filter column set for query: %s
2019-06-11 15:08:53,513 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:318 : Filter mask is: {0}
2019-06-11 15:08:53,513 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:433 : storageLimitLevel set to LIMIT_ON_RETURN_SIZE because groupD is not clustered at head, groupsD: {0} with cuboid columns: {1}
2019-06-11 15:08:53,514 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:468 : storageLimitLevel set to NO_LIMIT because {0} isDimensionAsMetric 
2019-06-11 15:08:53,514 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:468 : storageLimitLevel set to NO_LIMIT because {0} isDimensionAsMetric 
2019-06-11 15:08:53,514 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:525 : Can not push down having filter, must have only one segment
2019-06-11 15:08:53,515 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.GTCubeStorageQueryBase:189 : Cuboid identified: cube=aaudit_concurrency_cube, cuboidId=511, groupsD=[DEFAULT.SSE_AAUDIT_CONCURRENCY.DT, DEFAULT.SSE_AAUDIT_CONCURRENCY.SIP, DEFAULT.SSE_AAUDIT_CONCURRENCY.CONCURRENCY2, DEFAULT.SSE_AAUDIT_CONCURRENCY.CONCURRENCY1], filterD=[], limitPushdown=2147483647, limitLevel=NO_LIMIT, storageAggr=true
2019-06-11 15:08:53,515 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:98 : Pruner passed on segment aaudit_concurrency_cube[20190430000000_20190528000000]
2019-06-11 15:08:53,515 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:98 : Pruner passed on segment aaudit_concurrency_cube[20190528000000_20190604000000]
2019-06-11 15:08:53,516 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190604000000_20190605000000] due to 0 input record
2019-06-11 15:08:53,516 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190605000000_20190606000000] due to 0 input record
2019-06-11 15:08:53,516 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190606000000_20190607000000] due to 0 input record
2019-06-11 15:08:53,517 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190607000000_20190608000000] due to 0 input record
2019-06-11 15:08:53,517 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190608000000_20190609000000] due to 0 input record
2019-06-11 15:08:53,517 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190609000000_20190610000000] due to 0 input record
2019-06-11 15:08:53,517 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.SegmentPruner:68 : Prune segment aaudit_concurrency_cube[20190610000000_20190611000000] due to 0 input record
2019-06-11 15:08:53,518 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.CubeSegmentScanner:60 : Init CubeSegmentScanner for segment 20190430000000_20190528000000
2019-06-11 15:08:53,519 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseRPC:315 : hbase.rpc.timeout = 60000 ms, use 54000 ms as timeout for coprocessor
2019-06-11 15:08:53,519 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:164 : Serialized scanRequestBytes 1461 bytes, rawScanBytesString 57 bytes
2019-06-11 15:08:53,520 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:167 : The scan 50b72e9c for segment aaudit_concurrency_cube[20190430000000_20190528000000] is as below with 1 separate raw scans, shard part of start/end key is set to 0
2019-06-11 15:08:53,520 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseRPC:288 : Visiting hbase table KYLIN_Y2OJ280UKT: cuboid require post aggregation, from 417 to 511 Start: \x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (\x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00) Stop:  \x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00 (\x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00), No Fuzzy Key
2019-06-11 15:08:53,521 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:174 : Submitting rpc to 1 shards starting from shard 0, scan range count 1
2019-06-11 15:08:53,521 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:220 : Submitting rpc to 1 shards starting from shard 0, scan range count 1
2019-06-11 15:08:53,522 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.KylinConfig:328 : KYLIN_CONF property was not set, will seek KYLIN_HOME env variable
2019-06-11 15:08:53,522 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.KylinConfig:334 : Use KYLIN_HOME=/opt/kylin2.6.1
2019-06-11 15:08:53,523 WARN  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.BackwardCompatibilityConfig:95 : Config 'kylin.job.jar' is deprecated, use 'kylin.engine.mr.job-jar' instead
2019-06-11 15:08:53,523 WARN  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.BackwardCompatibilityConfig:95 : Config 'kylin.coprocessor.local.jar' is deprecated, use 'kylin.storage.hbase.coprocessor-local-jar' instead
2019-06-11 15:08:53,524 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.CubeSegmentScanner:60 : Init CubeSegmentScanner for segment 20190528000000_20190604000000
2019-06-11 15:08:53,524 INFO  [kylin-coproc--pool2-t157] v2.CubeHBaseEndpointRPC:277 : Query-39127a15-acd7-378c-5fb0-1e9cb0271800: send request to the init region server data-3.novalocal on table KYLIN_Y2OJ280UKT 
2019-06-11 15:08:53,524 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseRPC:315 : hbase.rpc.timeout = 60000 ms, use 54000 ms as timeout for coprocessor
2019-06-11 15:08:53,525 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:164 : Serialized scanRequestBytes 1461 bytes, rawScanBytesString 55 bytes
2019-06-11 15:08:53,525 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:167 : The scan 4549edfd for segment aaudit_concurrency_cube[20190528000000_20190604000000] is as below with 1 separate raw scans, shard part of start/end key is set to 0
2019-06-11 15:08:53,526 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseRPC:288 : Visiting hbase table KYLIN_XK6ITNOC1M: cuboid require post aggregation, from 417 to 511 Start: \x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (\x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00) Stop:  \x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00 (\x00\x00\x00\x00\x00\x00\x00\x00\x01\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00), No Fuzzy Key
2019-06-11 15:08:53,526 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:174 : Submitting rpc to 1 shards starting from shard 0, scan range count 1
2019-06-11 15:08:53,527 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] v2.CubeHBaseEndpointRPC:220 : Submitting rpc to 1 shards starting from shard 0, scan range count 1
2019-06-11 15:08:53,527 DEBUG [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.KylinConfig:328 : KYLIN_CONF property was not set, will seek KYLIN_HOME env variable
2019-06-11 15:08:53,527 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.KylinConfig:334 : Use KYLIN_HOME=/opt/kylin2.6.1
2019-06-11 15:08:53,528 WARN  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.BackwardCompatibilityConfig:95 : Config 'kylin.job.jar' is deprecated, use 'kylin.engine.mr.job-jar' instead
2019-06-11 15:08:53,528 WARN  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] common.BackwardCompatibilityConfig:95 : Config 'kylin.coprocessor.local.jar' is deprecated, use 'kylin.storage.hbase.coprocessor-local-jar' instead
2019-06-11 15:08:53,529 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.SegmentCubeTupleIterator:95 : Using GTStreamAggregateScanner to pre-aggregate storage partition.
2019-06-11 15:08:53,529 INFO  [kylin-coproc--pool2-t158] v2.CubeHBaseEndpointRPC:277 : Query-39127a15-acd7-378c-5fb0-1e9cb0271800: send request to the init region server data-3.novalocal on table KYLIN_XK6ITNOC1M 
2019-06-11 15:08:53,529 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.SegmentCubeTupleIterator:95 : Using GTStreamAggregateScanner to pre-aggregate storage partition.
2019-06-11 15:08:53,530 INFO  [Query 39127a15-acd7-378c-5fb0-1e9cb0271800-72] gtrecord.SequentialCubeTupleIterator:78 : Using Iterators.concat to merge segment results

Both errors are really the same thing: the query returns so much data that the coprocessor blows past its time limit. The query grouped on a combination of dimensions with very high combined cardinality, and it also ran a count distinct over an ultra-high-cardinality dimension. When the scanned/returned result set gets that large, the coprocessor hits its deadline (visible in the log above: "coprocessor timeout after scanning 14655900 rows") and the query times out.

Possible fixes:
     1. Raise the timeout limit. But there is no obvious ceiling: how high is high enough? This case was a quarterly report over a fairly modest amount of data; once the data volume grows, no timeout value will save you.
     2. Don't run such risky queries in Kylin at all. High-cardinality queries like this one can be run directly in Hive instead.
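For option 1, the deadline in the log above (54000 ms, i.e. 90% of hbase.rpc.timeout = 60000 ms) is derived from the HBase RPC timeout unless Kylin's own coprocessor timeout is set. A minimal sketch of the relevant settings follows; the property names are as documented for Kylin 2.x, but verify them against your version, and the values here are only illustrative:

```
# conf/kylin.properties -- Kylin 2.x property names, verify for your version

# Hard timeout for the HBase endpoint coprocessor, in seconds.
# The default 0 means "derive from hbase.rpc.timeout" (90% of it,
# which is where the 54000 ms in the log comes from).
kylin.storage.hbase.coprocessor-timeout-seconds=120

# Overall server-side query timeout, in seconds.
kylin.query.timeout-seconds=300
```

If you raise the coprocessor timeout, hbase.rpc.timeout in hbase-site.xml must be at least as large, or the RPC layer will still cut the call off first. And as the text above notes, this only postpones the problem; the sustainable fix is to keep such high-cardinality aggregations out of Kylin.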


Reprinted from blog.csdn.net/xiaozhaoshigedasb/article/details/91434516