[Exception] org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

1 A remote Phoenix connection fails while a local connection works fine. The full exception follows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/zhangjin/developSoftware/mavenRepository/org/slf4j/slf4j-log4j12/1.7.16/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/zhangjin/mycode/wm/wm-bigdata-etl/zjars/phoenix-4.14.1-cdh5.16.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Fri May 31 11:06:02 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68669: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16201,1559271031589, seqNum=0
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2725)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
    at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2536)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2499)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2499)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:256)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:222)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
    at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:354)
    at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:118)
    at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:60)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:403)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at org.apache.spark.sql.SQLContext.load(SQLContext.scala:960)
    at com.wm.bigdata.spark.etl.ETLDemo$.main(ETLDemo.scala:40)
    at com.wm.bigdata.spark.etl.ETLDemo.main(ETLDemo.scala)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri May 31 11:06:02 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68669: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16201,1559271031589, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:320)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:247)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:62)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
    at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:637)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:424)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1097)
    ... 31 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68669: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16201,1559271031589, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:417)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:723)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:907)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:397)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:371)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
    ... 4 more
Process finished with exit code 1

2 Looking carefully at the exception, the connection information shows localhost. Access from the machine itself naturally works, but remote access is bound to fail. That is where the problem lies, so let's start troubleshooting.
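For context, here is a minimal sketch (mine, not the original ETLDemo.scala) of the kind of Spark-over-Phoenix read that produces a stack trace like the one above; the table name MY_TABLE and the quorum hdp:2181 are placeholders:

import org.apache.spark.sql.SparkSession

object ETLDemoSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("phoenix-etl")
      .master("local[*]")
      .getOrCreate()

    // phoenix-spark first asks Zookeeper where the HBase master and
    // region servers live; a stale "localhost" stored there makes a
    // remote client dial itself and time out, as in the trace above.
    val df = spark.read
      .format("org.apache.phoenix.spark")
      .option("table", "MY_TABLE")   // placeholder Phoenix table
      .option("zkUrl", "hdp:2181")   // placeholder Zookeeper quorum
      .load()

    df.show()
    spark.stop()
  }
}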

 

3 A friend said the fix was to set up a DNS server, which felt too complicated. That approach would still require configuring DNS, which I never fully figured out, so I gave it up.
 
4 Next I connected with the Zookeeper client and ran get /hbase/master.
The value stored there turned out to be localhost, and that pinpoints the problem: when Phoenix connects through the Zookeeper address, it locates the HBase master by the host name stored under /hbase/master in Zookeeper. So the task now is to find a way to change that value.
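The same check can be scripted. A hedged sketch with the plain ZooKeeper Java client (the quorum address hdp:2181 is a placeholder of mine):

import java.util.concurrent.CountDownLatch
import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
import org.apache.zookeeper.Watcher.Event.KeeperState

object CheckMasterZnode {
  def main(args: Array[String]): Unit = {
    val connected = new CountDownLatch(1)
    // Wait for the session before reading, otherwise getData can fail
    // with a connection-loss error.
    val zk = new ZooKeeper("hdp:2181", 30000, new Watcher {
      override def process(event: WatchedEvent): Unit =
        if (event.getState == KeeperState.SyncConnected) connected.countDown()
    })
    try {
      connected.await()
      // /hbase/master stores a protobuf-framed ServerName; the host name
      // is readable in the raw bytes, which is enough to spot a stale
      // "localhost".
      val data = zk.getData("/hbase/master", false, null)
      println(new String(data, "UTF-8"))
    } finally {
      zk.close()
    }
  }
}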
 
The fix: add the nameserver settings to hbase-site.xml and reinitialize, so that the value stored in Zookeeper becomes a host name instead of localhost. My host name is hdp, so every address in the configuration uses that host name. Then delete the /hbase znode in Zookeeper (this clears HBase's state in Zookeeper, hence the reinitialization), restart HBase, go back into the Zookeeper shell, and run get /hbase/master again. If it now shows the host name hdp, the fix worked. The relevant hbase-site.xml entries:
  <property>
    <name>hbase.master.dns.nameserver</name>
    <value>hdp</value>
    <description>The host name or IP address of the name server (DNS)
      which a master should use to determine the host name used
      for communication and display purposes.
    </description>
  </property>

  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>hdp</value>
    <description>The host name or IP address of the name server (DNS)
      which a region server should use to determine the host name used by the
      master for communication and display purposes.
    </description>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hdp:9000/hbase</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hdp:60000</value>
  </property>
Origin www.cnblogs.com/QuestionsZhang/p/10959201.html