HDFS unreachable from outside the cluster network: local SparkSQL Scala code connecting to a remote Hive server fails with: INFO DFSClient: Could not obtain BP-397724921-127.0.0.1-

1. Problem Description

While developing SparkSQL code locally and connecting to Hive through a HiveContext, the connection failed with the error below.

(1) Code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object _02hivecontext {
  def main(args: Array[String]): Unit = {
    // 1) Create the SparkContext and HiveContext
    val sparkConf = new SparkConf().setAppName("Hivesql").setMaster("local[2]")
    val sc = SparkContext.getOrCreate(sparkConf)
    val hiveContext = new HiveContext(sc)

    // 2) Query the Hive table
    hiveContext.table("emp").show()

    // 3) Release resources
    sc.stop()
  }
}

(2) Error:

19/01/18 07:36:57 INFO DFSClient: Could not obtain BP-397724921-127.0.0.1-1542712516755:blk_1073746318_5503 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
19/01/18 07:36:57 WARN DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 1090.0845058769598 msec.
19/01/18 07:37:19 WARN DFSClient: Failed to connect to /172.16.0.147:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information
java.net.ConnectException: Connection timed out: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
	at org.apache.hadoop.hdfs.DFSInputStream.newTcpPeer(DFSInputStream.java:955)
	at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1107)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:533)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:211)
	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:206)
	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:45)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
19/01/18 07:37:19 INFO DFSClient: Could not obtain BP-397724921-127.0.0.1-1542712516755:blk_1073746318_5503 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
19/01/18 07:37:19 WARN DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 3919.861063005625 msec.

I was baffled. I had run into a similar problem before, which was fixed simply by placing hive-site.xml in the resources folder under the main directory, but that did not help this time. It bothered me for over a week!

2. Root Cause

1. I suspected HDFS itself, but the NameNode web UI on port 50070 showed no problems.

2. I suspected the DataNode, but its logs showed no errors and no missing blocks.

3. hive-site.xml on the server had already been placed in Spark's conf directory.

4. I suspected SparkSQL itself, but launching SparkSQL on the server could access Hive just fine.

5. The Hive metastore was confirmed to be running.

I had almost given up when I came across a hint that this kind of error can mean the HDFS file system is unreachable from outside the cluster network. I tried that angle, and to my surprise that really was the problem. What a nasty pitfall. The underlying cause:

The local machine and the servers are not on the same LAN, and Hadoop machines communicate with each other over their internal (private) IPs. Reaching the NameNode from the local machine works, because its hostname is mapped in the local hosts file; but the NameNode then returns the internal IPs and ports of the DataNodes, which the client must contact to actually read and write data. Those internal IPs are unreachable from outside the LAN, hence the error above.
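To confirm the diagnosis independently of Spark, the read can be reproduced with the plain Hadoop FileSystem API. The sketch below is mine, not from the original post; the NameNode address node1:8020 and the file path are hypothetical placeholders for your own cluster:

import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsReadCheck {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Without this flag the client dials the internal DataNode IPs returned
    // by the NameNode and times out from outside the LAN; with it, the
    // client connects by hostname, which the local hosts file can resolve.
    conf.set("dfs.client.use.datanode.hostname", "true")

    // node1:8020 and the path are placeholders -- substitute your NameNode
    // address and an existing file on HDFS.
    val fs = FileSystem.get(new URI("hdfs://node1:8020"), conf)
    val in = fs.open(new Path("/user/hive/warehouse/emp/emp.txt"))
    try {
      scala.io.Source.fromInputStream(in).getLines().take(5).foreach(println)
    } finally {
      in.close()
      fs.close()
    }
  }
}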

3. Solution

Set the dfs.client.use.datanode.hostname property to true in the code, so that the DFS client connects to DataNodes by hostname instead of the internal IPs the NameNode reports. As long as the local hosts file maps each DataNode hostname to a reachable IP, the DataNodes become accessible. The fixed code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object _02hivecontext {
  def main(args: Array[String]): Unit = {
    // 1) Create the SparkContext and HiveContext, telling the DFS client
    //    to reach DataNodes by hostname rather than internal IP
    val sparkConf = new SparkConf().setAppName("Hivesql").setMaster("local[2]")
      .set("dfs.client.use.datanode.hostname", "true")
    val sc = SparkContext.getOrCreate(sparkConf)
    val hiveContext = new HiveContext(sc)

    // 2) Query the Hive table
    hiveContext.table("emp").show()

    // 3) Release resources
    sc.stop()
  }
}
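One caveat worth noting (my own assumption, not something tested in the original post): depending on the Spark version, a bare Hadoop key set on SparkConf may not be forwarded to the HDFS client. Two commonly used alternatives are the spark.hadoop. prefix and setting the flag directly on sc.hadoopConfiguration, sketched below with the same app settings as above. In every variant, the local hosts file still has to map each DataNode hostname to an IP reachable from the development machine.

import org.apache.spark.{SparkConf, SparkContext}

object HostnameFallback {
  def main(args: Array[String]): Unit = {
    // Keys prefixed with spark.hadoop. are copied by Spark into the Hadoop
    // configuration at startup, a reliable channel to the DFS client.
    val sparkConf = new SparkConf().setAppName("Hivesql").setMaster("local[2]")
      .set("spark.hadoop.dfs.client.use.datanode.hostname", "true")
    val sc = SparkContext.getOrCreate(sparkConf)

    // Equivalent alternative: set the flag directly on the Hadoop
    // configuration object that Spark's HDFS client uses.
    sc.hadoopConfiguration.set("dfs.client.use.datanode.hostname", "true")

    sc.stop()
  }
}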

Working with remote servers really is a diary of pitfalls. The result:

+-----+------+---------+----+----------+------+------+------+
|empno| ename|      job| mgr|  hiredate|   sal|  comm|deptno|
+-----+------+---------+----+----------+------+------+------+
| 7369| SMITH|    CLERK|7902|1980-12-17| 800.0|  null|    20|
| 7499| ALLEN| SALESMAN|7698| 1981-2-20|1600.0| 300.0|    30|
| 7521|  WARD| SALESMAN|7698| 1981-2-22|1250.0| 500.0|    30|
| 7566| JONES|  MANAGER|7839|  1981-4-2|2975.0|  null|    20|
| 7654|MARTIN| SALESMAN|7698| 1981-9-28|1250.0|1400.0|    30|
| 7698| BLAKE|  MANAGER|7839|  1981-5-1|2850.0|  null|    30|
| 7782| CLARK|  MANAGER|7839|  1981-6-9|2450.0|  null|    10|
| 7788| SCOTT|  ANALYST|7566| 1987-4-19|3000.0|  null|    20|
| 7839|  KING|PRESIDENT|null|1981-11-17|5000.0|  null|    10|
| 7844|TURNER| SALESMAN|7698|  1981-9-8|1500.0|   0.0|    30|
| 7876| ADAMS|    CLERK|7788| 1987-5-23|1100.0|  null|    20|
| 7900| JAMES|    CLERK|7698| 1981-12-3| 950.0|  null|    30|
| 7902|  FORD|  ANALYST|7566| 1981-12-3|3000.0|  null|    20|
| 7934|MILLER|    CLERK|7782| 1982-1-23|1300.0|  null|    10|
+-----+------+---------+----+----------+------+------+------+

Reposted from blog.csdn.net/u010886217/article/details/86533616