Error when Spark Streaming consumes Kafka messages via the Direct Approach (No Receiver)

1. Error Message

Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker

Full driver log:
18/12/21 11:09:18 INFO BlockManagerMaster: Trying to register BlockManager
18/12/21 11:09:18 INFO BlockManagerMasterEndpoint: Registering block manager localhost:14308 with 1095.0 MB RAM, BlockManagerId(driver, localhost, 14308)
18/12/21 11:09:18 INFO BlockManagerMaster: Registered BlockManager
18/12/21 11:09:19 INFO VerifiableProperties: Verifying properties
18/12/21 11:09:19 INFO VerifiableProperties: Property group.id is overridden to 
18/12/21 11:09:19 INFO VerifiableProperties: Property zookeeper.connect is overridden to 
Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6$$anonfun$apply$7.apply(KafkaCluster.scala:97)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:97)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:94)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:94)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:93)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:93)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:92)
    at scala.util.Either$RightProjection.flatMap(Either.scala:523)
    at org.apache.spark.streaming.kafka.KafkaCluster.findLeaders(KafkaCluster.scala:92)
    at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:186)
    at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:168)
    at org.apache.spark.streaming.kafka.KafkaCluster.getLatestLeaderOffsets(KafkaCluster.scala:157)
    at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$5.apply(KafkaUtils.scala:215)
    at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$5.apply(KafkaUtils.scala:211)
    at scala.util.Either$RightProjection.flatMap(Either.scala:523)
    at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
    at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
    at com.hp.spark.streaming.KafkaDirectWordCount$.main(KafkaDirectWordCount.scala:24)
    at com.hp.spark.streaming.KafkaDirectWordCount.main(KafkaDirectWordCount.scala)
18/12/21 11:09:20 INFO SparkContext: Invoking stop() from shutdown hook
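
For context, the bottom of the stack trace points at KafkaUtils.createDirectStream invoked from KafkaDirectWordCount.scala:24. The original source is not shown in the post, but a minimal direct-stream word count of that shape (the broker address and topic name below are placeholders, not the author's values) looks like:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaDirectWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaDirectWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Direct approach: no receiver; Spark queries the brokers for offsets itself.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val topics = Set("wordcount")

    // This is the call that throws the ClassCastException when the connector
    // jar and the Spark/Kafka classes on the classpath disagree on versions.
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    messages.map(_._2)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}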

2. Environment

Local debugging on Windows with IntelliJ IDEA.

Dependencies declared in the pom file (a sketch of the corresponding pom entries follows the list):

1. Scala version: 2.10.5

2. Spark version: 2.2.2

3. Spark Streaming Kafka connector: spark-streaming-kafka-0-8_2.10
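
The post does not reproduce the pom itself; the following is an assumed reconstruction from the versions listed above, using the standard artifact names for these versions:

<!-- Assumed pom entries reconstructed from the versions above -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>2.2.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.10</artifactId>
    <version>2.2.2</version>
</dependency>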

However, because an earlier project in this workspace was built against Spark 1.6.3, "spark-assembly-1.6.3-hadoop2.6.0.jar" had also been added to the project's library dependencies in IDEA.

3. Cause

Because "spark-assembly-1.6.3-hadoop2.6.0.jar" was on the classpath, it shadowed the Spark dependencies declared in the pom: at startup the program reported Spark version 1.6.3 rather than 2.2.2.

Root cause: a version incompatibility. The Kafka connector classes that actually ran were built against a different Kafka client API than the one on the classpath, and the broker metadata type differs between those client versions (kafka.cluster.Broker vs. kafka.cluster.BrokerEndPoint), hence the ClassCastException.
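
A quick way to confirm which Spark version actually wins on the classpath is to print it from a throwaway main before changing anything (a minimal sketch; SPARK_VERSION is the constant Spark exposes in the org.apache.spark package):

import org.apache.spark.SPARK_VERSION

object SparkVersionCheck {
  def main(args: Array[String]): Unit = {
    // If this prints 1.6.3 while the pom declares 2.2.2, the assembly jar
    // is shadowing the Maven-managed Spark dependencies.
    println(s"Spark version on classpath: $SPARK_VERSION")
  }
}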

4. Solutions

1. Option 1: use Spark 2.2.2

If the project does not otherwise need the Spark 1.6.3 jars, simply remove "spark-assembly-1.6.3-hadoop2.6.0.jar" from the project libraries, run Maven --> Reimport to refresh the dependencies, and the program runs without the error (the pom entries sketched in section 2 then take effect).

2. Option 2: fall back to Spark 1.6.3

Since "spark-assembly-1.6.3-hadoop2.6.0.jar" cannot be removed here, the project has to build against Spark 1.6.3 instead. Start from the official Spark documentation:

https://spark.apache.org/docs/1.6.2/streaming-kafka-integration.html 

The pom dependency shown on that page makes the problem fairly clear: for the 1.6.x line the connector artifact is spark-streaming-kafka_2.10, versioned to match Spark itself.

First remove all Kafka-related dependencies from the pom, then, following the official documentation, add:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka_2.10</artifactId>
    <version>1.6.3</version>
</dependency>
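
Note the artifact naming difference between the two Spark lines: 1.6.x ships a single spark-streaming-kafka_2.10 connector, while 2.x splits it into spark-streaming-kafka-0-8_2.10 and spark-streaming-kafka-0-10_2.10. Pairing the wrong connector with the Spark runtime on the classpath is exactly what produces the BrokerEndPoint/Broker cast error above.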

Run Maven --> Reimport again, and the program starts without the error. The driver code itself, including the createDirectStream call, does not change between the two options; only the declared dependencies do.


Reposted from blog.csdn.net/u011817217/article/details/85158954