Integrating Spark Streaming with Kafka (Basics)

Kafka as a data source for Spark Streaming

1. Usage and notes

To use this integration, add the Maven artifact spark-streaming-kafka-0-8_2.11 to your project. The KafkaUtils object it provides creates DStreams from your Kafka messages on top of a StreamingContext or JavaStreamingContext.
The two core classes are KafkaUtils and KafkaCluster.
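
Besides the receiver-based createStream used in the example below, KafkaUtils also provides a direct (receiver-less) createDirectStream, and KafkaCluster is the helper you would use to query or commit offsets yourself. The following is only a minimal sketch of the direct variant under the same 0-8 dependency; the broker address hadoop102:9092 and the topic ssTopic are assumed from the cluster used later in this article.

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DirectSketch {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setMaster("local[*]").setAppName("DirectSketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Direct API: no receiver, Spark itself tracks the offsets it has read.
    // "metadata.broker.list" points at the Kafka brokers, not at ZooKeeper.
    val directStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc,
      Map("metadata.broker.list" -> "hadoop102:9092"),  // assumed broker address
      Set("ssTopic")                                     // assumed topic name
    )

    // Print only the message values of each batch
    directStream.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}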

2. Hands-on example

Requirement:
Read data from Kafka with Spark Streaming, run a simple computation (WordCount) over the messages, and print the result to the console.
(1) Add the dependency

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
</dependency>

(2) Write the code

import kafka.serializer.StringDecoder
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaToSpark {

  def main(args: Array[String]): Unit = {

    // 1. Create the SparkConf
    val sparkConf = new SparkConf().setMaster("local[*]").setAppName("App")

    // 2. Create the StreamingContext with a 5-second batch interval
    val ssc: StreamingContext = new StreamingContext(sparkConf,Seconds(5))

    // Kafka consumer parameters. Note that the receiver-based 0-8 API connects
    // through ZooKeeper; newer integrations (0-10) connect to the brokers
    // directly via bootstrap.servers (see the sketch after the full listing).
    val kafkaParams = Map(
      ConsumerConfig.GROUP_ID_CONFIG -> "bigdata",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.StringDeserializer",
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.StringDeserializer",
      "zookeeper.connect" -> "hadoop102:2181"
    )
 
    // Map of topic name -> number of receiver threads used to consume it
    val topics = Map(
      "ssTopic" -> 3
    )

    // The type parameters [String, String, StringDecoder, StringDecoder] must be
    // supplied explicitly, otherwise the call will not compile
    val kafkaStream: ReceiverInputDStream[(String, String)] =
      KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics, StorageLevel.MEMORY_ONLY)



    // Each Kafka record arrives as a (key, value) pair; split each value into words
    val flatStream: DStream[String] = kafkaStream.flatMap {
      case (_, value) => value.split(" ")
    }

    // Classic WordCount: pair each word with 1, then sum the counts per word
    val wordCount: DStream[(String, Int)] = flatStream.map((_, 1)).reduceByKey(_ + _)

    wordCount.print()
    // Start the streaming computation
    ssc.start()
    // The driver blocks here and waits for the computation to terminate
    ssc.awaitTermination()
  }
}
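
As noted in the parameter map above, the 0-8 receiver API goes through ZooKeeper. On newer Kafka versions you would switch to the spark-streaming-kafka-0-10_2.11 artifact and connect to the brokers via bootstrap.servers. A minimal sketch under that assumption (same hadoop102 host, group and ssTopic topic as above; the 0-10 imports replace the 0-8 ones):

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Kafka010Sketch {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setMaster("local[*]").setAppName("Kafka010Sketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // bootstrap.servers points at the Kafka brokers; ZooKeeper is no longer used
    val kafkaParams = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "hadoop102:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "bigdata",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer]
    )

    // Direct stream of ConsumerRecord[String, String]
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Set("ssTopic"), kafkaParams)
    )

    // Same WordCount as in the main example, but on record values
    stream.map(_.value()).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()
  }
}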

List the Kafka topics

bin/kafka-topics.sh --zookeeper hadoop102:2181 --list

Create a topic

bin/kafka-topics.sh --zookeeper hadoop102:2181 --create --topic ssTopic --partitions 2 --replication-factor 2

Start a console producer and write messages into the topic

bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic ssTopic
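
With the streaming job running, typing a line such as "hello world hello" into the console producer should, within one 5-second batch, make the driver console print counts along the lines of (hello,2) and (world,1) under a "Time: ..." header (exact timestamps and ordering will differ).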

Reposted from blog.csdn.net/qq_26502245/article/details/88618081