Consuming Kafka messages with Flink in a local environment

1. Preface

This is a basic implementation: Flink simply consumes data from Kafka; the messages are not processed any further.

2. Implementation

package scala

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.{CheckpointingMode, TimeCharacteristic}
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

object KafkaStreamJobDemo {

  def main(args: Array[String]): Unit = {

    // Brings the implicit TypeInformation values into scope; see section 3.
    import org.apache.flink.streaming.api.scala._

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    // Checkpoint every 1000 ms with exactly-once semantics.
    env.enableCheckpointing(1000)
    env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)
    env.setParallelism(1)

    val topic = "mytopic"
    val prop = new Properties()
    prop.setProperty("bootstrap.servers", "host:port")
    // The Kafka consumer also needs a consumer group id; "test-group" is a placeholder.
    prop.setProperty("group.id", "test-group")

    val myConsumer = new FlinkKafkaConsumer[String](topic, new SimpleStringSchema(), prop)
    val text = env.addSource(myConsumer)
    text.print()

    env.execute("First_Kafka_Consumer")
  }
}
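
To go beyond plain consumption, a transformation can be chained between the source and the print sink. The sketch below is not part of the original post; it assumes the same env and myConsumer as in the job above and simply upper-cases each message.

val upper = env
  .addSource(myConsumer)
  .map(_.toUpperCase) // trivial per-record processing; replace with real logic
upper.print()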

3. Troubleshooting

The error reported:

Error:(28, 29) could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[String]
    val text = env.addSource(myConsumer)

Solution: add the appropriate import to the code.

For streaming jobs:

import org.apache.flink.streaming.api.scala._

For batch jobs:

import org.apache.flink.api.scala._
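
Why this works: the Scala DataStream and DataSet APIs declare an implicit evidence parameter of type TypeInformation[T], and the wildcard import brings Flink's implicit createTypeInformation macro into scope. If you would rather not use the wildcard import, the evidence can also be supplied by hand; the following is a minimal sketch, not from the original post, for the String-typed source above.

import org.apache.flink.api.common.typeinfo.{Types, TypeInformation}

// Explicit evidence for String, instead of importing the whole scala API package.
implicit val stringInfo: TypeInformation[String] = Types.STRING

val text = env.addSource(myConsumer) // compiles without the wildcard import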

Reposted from blog.csdn.net/MDJ_D2T/article/details/120976511