Kafka Streams API

Step 1: Create a topic
On the hadoop01 server, use the following command to create a topic named test2

cd /export/servers/kafka_2.11-1.0.0
bin/kafka-topics.sh --create --partitions 3 --replication-factor 2 --topic test2 --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181

Step 2: Develop the Streams API application

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class StreamDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Unique identifier for this Streams application
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application");
        // Kafka cluster to connect to
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop01:9092,hadoop02:9092,hadoop03:9092");
        // Default key/value serializers and deserializers
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Define the processing logic: stream() reads from a topic, to() writes to a topic
        StreamsBuilder streamsBuilder = new StreamsBuilder();
        streamsBuilder.stream("test")
                .mapValues(line -> line.toString().toUpperCase())
                .to("test2");
        // Build the Topology object (the processing graph)
        Topology topology = streamsBuilder.build();
        // Create the Kafka Streams instance and start the stream processing
        KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
        kafkaStreams.start();
    }
}
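The only per-record logic in this topology is the mapValues step, which upper-cases each record value. Stripped of the Kafka plumbing, the transformation is just the following standalone sketch (the class name here is illustrative, not part of the Streams API):

```java
import java.util.function.UnaryOperator;

public class MapValuesSketch {
    public static void main(String[] args) {
        // Same logic as the mapValues lambda in the topology above
        UnaryOperator<String> toUpper = line -> line.toUpperCase();
        // A record value produced to the input topic...
        String out = toUpper.apply("hello kafka");
        // ...is written to the output topic upper-cased
        System.out.println(out); // HELLO KAFKA
    }
}
```

So any line typed into the console producer in step 3 should appear upper-cased in the console consumer in step 4.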

Step 3: Produce data
On hadoop01, run the following command to produce data into the test topic

cd /export/servers/kafka_2.11-1.0.0
bin/kafka-console-producer.sh --broker-list hadoop01:9092,hadoop02:9092,hadoop03:9092 --topic test

Step 4: Consume data
On hadoop02, run the following command to consume the data in the test2 topic

cd /export/servers/kafka_2.11-1.0.0
bin/kafka-console-consumer.sh --from-beginning --topic test2 --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181

Reposted from blog.csdn.net/qq_45765882/article/details/105279171