KafkaProducer & KafkaConsumer API Usage

http://kafka.apache.org/

A plain Java program; pull the client jars in with Gradle:

compile "org.slf4j:slf4j-nop:1.7.2"
compile "org.apache.kafka:kafka-clients:2.2.0"

Create the topic

bin/kafka-topics.sh --create --zookeeper 172.16.227.250:2181,172.16.227.129:2181,172.16.227.130:2181 --replication-factor 3 --partitions 3 --topic chat
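
To double-check the partition and replica layout, the same script can describe the topic (a sketch assuming the same ZooKeeper quorum as above):

bin/kafka-topics.sh --describe --zookeeper 172.16.227.250:2181,172.16.227.129:2181,172.16.227.130:2181 --topic chat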


KafkaProducer

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

void send() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "172.16.227.250:9092,172.16.227.129:9092,172.16.227.130:9092");
    props.put("acks", "all"); // wait until all in-sync replicas have acknowledged each record
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<>(props);
    for (int i = 0; i < 10; i++) {
        // key and value are both the loop index as a string
        producer.send(new ProducerRecord<>("chat", Integer.toString(i), Integer.toString(i)));
    }
    producer.close(); // flushes buffered records before shutting down
    System.out.println("producer closed");
}
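
producer.send() is asynchronous. If you want to confirm where each record ended up, the loop can block on the returned Future and inspect the RecordMetadata; a minimal sketch (same topic and props as above; the enclosing method then has to handle InterruptedException/ExecutionException):

import org.apache.kafka.clients.producer.RecordMetadata;

// replacing the fire-and-forget loop in send():
for (int i = 0; i < 10; i++) {
    RecordMetadata meta = producer
            .send(new ProducerRecord<>("chat", Integer.toString(i), Integer.toString(i)))
            .get(); // blocks until the broker has acknowledged the record
    System.out.printf("key %d -> partition %d, offset %d%n", i, meta.partition(), meta.offset());
}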

KafkaConsumer

Reference: http://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

void consumer() {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "172.16.227.250:9092,172.16.227.129:9092,172.16.227.130:9092");
    // props.setProperty("zookeeper.connect", "172.16.227.250:2181,172.16.227.129:2181,172.16.227.130:2181"); // not needed: the new consumer talks to the brokers directly
    props.setProperty("group.id", "chat-room-5"); // consumer group id
    // props.setProperty("enable.auto.commit", "true");      // auto commit is already the default
    // props.setProperty("auto.commit.interval.ms", "1000");
    props.setProperty("auto.offset.reset", "earliest"); // start from the beginning when the group has no committed offset
    props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("chat")); // subscribe to the "chat" topic

    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    } finally {
        consumer.close();
    }
}

Output

offset = 0, key = 1, value = 1
offset = 1, key = 5, value = 5
offset = 2, key = 7, value = 7
offset = 3, key = 8, value = 8
offset = 0, key = 0, value = 0
offset = 1, key = 2, value = 2
offset = 2, key = 3, value = 3
offset = 3, key = 9, value = 9
offset = 0, key = 4, value = 4
offset = 1, key = 6, value = 6

Within a single partition the records come back in order (for example, partition 0 above).
Across all partitions, however, the records are not returned in the order the KafkaProducer wrote them, i.e. there is no ordering guarantee across multiple partitions.
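
To see which partition each record came from, the print statement in the poll loop can also include record.partition() (a small variation on the consumer above):

for (ConsumerRecord<String, String> record : records) {
    System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n",
            record.partition(), record.offset(), record.key(), record.value());
}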

Notes

  • Offsets are recorded per group: once a group.id has consumed the topic, consuming again with the same group.id returns no data; switch to a new group.id to re-read.
  • auto.offset.reset: props.put("auto.offset.reset", "earliest") makes a group.id with no committed offset start from the beginning; the default ("latest") starts from the newest offset, which can also make a fresh group see no data.
  • By default Kafka commits offsets automatically on a fixed interval (enable.auto.commit=true); to commit manually, see the sketch below.
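
A minimal sketch of manual offset committing, assuming the same consumer setup as above with auto commit turned off (process() is just a hypothetical handler):

props.setProperty("enable.auto.commit", "false"); // turn off periodic auto commit
// ... build the consumer and subscribe to "chat" exactly as above ...
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical application-level processing
    }
    consumer.commitSync(); // commit the polled offsets only after processing succeeded
}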

Reference

《kafka中partition和消费者对应关系》 (the relationship between partitions and consumers in Kafka), by 愚公300代: https://www.jianshu.com/p/6233d5341dfe

