A Java Example of Kafka Producer and Consumer Operations

This article gives simple Java examples of working with Kafka producers and consumers:

1. First, the pom dependencies:

<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
   <version>2.0.0</version>
</dependency>
<dependency>
   <groupId>log4j</groupId>
   <artifactId>log4j</artifactId>
   <version>1.2.17</version>
</dependency>
<dependency>
   <groupId>org.slf4j</groupId>
   <artifactId>slf4j-nop</artifactId>
   <version>1.7.22</version>
</dependency>

2. Create a Producer class:

import java.util.Date;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Producer {

    public static void main(String[] args){

        int events = 100;
        Properties props = new Properties();
        // Cluster address; separate multiple servers with ","
        props.put("bootstrap.servers", "127.0.0.1:9092");
        // Key and value serializers; here we send strings, using Kafka's built-in serializer
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        //props.put("partitioner.class", "com.kafka.demo.Partitioner");// custom partitioner, not used here
        // Wait for the partition leader to acknowledge each write; note that
        // "request.required.acks" is the legacy Scala producer's name for this
        // setting -- the Java client uses "acks"
        props.put("acks", "1");
        // Create the producer; the variable is declared as KafkaProducer because
        // Kafka's Producer interface would clash with this class's own name
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < events; i++){
            long runtime = new Date().getTime();
            String ip = "192.168.1." + i;
            String msg = "simulated ip at " + runtime + ": " + ip;
            // Write to the topic named "test-partition-1"
            ProducerRecord<String, String> producerRecord = new ProducerRecord<>("test-partition-1", "key-"+i, msg);
            producer.send(producerRecord);
            System.out.println("Wrote to test-partition-1: " + msg);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        producer.close();
    }
}
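The `send()` call above is fire-and-forget: if the broker rejects or loses a record, nothing in this loop will notice. The client also accepts a `Callback` that reports the outcome of each write. The following fragment is a sketch, not part of the original article; it would replace the `producer.send(producerRecord)` line above and needs `org.apache.kafka.clients.producer.Callback` and `RecordMetadata` imported:

```java
// Sending with a Callback: onCompletion fires once the broker has (or has not)
// acknowledged the record, so delivery failures are not silently dropped.
producer.send(producerRecord, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // Delivery failed (e.g. broker unreachable, topic missing)
            exception.printStackTrace();
        } else {
            System.out.println("Delivered to " + metadata.topic()
                    + " partition " + metadata.partition()
                    + " at offset " + metadata.offset());
        }
    }
});
```

The callback runs on the producer's I/O thread, so it should stay lightweight.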

3. Create a Consumer class:

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Consumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Cluster address; separate multiple servers with ","
        props.put("bootstrap.servers","127.0.0.1:9092");
        // The consumer's group id
        props.put("group.id", "group1");
        // If true, the offsets of consumed messages are committed automatically
        // (the new consumer commits them to Kafka's internal __consumer_offsets
        // topic, not to ZooKeeper); if this consumer dies, a new consumer in the
        // same group resumes from the committed offset
        props.put("enable.auto.commit", "true");
        // How often the consumer commits offsets
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // Key and value deserializers
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Subscribe to one or more topics; here "test-partition-1" and "test"
        consumer.subscribe(Arrays.asList("test-partition-1", "test"));
        // Poll continuously
        while(true){
            // poll(long) is deprecated since kafka-clients 2.0; use the Duration overload
            ConsumerRecords<String,String> consumerRecords = consumer.poll(Duration.ofMillis(100));
            for(ConsumerRecord<String,String> consumerRecord : consumerRecords){
                System.out.println("Read from " + consumerRecord.topic() + ": " + consumerRecord.value());
            }
        }
    }
}
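With `enable.auto.commit` set to true, a consumer that crashes between processing a batch and the next automatic commit may re-read those messages on restart. When you want commits tied to your own processing, disable auto-commit and commit explicitly. A minimal sketch of the lines that would change in the consumer above (an assumption-level variation, not from the original article):

```java
// Replace the auto-commit setting:
props.put("enable.auto.commit", "false");

// ...and commit explicitly after each batch has been processed:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.println("Read from " + record.topic() + ": " + record.value());
    }
    // Synchronously commit the offsets of the records just returned;
    // blocks until the broker acknowledges the commit
    consumer.commitSync();
}
```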

4. Test. Remember to start Kafka before testing, and it works best to start the consumer first and then the producer. The producer's and consumer's console output can then be compared:
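For reference, the broker-side setup might look like the following. The paths and the ZooKeeper address are assumptions based on a default local install (in Kafka 2.0, kafka-topics.sh still connects through ZooKeeper):

```
# Start ZooKeeper and the Kafka broker (run from the Kafka install directory)
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties

# Create the topic the example writes to (assuming ZooKeeper on localhost:2181)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --topic test-partition-1 --partitions 1 --replication-factor 1

# Optionally watch the topic with the console consumer while the producer runs
bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 \
    --topic test-partition-1 --from-beginning
```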


The test succeeds.


Reposted from blog.csdn.net/m0_38075425/article/details/81357833