Kafka installation and a Java demo

Download the pre-built Kafka release archive (the one without "src" in the file name): https://kafka.apache.org/downloads

Reference documentation: https://kafka.apache.org/quickstart

I put the archive in the /home/wl/mq/ directory (my version is 2.5.1).

Extract it:

tar -xzf kafka_2.12-2.5.1.tgz
cd kafka_2.12-2.5.1

Kafka requires ZooKeeper to start, and the Kafka distribution ships with one.

Start the bundled ZooKeeper:

bin/zookeeper-server-start.sh config/zookeeper.properties

Or start it in the background:

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

Start the broker

First edit server.properties in the config directory and add the following setting:

advertised.listeners=PLAINTEXT://192.168.92.128:9092

192.168.92.128 is my Kafka server's IP address. Without this setting, a Java client connecting to Kafka will fail with an error.

Start command:

bin/kafka-server-start.sh config/server.properties

Background start command:

bin/kafka-server-start.sh -daemon config/server.properties

Stopping the Kafka server and ZooKeeper:

bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh

The Kafka environment is now ready.

The Java demo follows (my project is a Spring Boot project).

Add the dependency:

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.5.1</version>
    </dependency>

Configure the producer and consumer in KafkaConfig.java:

package com.wl.mq.config;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.Properties;

/**
 * Created by Administrator on 2021/3/10.
 */
@Configuration
public class KafkaConfig {

    @Bean
    public Producer<String,String> producer(){
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.92.128:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        return new KafkaProducer<>(props);
    }

    @Bean
    public Consumer<String,String> consumerA(){
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.92.128:9092");
        props.setProperty("group.id", "consumer_a");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }


}

The group.id setting on consumerA corresponds to the consumer-group concept in RocketMQ.

In a clustered environment, if consumers subscribe to the same topic with the same group.id, each message on that topic is consumed only once, by exactly one of those consumers.

If you want the broadcast behavior of an ActiveMQ topic, the consumers subscribing to a topic should each use a different group.id (i.e., the subscribing consumers are distinct instances).
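The group semantics above can be sketched in plain Java. This is an illustration only, not Kafka's actual group coordinator or partition assignor; the class and method names are hypothetical. The key point it shows: each partition is owned by exactly one consumer per group, so a message is processed once per group, not once per consumer instance.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of how partitions of one topic are divided among consumers
 * that share a group.id (simplified stand-in for Kafka's assignors).
 */
public class GroupAssignmentSketch {

    /** Round-robin style assignment: partition i goes to consumer (i % consumers). */
    public static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String c : consumers) {
            assignment.put(c, new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            // each partition has exactly one owner within the group
            String owner = consumers.get(p % consumers.size());
            assignment.get(owner).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Two instances of the same service sharing group "consumer_a",
        // consuming a 4-partition topic: the partitions are split between them.
        System.out.println(assign(Arrays.asList("instance-1", "instance-2"), 4));
        // prints {instance-1=[0, 2], instance-2=[1, 3]}
    }
}
```

Two groups each get their own full copy of this assignment, which is why different group.ids give broadcast behavior while a shared group.id gives queue behavior.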

KafkaProduceService.java

package com.wl.mq.kafka;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaProduceService {

    private Producer<String,String> producer;

    @Autowired
    public KafkaProduceService(Producer<String,String> producer){
        this.producer = producer;
    }

    public void sendMessage(String destination,String message){
        producer.send(new ProducerRecord<String, String>(destination,message));
    }

    public void sendMessage(String destination,String key,String message){
        producer.send(new ProducerRecord<String, String>(destination,key,message));
    }

    /**
     *   If a partition id is specified, the record is sent to that partition.
     *   If no partition id is given but a key is, the record is sent to the partition chosen by hash(key).
     *   If neither a partition id nor a key is given, records are distributed round-robin across partitions.
     *   If both a partition id and a key are given, the record goes only to the specified partition (the key is ignored).
     */
    public void sendMessage(String destination,Integer partition,String message){
        producer.send(new ProducerRecord<String, String>(destination,partition,null,message));
    }



}
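The routing rules in the Javadoc above can be sketched as a small plain-Java helper. This is illustrative only: the real DefaultPartitioner hashes the serialized key bytes with murmur2, not String.hashCode(), and `choosePartition` is a hypothetical name, not a Kafka API.

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of Kafka producer partition routing:
 * explicit partition > hash(key) > round-robin.
 */
public class PartitionRouting {

    private static final AtomicInteger roundRobin = new AtomicInteger(0);

    public static int choosePartition(Integer explicitPartition, String key, int numPartitions) {
        if (explicitPartition != null) {
            // explicit partition always wins; any key is ignored
            return explicitPartition;
        }
        if (key != null) {
            // keyed records: the same key always lands on the same partition
            // (Kafka actually uses murmur2 over the key bytes)
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
        // no partition and no key: spread records across all partitions
        return roundRobin.getAndIncrement() % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(choosePartition(2, "order-1", 4)); // prints 2: explicit partition wins
        // the same key maps to the same partition on every call:
        System.out.println(choosePartition(null, "order-1", 4) == choosePartition(null, "order-1", 4)); // prints true
    }
}
```

The per-key stickiness is what makes keyed sends useful: all messages for one key (e.g., one order id) stay in one partition and therefore keep their relative order.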

KafkaConsumerServiceA.java

package com.wl.mq.kafka;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.Collections;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaConsumerServiceA implements InitializingBean {

    private Consumer<String,String> consumerA;

    @Autowired
    public KafkaConsumerServiceA(Consumer<String,String> consumerA){
        this.consumerA = consumerA;
    }

    private void initConsumer(){
        consumerA.subscribe(Collections.singleton("kafka-topic"));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = consumerA.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("================================================");
                        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                        System.out.println("================================================");
                    }
                }
            }
        }).start();
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        initConsumer();
    }
}
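The background thread above polls in an endless loop. A common refinement is a stop flag so the loop can shut down cleanly (with a real KafkaConsumer you would also call wakeup() from the stopping thread and close() the consumer at the end, since KafkaConsumer is not thread-safe). Below is a minimal stdlib sketch of that loop shape, with a BlockingQueue standing in for the broker; all names are hypothetical.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of the poll-loop-with-stop-flag pattern used for Kafka consumers. */
public class PollLoopSketch {

    private final BlockingQueue<String> fakeBroker = new LinkedBlockingQueue<>();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final List<String> consumed = new CopyOnWriteArrayList<>();

    public Thread start() {
        Thread t = new Thread(() -> {
            while (running.get()) {
                try {
                    // poll with a timeout, like consumer.poll(Duration.ofMillis(100))
                    String record = fakeBroker.poll(100, TimeUnit.MILLISECONDS);
                    if (record != null) {
                        consumed.add(record); // process the record
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            // with a real KafkaConsumer, call consumer.close() here
        });
        t.start();
        return t;
    }

    public void offer(String message) { fakeBroker.offer(message); }

    public List<String> consumed() { return consumed; }

    /** With a real KafkaConsumer, also call consumer.wakeup() to interrupt a blocked poll. */
    public void stop() { running.set(false); }

    public static void main(String[] args) throws Exception {
        PollLoopSketch sketch = new PollLoopSketch();
        Thread loop = sketch.start();
        sketch.offer("hello");
        Thread.sleep(300);
        sketch.stop();
        loop.join();
        System.out.println(sketch.consumed()); // typically prints [hello]
    }
}
```

The stop flag plus a bounded poll timeout is what lets the loop exit within one poll interval instead of blocking forever.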

Test code:

package com.wl.mq;

import com.wl.mq.kafka.KafkaProduceService;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

/**
 * Created by Administrator on 2021/3/10.
 */
@SpringBootTest(classes = Application.class)
@RunWith(SpringJUnit4ClassRunner.class)
//@Ignore
public class KafkaMqTest {

    @Autowired
    private KafkaProduceService produceService;

    @Test
    public void testSendMessage() throws Exception{
        String destination = "kafka-topic";
        String message = "hello this is kafka message";
        produceService.sendMessage(destination,message);
        Thread.sleep(1000000);
    }
}

Test result: consumerA receives and prints the message.

consumerA can also subscribe to other topics (though having one consumer listen to a single topic is recommended). E.g.:

private void initConsumer(){
        // subscribe() replaces any previous subscription, so pass all
        // topics in one call (requires import java.util.Arrays)
        consumerA.subscribe(Arrays.asList("kafka-topic", "kafka-topic-1"));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = consumerA.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.topic());
                        System.out.println("================================================");
                        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                        System.out.println("================================================");
                    }
                }
            }
        }).start();

    }

In a microservice system, different modules usually use different group.ids.

Suppose we have two services, an order service and a promotion service, and both need to listen to the same topic. We treat the consumerA above as the promotion service's consumer, and below we add a consumerB to act as the order service's consumer.

Modify KafkaConfig.java as follows (adding a consumerB instance):

package com.wl.mq.config;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.Properties;

/**
 * Created by Administrator on 2021/3/10.
 */
@Configuration
public class KafkaConfig {

    @Bean
    public Producer<String,String> producer(){
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.92.128:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        return new KafkaProducer<>(props);
    }

    @Bean
    public Consumer<String,String> consumerA(){
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.92.128:9092");
        props.setProperty("group.id", "consumer_a");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    @Bean
    public Consumer<String,String> consumerB(){
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.92.128:9092");
        props.setProperty("group.id", "consumer_b");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }
}

Add KafkaConsumerServiceB.java (also subscribing to the kafka-topic queue):

package com.wl.mq.kafka;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.Collections;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaConsumerServiceB implements InitializingBean {

    private Consumer<String,String> consumerB;

    @Autowired
    public KafkaConsumerServiceB(Consumer<String,String> consumerB){
        this.consumerB = consumerB;
    }

    private void initConsumer(){
        consumerB.subscribe(Collections.singleton("kafka-topic"));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = consumerB.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.topic());
                        System.out.println("======================consumerB==========================");
                        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                        System.out.println("=======================kafka-topic=========================");
                    }
                }
            }
        }).start();

    }


    @Override
    public void afterPropertiesSet() throws Exception {
        initConsumer();
    }
}

Run the test above again:

    @Test
    public void testSendMessage() throws Exception{
        String destination = "kafka-topic";
        String message = "hello this is kafka message";
        produceService.sendMessage(destination,message);
        Thread.sleep(1000000);
    }

Test result:

Both consumerA and consumerB consume the message from the kafka-topic queue.

Package the project and deploy it on two servers, then test again: consumerA and consumerB still each consume the message only once. This avoids the problem of the same topic being consumed multiple times in a clustered environment (which ActiveMQ needs virtual topics to solve).

#=====================================================

spring-kafka demo

Add the dependency:

<!-- https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka -->
    <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
      <version>2.5.1.RELEASE</version>
    </dependency>

Watch out for Spring version conflicts. Here spring-kafka is 2.5.1.RELEASE (which depends on kafka-clients 2.5.0). My Spring Boot version was originally 2.0.8 and startup failed; after upgrading Spring Boot to 2.3.7.RELEASE it started successfully.

With Spring Boot 2.2.x and above, test classes may fail to launch in older versions of IDEA because of a JUnit version mismatch: Spring Boot 2.2.x+ defaults to JUnit 5, while older IDEA defaults to JUnit 4 (my IDEA 2017.2 failed to launch the test class).

To resolve the conflict, exclude the JUnit Jupiter API from spring-boot-starter-test, as follows:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <version>${spring-boot-version}</version>
      <scope>test</scope>
      <exclusions>
        <exclusion>
          <groupId>org.junit.jupiter</groupId>
          <artifactId>junit-jupiter-api</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

Add the Kafka configuration to application.properties:

#=======================================================KAFKA MQ======================================================#
spring.kafka.bootstrap-servers=192.168.92.128:9092
# number of producer retries
spring.kafka.producer.retries=0
# ack level: how many partition replicas must acknowledge before the producer receives an ack (0, 1, or all/-1)
spring.kafka.producer.acks=all
# serializer and deserializer classes provided by Kafka
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# whether to auto-commit offsets
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

KafkaTemplate producer service (KafkaTemplateProduceService.java):

package com.wl.mq.kafka;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaTemplateProduceService {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaTemplateProduceService(KafkaTemplate<String,String> kafkaTemplate){
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String destination,String message){
        kafkaTemplate.send(destination,message);
    }

}

Listener (KafkaListenerService.java):

package com.wl.mq.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaListenerService {

    @KafkaListener(topics = "kafka-topic",groupId = "consumer_c")
    public void consumerAListener(ConsumerRecord<String, String> record){
        System.out.println(record.topic());
        System.out.println("======================kafka spring consumerC==========================");
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        System.out.println("=======================kafka-topic=========================");
    }

    @KafkaListener(topics = "kafka-topic",groupId = "consumer_d")
    public void consumerBListener(ConsumerRecord<String, String> record){
        System.out.println(record.topic());
        System.out.println("======================kafka spring consumerD==========================");
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        System.out.println("=======================kafka-topic=========================");
    }

}

Here we again listen on the kafka-topic queue, but with groupIds consumer_c and consumer_d respectively.

Run the same test code as before.

Test result: both listeners receive and print the message.

Reprinted from blog.csdn.net/name_is_wl/article/details/114640193