【Kafka】Hands-On Practice

Enough talk; the fastest way to learn is to just do it, so let's dive straight in.

I. Install the JDK

This needs no walkthrough; any Java developer has done it. One caveat: pick a 1.8 build, and when setting the JAVA_HOME environment variable, do not use the default path "C:\Program Files\Java\jdk1.8.0_151". A folder path containing spaces can make the Kafka server fail to start later, so install the JDK to a path without spaces.

II. ZooKeeper

Not covered here either; search online if you need a ZooKeeper walkthrough.
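Alternatively, if you'd rather not install ZooKeeper separately, the Kafka binary distribution (downloaded in the next section) ships with a bundled ZooKeeper and a start script. A minimal sketch, assuming the same install path used throughout this article:

    Open a new cmd window:
    cd D:\Tool\kafka_2.11-2.1.1
    .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties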

III. Kafka

Download: http://kafka.apache.org/downloads

  1. Pick the Binary download and unpack it.
  2. Edit the Kafka config file, D:\Tool\kafka_2.11-2.1.1\config\server.properties.
  3. Find and edit log.dirs=D:\Tool\kafka_2.11-2.1.1\kafka-log (any folder you like). Note that in .properties files the backslash is an escape character, so on Windows it is safer to write the path with forward slashes or doubled backslashes.
  4. Find and edit zookeeper.connect=localhost:2181, which points Kafka at the local ZooKeeper (this is the default, so it can stay unchanged).
  5. By default Kafka listens on port 9092 and connects to ZooKeeper on its default port 2181; a sketch of the edited file follows this list.
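After editing, the relevant lines of server.properties might look like the sketch below (the forward-slash path is an assumption following the note in step 3; the commented listeners line is present in the stock file):

# excerpt from server.properties (sketch)
log.dirs=D:/Tool/kafka_2.11-2.1.1/kafka-log
zookeeper.connect=localhost:2181
# the broker listens on 9092 by default; uncomment to make it explicit
#listeners=PLAINTEXT://:9092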

IV. Start Kafka

Make sure ZooKeeper is running before you start Kafka.

① Start the Kafka server

    Open a new cmd window:
    cd D:\Tool\kafka_2.11-2.1.1
    .\bin\windows\kafka-server-start.bat .\config\server.properties

② Create a topic

    Open a new cmd window:
    cd D:\Tool\kafka_2.11-2.1.1\bin\windows
    kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic ceshi
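To double-check that the topic was created, you can list the topics registered in ZooKeeper from the same window:

    kafka-topics.bat --list --zookeeper localhost:2181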

③ Create a producer

    Open a new cmd window:
    cd D:\Tool\kafka_2.11-2.1.1\bin\windows
    kafka-console-producer.bat --broker-list localhost:9092 --topic ceshi

④ Create a consumer

    Open a new cmd window:
    cd D:\Tool\kafka_2.11-2.1.1\bin\windows
    kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic ceshi --from-beginning

Anything you now type into the producer window should show up in the consumer window.

V. Spring Boot + Kafka

1. Dependencies

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>2.2.6.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.1.0</version>
        </dependency>
 

Note: there is a major pitfall here. Because of versioning, the spring-kafka and kafka-clients versions must be compatible with each other. The original post showed the compatibility matrix as an image; the current matrix is on the Spring for Apache Kafka project page. The pairing above (spring-kafka 2.2.6.RELEASE with kafka-clients 2.1.0) is the one verified in this article.

2. Configuration (application.properties)

#============== kafka ===================
# Kafka broker address(es); multiple comma-separated brokers are allowed
#spring.kafka.bootstrap-servers=123.xxx.x.xxx:19092,123.xxx.x.xxx:19093,123.xxx.x.xxx:19094
spring.kafka.bootstrap-servers=127.0.0.1:9092
#=============== producer =======================

spring.kafka.producer.retries=0
# producer batch size in bytes (records for the same partition are grouped into batches up to this size)
spring.kafka.producer.batch-size=16384
# total memory the producer may use to buffer unsent records, in bytes
spring.kafka.producer.buffer-memory=33554432

# serializers for the record key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#=============== consumer =======================
# default consumer group id
spring.kafka.consumer.group-id=test-app

spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100ms

# deserializers for the record key and value
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

#spring.kafka.consumer.bootstrap-servers=192.168.8.111:9092
#spring.kafka.consumer.zookeeper.connect=192.168.8.103:2181
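If you prefer Java configuration over application.properties, the producer half of the setup above can be expressed roughly as follows. This is a sketch: the class name KafkaProducerConfig is mine, and Spring Boot's auto-configuration already does the equivalent from the properties file.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);        // batch size in bytes
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);  // buffer memory in bytes
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}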

3. Client code (producer wrapper)

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Component
@Slf4j
public class KafkaClient {
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    /**
     * Send an MQ message (with a unique key).
     *
     * @param topic   topic
     * @param key     unique key of the record
     * @param jsonStr payload (JSON string)
     */
    public void sendMsg(String topic, String key, String jsonStr) {
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(topic, key, jsonStr);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                log.error("Producer failed to send to topic {}: {}", topic, throwable.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> sendResult) {
                log.info("Producer sent to topic {} successfully: {}", topic, sendResult.toString());
            }
        });
    }


    /**
     * Send an MQ message (no key; the partition is chosen by the default partitioner).
     *
     * @param topic   topic
     * @param jsonStr payload (JSON string)
     */
    public void sendMsg(String topic, String jsonStr) {
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(topic, jsonStr);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                log.error("Producer failed to send to topic {}: {}", topic, throwable.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> sendResult) {
                log.info("Producer sent to topic {} successfully: {}", topic, sendResult.toString());
            }
        });
    }

}
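A quick usage sketch of the keyed overload (the service class and the key value are hypothetical): records with the same key always land on the same partition, which preserves per-key ordering.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class DemoService {

    @Autowired
    private KafkaClient kafkaClient;

    public void placeOrder() {
        // same key => same partition => consumers see this key's records in send order
        kafkaClient.sendMsg("test", "order-1", "{\"id\":333,\"age\":444}");
    }
}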

4. Consumer

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class TestListener {
    @KafkaListener(topics = {"test"})
    public void receive(ConsumerRecord<?, ?> record) {
        log.info("Consumed from topic test, key: {}", record.key());
        log.info("Consumed from topic test, value: {}", record.value().toString());
        // TODO: 2020/1/8 handle your business logic here
    }

}
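Since the payload is a JSON string, the natural next step in receive() is to parse it back into an object. A minimal sketch, assuming Gson is on the classpath and using the Foo DTO defined in the next step (the listener class name is mine):

import com.google.gson.Gson;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class FooListener {

    private final Gson gson = new Gson();

    @KafkaListener(topics = {"test"})
    public void receive(ConsumerRecord<String, String> record) {
        // deserialize the JSON written by KafkaController back into a Foo
        Foo foo = gson.fromJson(record.value(), Foo.class);
        log.info("Received Foo with id={}, age={}", foo.getId(), foo.getAge());
    }
}

Note that FooListener and TestListener share the default group id from the configuration above, so they would compete for the topic's partitions; in a real setup, keep only one of them or set a distinct groupId on the annotation.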

5. Test code

import lombok.Data;

@Data
public class Foo {
    private Integer id;
    private Integer age;
}



import com.google.gson.Gson;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/kafka")
@Slf4j
public class KafkaController {
    @Autowired
    private KafkaClient kafkaClient;


    @GetMapping("/test1")
    public void test1() {
        log.info("Start of Kafka send test");
        Foo foo = new Foo();
        foo.setId(333);
        foo.setAge(444);
        kafkaClient.sendMsg("test", new Gson().toJson(foo));
        log.info("End of Kafka send test");
    }
}
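Call the endpoint to trigger a send (assuming the application listens on port 8088, as the log below indicates):

    curl http://localhost:8088/kafka/test1

The console output then looks like this: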



2020-01-08 17:00:50.361 [http-nio-8088-exec-3] INFO  com.example.demo.controller.KafkaController - Start of Kafka send test
2020-01-08 17:00:51.880 [http-nio-8088-exec-3] INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [127.0.0.1:9092]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2020-01-08 17:00:51.945 [http-nio-8088-exec-3] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.1.0
2020-01-08 17:00:51.946 [http-nio-8088-exec-3] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : eec43959745f444f
2020-01-08 17:00:51.956 [kafka-producer-network-thread | producer-1] INFO  org.apache.kafka.clients.Metadata - Cluster ID: 2hor2CFCQo6A7eXxQ8DZ9A
2020-01-08 17:00:55.991 [http-nio-8088-exec-3] INFO  com.example.demo.controller.KafkaController - End of Kafka send test
2020-01-08 17:00:55.994 [kafka-producer-network-thread | producer-1] INFO  com.example.demo.common.kafka.KafkaClient - Producer sent to topic test successfully: SendResult [producerRecord=ProducerRecord(topic=test, partition=null, headers=RecordHeaders(headers = [], isReadOnly = true), key=null, value={"id":333,"age":444}, timestamp=null), recordMetadata=test-0@3]
2020-01-08 17:00:56.045 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  com.example.demo.common.kafka.KafkaConsumer - Consumed from topic test, key: null
2020-01-08 17:00:56.045 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  com.example.demo.common.kafka.KafkaConsumer - Consumed from topic test, value: {"id":333,"age":444}

References: https://blog.csdn.net/Black1499/article/details/90474929

https://www.cnblogs.com/coloz/p/10487679.html


Reposted from: blog.csdn.net/hy_coming/article/details/103895023