A super simple Kafka installation and deployment example on Windows, plus two Spring Boot + Kafka integration examples (producer/consumer)

A brief explanation of what Kafka is

Apache Kafka is a kind of message middleware. I found that many people don't know what message middleware is, so before we start, here is a rough explanation. It is only a rough one; Kafka can do far more than this.

Take the producer/consumer model: the producer produces eggs and the consumer eats them, one at a time. Suppose the consumer chokes on an egg (the system goes down) while the producer keeps producing; the newly produced eggs are lost. Or suppose the producer is very strong (a high transaction volume) and produces 100 eggs per second while the consumer can only eat 50 per second; after a while the consumer cannot keep up (messages back up and the system eventually times out), the consumer refuses to eat any more, and the "eggs" are lost again. So we put a basket between them: all the produced eggs go into the basket, and the consumer takes eggs from the basket. The eggs are no longer lost, they are all in the basket, and this basket is Kafka.
The eggs are actually the "data stream": systems interact with each other by passing data streams (over TCP, HTTP, and so on), and each unit of data is called a message.
A full message queue is simply a full basket that cannot hold any more "eggs", so you hurry up and add a few more baskets, which is what scaling Kafka out means.
Now everyone knows what Kafka does: it is the "basket".

Kafka noun explanation

Later you will see some Kafka terms such as topic, producer, consumer, and broker. Here is a brief explanation of each.

producer: the producer, the one that produces the "eggs".

consumer: the consumer, the one that eats the "eggs" the producer makes.

topic: think of it as a label. Every "egg" a producer makes carries a label (the topic). Consumers do not eat every "egg" from every producer; by looking at the label, a consumer can selectively "eat" the "eggs" of the producers it cares about.

broker: the basket.

Everyone should learn to think abstractly. The above is only the business view. From a technical perspective, the topic label is actually a queue: the producer puts all its "eggs (messages)" into the corresponding queue, and the consumer takes them from the designated queue.


Author: Orc
link: https://www.orchome.com/kafka/index
Source: OrcHome

The above is the introduction from OrcHome's Chinese Kafka tutorial. I find it very vivid, much easier to understand than dry jargon, and well suited to beginners.

1. Kafka installation and deployment example under Windows

1. First download Apache Kafka

http://kafka.apache.org/downloads

Pick the binary download on that page (e.g. kafka_2.12-2.3.0).

2. Unzip the compressed package

3. Then enter the kafka_2.12-2.3.0\bin\windows folder, type cmd in the address bar, and press Enter to open a command prompt in that folder

PS: Each of the following 5 commands needs its own new cmd window, and the windows must stay open (except the topic-creation command, whose window can be closed after it runs)

4. First start ZooKeeper; enter the command: .\zookeeper-server-start.bat ..\..\config\zookeeper.properties

Start Kafka and enter the command: .\kafka-server-start.bat ..\..\config\server.properties

Then create a topic named test (I had already created it, so the screenshot uses a different topic name); enter the command: .\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Then create one or more producers, set topic to test, enter the command: .\kafka-console-producer.bat --broker-list localhost:9092 --topic test

Finally create one or more consumers, set the topic to test, enter the command: .\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

Messages sent from the producer can be received by consumers under the same topic. Congratulations, you have completed a simple Kafka producer/consumer example.

Next, we will implement two simple producer/consumer examples of Spring Boot integrated with Kafka.

2. Kafka integration examples under Spring Boot

First, the first approach: depending on org.apache.kafka directly. This is the approach used in the official Kafka documentation, and the code is more verbose.

Let me first mention a problem I ran into: I foolishly assumed that after introducing the dependency I would not need to start Kafka myself, so Kafka was never started and nothing worked. In fact, you still need to start Kafka first; the code below only creates producer and consumer instances and does not start Kafka itself.

The default port of zookeeper: 2181

Kafka's default port: 9092

1. First introduce the dependency of org.apache.kafka in pom.xml

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>1.1.0</version>
        </dependency>

Note that the Producer API and Consumer API implementations may differ slightly between versions. My Spring Boot version is 1.5.10.RELEASE; you can check the API examples for your version in the official documentation: http://kafka.apache.org/documentation/

2. Start to implement the producer instance

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

/**
 * @author ZZJ
 * @description:
 * @date 2019-10-10 15:23
 */
public class MyKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // Kafka broker address and port
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer"); // key serializer
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer"); // value serializer

        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test", "hello to MykafkaComsumer")); // topic and message

        producer.close();
    }
}

Run the main function to produce a message to the topic named test
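If you want confirmation that the broker actually accepted the record, send() also accepts a callback that fires when the write is acknowledged or fails. Below is a minimal sketch under the same configuration as above; the class name MyKafkaProducerWithCallback is my own, not from the original post.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class MyKafkaProducerWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        // send() is asynchronous; the callback runs once the broker acknowledges (or rejects) the write
        producer.send(new ProducerRecord<>("test", "hello with callback"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace(); // e.g. broker not started
                    } else {
                        System.out.printf("sent to partition %d at offset %d%n",
                                metadata.partition(), metadata.offset());
                    }
                });
        producer.close(); // flushes any pending records before returning
    }
}
```

Without the callback (as in the example above), a failed send can go unnoticed because send() returns immediately.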

3. Start to implement consumer examples

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

/**
 * @author ZZJ
 * @description:
 * @date 2019-10-10 15:48
 */
public class MyKafkaConsumer {
    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "myGroup"); // consumer group name
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test")); // subscribe to one or more topics
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100); // poll for messages in a loop
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }
}

Run the main function to keep polling the subscribed topic in a loop
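With enable.auto.commit set to "true" as above, offsets are committed on a timer, so a crash between commit and processing can skip messages. A common variant is to commit manually after processing; here is a minimal sketch (the class name is my own, the rest mirrors the consumer above):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class MyKafkaConsumerManualCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "myGroup");
        props.put("enable.auto.commit", "false"); // turn off the automatic offset commit
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test"));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
                }
                // commit only after the batch has been processed, so a crash
                // mid-batch re-delivers the uncommitted records on restart
                consumer.commitSync();
            }
        } finally {
            consumer.close();
        }
    }
}
```

The trade-off is at-least-once delivery: after a crash, some records may be delivered twice, but none are silently skipped.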

 

Now the second approach: introducing the spring-kafka dependency. The code for this approach is much simpler, and consumers can listen through @KafkaListener.

1. First introduce the dependency of org.springframework.kafka in pom.xml

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.1.7.RELEASE</version>
        </dependency>

2. Add Kafka configuration in application.properties file

spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=myGroup
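Spring Boot maps further Kafka settings under the same spring.kafka prefix, so behavior such as where a new consumer group starts reading can also be controlled from configuration. A hedged example of a few commonly used properties (the first two match this post's setup; verify the rest against your Spring Boot version's documentation):

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=myGroup
# start from the earliest offset when the group has no committed offset yet
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
```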

3. Start to implement the producer instance

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

/**
 * @author ZZJ
 * @description:
 * @date 2019-10-10 11:09
 */
@Service
public class KafkaProducer2 {
    @Autowired
    KafkaTemplate kafkaTemplate;

    public void sender(String topic,String value){
        kafkaTemplate.send(topic,value);
    }
}

The producer sends messages through the KafkaTemplate class
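In this spring-kafka version, KafkaTemplate.send() returns a ListenableFuture, so you can attach a callback to log success or failure instead of sending blindly. A sketch under that assumption; the class and method names here are my own, not from the original post:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Service
public class KafkaProducer2WithCallback {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void senderWithCallback(String topic, String value) {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, value);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                System.out.println("sent, offset = " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                ex.printStackTrace(); // e.g. broker unreachable
            }
        });
    }
}
```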

4. Start to implement consumer examples

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

/**
 * @author ZZJ
 * @description:
 * @date 2019-10-10 11:32
 */
@Service
public class KafkaConsumer2 {
    @KafkaListener(topics = "test")
    public void consumer(String message){
        System.out.println(message);
    }
}

The @KafkaListener annotation here can monitor the topic
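If you need the message metadata (topic, partition, offset) and not just the payload, a @KafkaListener method can also take the whole ConsumerRecord, and the topics attribute accepts several topic names. A sketch for illustration; the class name and the second topic name here are made up:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class KafkaConsumer3 {
    // listens on two topics; "other-topic" is a hypothetical name
    @KafkaListener(topics = {"test", "other-topic"})
    public void consume(ConsumerRecord<String, String> record) {
        System.out.printf("topic = %s, offset = %d, value = %s%n",
                record.topic(), record.offset(), record.value());
    }
}
```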

5. Create a test class, inject the producer and consumer classes

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class KafkaProducer2Test {

    @Autowired
    private KafkaProducer2 kafkaProducer2; // inject the producer
    @Autowired
    private KafkaConsumer2 kafkaConsumer2; // inject the consumer; once injected, it starts listening

    @Test
    public void sender() {
        try { 
            Thread.sleep(3000); // sleep so the consumer is listening before the message is sent
            kafkaProducer2.sender("test","8888----7777"); // producer sends a message
            Thread.sleep(80000000); // long sleep to keep the process alive so the consumer can receive
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

The producers and consumers of the three examples above can all exchange messages with each other, and you can run multiple instances at the same time to test. This is what I pieced together after studying several blogs, suitable for beginners to understand and quickly get Kafka running.

 

 


Origin blog.csdn.net/qq_35530005/article/details/102484310