Detailed tutorial on installing Kafka/Zookeeper in Linux environment (Centos7)


Stand-alone installation tutorial


1. Install Zookeeper

1.1 Installation steps

Create a folder:

mkdir -p /usr/local/zookeeper

Enter the folder:

cd /usr/local/zookeeper

Download the archive:

wget --no-check-certificate  https://mirrors.aliyun.com/apache/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9-bin.tar.gz

If this ZooKeeper version is no longer available on the mirror, download the latest version from the official website instead. Then unpack the archive:

tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz
# Rename the directory
mv apache-zookeeper-3.5.9-bin zookeeper-3.5.9-bin
# Enter the directory
cd zookeeper-3.5.9-bin/

Inside the unpacked folder, create a data folder to store data files and a logs folder to store logs:

mkdir data
mkdir logs

Create the configuration file zoo.cfg

vim conf/zoo.cfg
tickTime = 2000
dataDir = /usr/local/zookeeper/zookeeper-3.5.9-bin/data
dataLogDir = /usr/local/zookeeper/zookeeper-3.5.9-bin/logs
clientPort = 2181
initLimit = 5
syncLimit = 2

The command vim conf/zoo.cfg both creates the configuration file and opens it. The conf directory already contains a zoo_sample.cfg sample configuration, but here we create a fresh zoo.cfg instead. This completes the installation.
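The zoo.cfg above can also be generated non-interactively; a minimal sketch, written against a scratch directory (ZK_HOME) so it can be tried anywhere. On the box from this tutorial, ZK_HOME would be /usr/local/zookeeper/zookeeper-3.5.9-bin:

```shell
# Create the data/logs directories and write zoo.cfg in one go.
# ZK_HOME defaults to a scratch path here; point it at the real install dir.
ZK_HOME=${ZK_HOME:-/tmp/zookeeper-demo}
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data" "$ZK_HOME/logs"
cat > "$ZK_HOME/conf/zoo.cfg" <<EOF
tickTime=2000
dataDir=$ZK_HOME/data
dataLogDir=$ZK_HOME/logs
clientPort=2181
initLimit=5
syncLimit=2
EOF
cat "$ZK_HOME/conf/zoo.cfg"
```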

1.2 Common commands

Start the service:

/usr/local/zookeeper/zookeeper-3.5.9-bin/bin/zkServer.sh start

If everything is configured correctly, the command reports that the service started successfully.

Check service status:

/usr/local/zookeeper/zookeeper-3.5.9-bin/bin/zkServer.sh status

Stop the service:

/usr/local/zookeeper/zookeeper-3.5.9-bin/bin/zkServer.sh stop

2. Install Kafka

2.1 Installation steps

Download:

 mkdir -p /usr/local/kafka
 cd /usr/local/kafka
 wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.7.2/kafka_2.12-2.7.2.tgz --no-check-certificate

Unpack the archive:

 tar -zxvf kafka_2.12-2.7.2.tgz 

Enter the directory and modify server.properties:

cd kafka_2.12-2.7.2
vim config/server.properties

Add the following configuration after the broker.id=0 line:

advertised.listeners=PLAINTEXT://192.168.29.128:9092

Replace 192.168.29.128 with your server's actual IP; the port defaults to 9092.
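The edit can also be scripted; a hedged sketch using GNU sed that appends the line right after broker.id=0. The file below is a stand-in for config/server.properties, and the IP is the placeholder from above:

```shell
# Create a stand-in for config/server.properties (on the real box you would
# point PROPS at the actual file instead of generating one).
PROPS=${PROPS:-/tmp/server.properties.demo}
printf 'broker.id=0\nnum.network.threads=3\n' > "$PROPS"
# Insert advertised.listeners immediately after the broker.id=0 line.
sed -i '/^broker.id=0/a advertised.listeners=PLAINTEXT://192.168.29.128:9092' "$PROPS"
cat "$PROPS"
```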

2.2 Common commands

Start:

/usr/local/kafka/kafka_2.12-2.7.2/bin/kafka-server-start.sh -daemon /usr/local/kafka/kafka_2.12-2.7.2/config/server.properties

Stop:

/usr/local/kafka/kafka_2.12-2.7.2/bin/kafka-server-stop.sh

2.3 Open ports

firewall-cmd --zone=public --add-port=2181/tcp --permanent  # ZooKeeper client connections
firewall-cmd --zone=public --add-port=9092/tcp --permanent  # Kafka default port
firewall-cmd --reload

2.4 Writing a startup script

ZooKeeper must be running before Kafka starts.

Create the startup script:

 vim kafkastart.sh
  #!/bin/sh
  # Start ZooKeeper
  /usr/local/zookeeper/zookeeper-3.5.9-bin/bin/zkServer.sh start
  echo "zookeeper start success"
  sleep 5
  # Start Kafka
  /usr/local/kafka/kafka_2.12-2.7.2/bin/kafka-server-start.sh -daemon /usr/local/kafka/kafka_2.12-2.7.2/config/server.properties
  echo "kafka start success"

Create a stop script (stop Kafka first so it can shut down cleanly while ZooKeeper is still running, then stop ZooKeeper):

 vim kafkastop.sh
  #!/bin/sh
  # Stop Kafka
  /usr/local/kafka/kafka_2.12-2.7.2/bin/kafka-server-stop.sh
  echo "kafka stop success"
  sleep 5
  # Stop ZooKeeper
  /usr/local/zookeeper/zookeeper-3.5.9-bin/bin/zkServer.sh stop
  echo "zookeeper stop success"

2.5 Run the script automatically at boot

vim /etc/rc.local   # edit the file and add the following line at the end
sh /usr/local/kafka/kafkastart.sh & # run the startup script in the background at boot
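A sketch of that rc.local change, run against a scratch file so it can be tried safely. On a real CentOS 7 machine the target is /etc/rc.d/rc.local, which ships without the execute bit, so the chmod +x step is required or the file is ignored at boot:

```shell
# RC_LOCAL defaults to a scratch path; on the real system use /etc/rc.d/rc.local.
RC_LOCAL=${RC_LOCAL:-/tmp/rc.local.demo}
touch "$RC_LOCAL"
# Append the startup line only if it is not already present (idempotent).
grep -q 'kafkastart.sh' "$RC_LOCAL" || echo 'sh /usr/local/kafka/kafkastart.sh &' >> "$RC_LOCAL"
# rc.local must be executable for it to run at boot on CentOS 7.
chmod +x "$RC_LOCAL"
```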

At this point, the stand-alone installation and configuration of Kafka under Linux is completed.

3. Integrating Kafka with Spring Boot to send and consume messages

  1. Add the dependency
<!--kafka -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
  2. Configuration file
#====================== kafka ===================
# Custom topic, used for the publish-subscribe pattern
spring.kafka.consumer.topic=earlyWarning
# Kafka broker address(es); separate multiple brokers with commas
spring.kafka.bootstrap-servers=101.201.235.150:9092
# By default an error is raised when a listened-to topic does not exist; set to false to suppress it
spring.kafka.listener.missing-topics-fatal=false
# Number of threads in the listener container, used to increase concurrency
#spring.kafka.listener.concurrency=3
# Identifies which application each Kafka request comes from
#spring.kafka.client-id=kafka001

#=============== producer =======================

# Serializers for the message key and the message body
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# When several messages are sent to the same partition, the producer batches them together. This sets the memory available to one batch, in bytes.
spring.kafka.producer.batch-size=65536
# Size of the producer's memory buffer
spring.kafka.producer.buffer-memory=524288

#=============== consumer =======================
# Default consumer group
spring.kafka.consumer.group-id=group-001
# Key/value deserializers
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Automatically commit the offset after a message is consumed (the offset records how far the consumer has read, so consumption can resume from that position next time)
spring.kafka.consumer.enable-auto-commit=true
# Start from the latest offset; usually historical messages are not needed, so consume from the tail of the subscribed topic
spring.kafka.consumer.auto-offset-reset=latest
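With enable-auto-commit=true, offsets are committed on a background timer, so a message can be marked as consumed before it has actually been processed. If at-least-once processing matters, one option (a sketch using standard spring-kafka properties, not part of the original setup) is to commit after each record:

```properties
# Disable timed auto-commit and let the listener container
# commit the offset after each record is processed.
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.listener.ack-mode=RECORD
```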

  3. Producer

import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

/**
 * Producer: sends messages to Kafka.
 * @author sxq
 * @date 2021/12/20 15:18
 */
@Component
public class KafkaSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a message to Kafka synchronously.
     */
    public void sendMessageSync(String channel, String message) throws InterruptedException, ExecutionException, TimeoutException {
        SendResult<String, String> sendResult = kafkaTemplate.send(channel, message).get(10, TimeUnit.SECONDS);
        System.out.println(sendResult);
    }

    /**
     * Send data asynchronously.
     * @param topic    topic name
     * @param message  data to send
     */
    public void sendMessageAsync(String topic, String message) {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                System.out.println("success:" + result);
            }

            @Override
            public void onFailure(Throwable ex) {
                System.out.println("failure:" + ex);
            }
        });
    }
}

  4. Consumer

import java.util.Optional;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * Consumer.
 * @author sxq
 * @date 2021/12/20 15:19
 */
@Component
public class KafkaConsumer {

    /**
     * Listen on the configured topic and read messages as they arrive.
     * @param record
     */
    @KafkaListener(topics = {"#{'${spring.kafka.consumer.topic}'}"}, groupId = "${spring.kafka.consumer.group-id}")
    public void consumerGroup1(ConsumerRecord<?, ?> record) {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            Object msg = message.get();
            System.out.println("consumerGroup1 consumed: Topic:" + record.topic() + ", Message:" + msg);
        }
    }

    @KafkaListener(topics = {"earlyWarning11"}, groupId = "001")
    public void receiveMessageUser(String message) {
        // Process the message received from the channel
        System.out.println("user:" + message);
    }
}


Origin blog.csdn.net/qq_38055805/article/details/122058460