Kafka installation in Linux

Kafka cluster deployment

Overview

This article explains how to build the following Kafka clusters. Two types of Kafka clusters are built here: a traditional cluster that uses ZooKeeper (ZK), and an experimental cluster that does not require ZK. Kafka will eventually abandon ZK, but at this stage the ZK-based deployment is still recommended, because the ZK-free mode is still experimental.

Environmental preparation

JDK installation

Kafka 3.0 deprecates JDK 8, so it is recommended to install JDK 11 or JDK 17 in advance.
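
On CentOS 7.9 (the OS used in the planning tables below), one way to install OpenJDK 11 from the distribution repositories and verify the version is, as a rough sketch:

yum install -y java-11-openjdk java-11-openjdk-devel
# Confirm the installed version
java -version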

Docker environment installation

A Docker-based deployment is covered later in this article, so it is recommended to install docker and docker-compose in advance.
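
To confirm both tools are available before continuing, you can check their versions:

docker --version
docker-compose --version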

Kafka cluster construction (ZK mode)

zookeeper cluster construction

The ZooKeeper package used here is apache-zookeeper-3.8.0-bin.tar.gz; other versions can be downloaded from the ZooKeeper official website.

zookeeper deployment

Prepare three Linux servers; their IP addresses are as follows

node IP
node1 192.168.245.129
node2 192.168.245.130
node3 192.168.245.131
Preparation

Perform the following operations on each of the three servers to build and configure the ZooKeeper cluster.

Download the installation package

You need to download the installation package on all three machines

mkdir /usr/local/zookeeper && cd /usr/local/zookeeper

wget http://archive.apache.org/dist/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz
Unzip the installation package
tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz
cd apache-zookeeper-3.8.0-bin/
Create a data storage directory
mkdir -p /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas
Create configuration file

Copy and create new configuration file

cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf
cp zoo_sample.cfg zoo.cfg
Edit configuration file
cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf
vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# ZooKeeper data directory; change this to the data directory created above
dataDir=/usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
# How many snapshots to keep; this option is already in the sample file, commented out, so simply uncomment it
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# How often (in hours) old logs and snapshots are purged; also present in the sample file, simply uncomment it
autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# Cluster server addresses and the ports used for inter-node communication; append these lines at the end of the file
server.1=192.168.245.129:2888:3888
server.2=192.168.245.130:2888:3888
server.3=192.168.245.131:2888:3888
Configure the myid file

The value in the myid file corresponds to the server.x=nodex:2888:3888 entries in the zoo.cfg configuration file and identifies the current ZooKeeper node. It is used during leader election when the ZooKeeper cluster starts.

node1 executes
echo 1 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid 
node2 executes
echo 2 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
node3 executes
echo 3 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
Start service

Starting from the first server, execute the following commands in sequence

cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin
# Start the service
sh bin/zkServer.sh start
# Check the process
jps
View cluster status
sh bin/zkServer.sh status
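
With all three nodes started, one node should report Mode: leader and the other two Mode: follower. You can also connect with the bundled CLI to confirm the ensemble is serving requests; a quick check, assuming the addresses above:

sh bin/zkCli.sh -server 192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
# Inside the CLI prompt, list the root znodes
ls /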


kafka cluster deployment

Deployment planning

Here I build a Kafka cluster in ZooKeeper mode, using Kafka version 3.3.1.

HOSTNAME IP OS
kafka01 192.168.245.129 centos7.9
kafka02 192.168.245.130 centos7.9
kafka03 192.168.245.131 centos7.9
Preparation
Download Kafka

All three machines need to download Kafka

mkdir /usr/local/kafka/ && cd /usr/local/kafka/
wget https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz
Unzip and install

After downloading the kafka installation package, you need to decompress it and create a log directory.

tar -zxvf kafka_2.12-3.3.1.tgz
cd kafka_2.12-3.3.1 && mkdir logs
Configure server.properties

We need to edit Kafka's configuration file, config/server.properties, under the Kafka installation directory.

kafka01 configuration
vi config/server.properties
# Broker ID
broker.id=1
# Listener for this node
listeners=PLAINTEXT://192.168.245.129:9092
# Advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.129:9092
# Log directory path (changed here)
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
# ZooKeeper connection string
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
kafka02 configuration
vi config/server.properties
# Broker ID
broker.id=2
# Listener for this node
listeners=PLAINTEXT://192.168.245.130:9092
# Advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.130:9092
# Log directory path (changed here)
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
# ZooKeeper connection string
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
kafka03 configuration
vi config/server.properties
# Broker ID
broker.id=3
# Listener for this node
listeners=PLAINTEXT://192.168.245.131:9092
# Advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.131:9092
# Log directory path (changed here)
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
# ZooKeeper connection string
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
Start the cluster

Run the following command on each of the three nodes to start the cluster

nohup sh bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &
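
Once all three brokers are running, you can confirm that they registered with ZooKeeper; a quick check from any ZooKeeper node, using the paths set up earlier:

cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin
sh bin/zkCli.sh -server 192.168.245.129:2181
# Inside the CLI prompt, the broker IDs 1, 2 and 3 configured above should be listed
ls /brokers/ids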
Create topic
sh bin/kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 --bootstrap-server 192.168.245.129:9092


View list of topics
sh bin/kafka-topics.sh --list --bootstrap-server 192.168.245.129:9092  


View topic information
sh bin/kafka-topics.sh --bootstrap-server 192.168.245.129:9092 --describe --topic test

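
To verify the cluster end to end, you can produce a few messages and read them back with the console clients shipped with Kafka; a minimal smoke test using the test topic created above:

# Produce messages (type a few lines, then press Ctrl+C to exit)
sh bin/kafka-console-producer.sh --bootstrap-server 192.168.245.129:9092 --topic test
# Read them back from the beginning (any broker address works as the bootstrap server)
sh bin/kafka-console-consumer.sh --bootstrap-server 192.168.245.130:9092 --topic test --from-beginning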

Docker deployment

Create data directory
mkdir -p /tmp/kafka/broker{1..3}/{data,logs}
mkdir -p /tmp/zookeeper/zookeeper/{data,datalog,logs,conf}
Zookeeper configuration
Create Zookeeper configuration file
vi /tmp/zookeeper/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data
dataLogDir=/datalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
Create Zookeeper log configuration
vi /tmp/zookeeper/zookeeper/conf/log4j.properties
# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/logs
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=DEBUG
zookeeper.tracelog.dir=.
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n


#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

Create Docker orchestration file
vi docker-compose.yaml
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: wurstmeister/zookeeper
    restart: unless-stopped
    hostname: zoo1
    ##volumes:
      ##- "/tmp/zookeeper/zookeeper/data:/data"
      ##- "/tmp/zookeeper/zookeeper/datalog:/datalog"
      ##- "/tmp/zookeeper/zookeeper/logs:/logs"
      ##- "/tmp/zookeeper/zookeeper/conf:/opt/zookeeper-3.4.13/conf"
    ports:
      - "2181:2181"
    networks:
      - kafka
  kafka1:
    container_name: kafka1
    image: wurstmeister/kafka
    ports:
      - "9091:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.56.101                   ## Change to the Docker host IP
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.56.101:9091  ## Change to the Docker host IP
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_ADVERTISED_PORT: 9091
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_DIRS: /kafka/data
    volumes:
      - /tmp/kafka/broker1/logs:/opt/kafka/logs
      - /tmp/kafka/broker1/data:/kafka/data
    depends_on:
      - zookeeper
    networks:
      - kafka
  kafka2:
    container_name: kafka2
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.56.101                   ## Change to the Docker host IP
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.56.101:9092  ## Change to the Docker host IP
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 2
      KAFKA_LOG_DIRS: /kafka/data
    volumes:
      - /tmp/kafka/broker2/logs:/opt/kafka/logs
      - /tmp/kafka/broker2/data:/kafka/data
    depends_on:
      - zookeeper
    networks:
      - kafka
  kafka3:
    container_name: kafka3
    image: wurstmeister/kafka
    ports:
      - "9093:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.56.101                   ## Change to the Docker host IP
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.56.101:9093  ## Change to the Docker host IP
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
      KAFKA_MIN_INSYNC_REPLICAS: 2
      KAFKA_BROKER_ID: 3
      KAFKA_LOG_DIRS: /kafka/data
    volumes:
      - /tmp/kafka/broker3/logs:/opt/kafka/logs
      - /tmp/kafka/broker3/data:/kafka/data
    depends_on:
      - zookeeper
    networks:
      - kafka
  kafka-manager:
    image: sheepkiller/kafka-manager              ## Image: an open-source web UI for managing Kafka clusters
    environment:
        ZK_HOSTS: 192.168.56.101                 ## Change to the Docker host IP
    ports:
      - "9090:9000"                               ## Exposed port (kafka-manager listens on 9000 inside the container)
    networks:
      - kafka
networks:
  kafka:
    driver: bridge
Start service
docker-compose up -d
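
After the containers start, you can check their status and tail a broker's log to confirm it connected to ZooKeeper; for example:

docker-compose ps
docker logs -f kafka1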

Kafka cluster construction (kraft mode)

Why does kafka rely on zookeeper?

ZooKeeper is an open-source distributed coordination service used to manage distributed applications. Kafka uses ZooKeeper to store cluster metadata, including topic configuration, partition locations, and so on. In ZK mode, Kafka cannot work without ZooKeeper.

How to make kafka get rid of zookeeper dependency?

There are some problems in using ZooKeeper as Kafka's external metadata management system, such as data duplication, increased deployment complexity, and the need for additional Java processes.

The release of Apache Kafka 3.0.0 paves the way for completely removing ZooKeeper from Kafka: Kafka Raft supports snapshots and self-management of the metadata topic. Version 3.1.0, released on 2022-01-24, further improved on 3.0.0.

To run Kafka without ZooKeeper, you can run it in Kafka Raft metadata mode (KRaft). When a Kafka cluster is in KRaft mode, it stores its metadata in a KRaft quorum of controller nodes; the metadata is kept in the internal Kafka topic @metadata.

Note that KRaft first became available for testing in Kafka 2.8, where it was an early experimental feature that should not be used in production.

Introduction to KRaft

Kafka's consensus mechanism, KRaft, was initially released as a preview. In the future, KRaft will replace ZooKeeper as Apache Kafka's built-in consensus mechanism. A trial version of this mode shipped in version 2.8, and in the 3.x series KRaft is a stable release.

Kafka clusters running in KRaft mode do not store metadata in ZooKeeper; that is, when deploying a new cluster there is no need to deploy a ZK cluster, because Kafka stores its metadata in the KRaft quorum of controller nodes. KRaft brings many benefits, such as support for more partitions, faster controller failover, and avoiding a whole class of problems caused by inconsistencies between the metadata cached on the controller and the data stored in ZK.

KRaft architecture

First, let's look at how KRaft differs from previous versions at the system-architecture level. The overall architecture of Kafka after KRaft removes ZooKeeper is shown in the figure below, which compares the before and after architectures:

(Figure: before-and-after architecture comparison — left, Kafka with ZooKeeper; right, Kafka in KRaft mode)

In the picture above, black represents Broker (message broker service), and brown/blue represents Controller (cluster controller).

Left picture (kafka2.0)

All nodes in the cluster have the Broker role. ZooKeeper's election capability is used to elect a Controller from the three Brokers, and the Controller saves cluster metadata (such as topic configuration, consumption progress, etc.) to ZooKeeper for distributed coordination between the nodes in the cluster.

Right picture (kafka3.0)

Suppose there are four Brokers in the cluster, and the configuration designates three of them as Controller roles (blue). The KRaft mechanism is used to elect the active controller: one of the three Controllers is elected as the active controller (brown) and the other two act as standbys. ZooKeeper is no longer needed; the cluster metadata is stored as Kafka logs (that is, as messages in an internal topic).

Service deployment

Deployment planning

There is currently no cluster-setup document on the official site; after downloading the installation package you can read the README.md file under config/kraft. Following those instructions, I build a Kafka cluster in KRaft mode here, using version 3.3.1.

HOSTNAME IP OS
kafka01 192.168.245.129 centos7.9
kafka02 192.168.245.130 centos7.9
kafka03 192.168.245.131 centos7.9
Preparation
Install JDK11

Kafka 3.0 deprecates JDK 8, so it is recommended to install JDK 11 or JDK 17 in advance.

Download Kafka

All three machines need to download Kafka

mkdir /usr/local/kafka/ && cd /usr/local/kafka/
wget https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz
Unzip and install

After downloading the kafka installation package, you need to decompress it and create a log directory.

tar -zxvf kafka_2.12-3.3.1.tgz
cd kafka_2.12-3.3.1 && mkdir logs
Configure server.properties

For KRaft mode, the configuration file to edit is config/kraft/server.properties under the Kafka installation directory.

kafka01 configuration
vi config/kraft/server.properties
# Node roles
process.roles=broker,controller
# Node ID, associated with the roles this node plays
node.id=1
# Controller quorum addresses
controller.quorum.voters=1@192.168.245.129:9093,2@192.168.245.130:9093,3@192.168.245.131:9093
# Listeners for this node
listeners=PLAINTEXT://192.168.245.129:9092,CONTROLLER://192.168.245.129:9093
# Advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.129:9092
# Log directory path (changed here)
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
kafka02 configuration
vi config/kraft/server.properties
# Node roles
process.roles=broker,controller
# Node ID, associated with the roles this node plays
node.id=2
# Controller quorum addresses
controller.quorum.voters=1@192.168.245.129:9093,2@192.168.245.130:9093,3@192.168.245.131:9093
# Listeners for this node
listeners=PLAINTEXT://192.168.245.130:9092,CONTROLLER://192.168.245.130:9093
# Advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.130:9092
# Log directory path (changed here)
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
kafka03 configuration
vi config/kraft/server.properties
# Node roles
process.roles=broker,controller
# Node ID, associated with the roles this node plays
node.id=3
# Controller quorum addresses
controller.quorum.voters=1@192.168.245.129:9093,2@192.168.245.130:9093,3@192.168.245.131:9093
# Listeners for this node
listeners=PLAINTEXT://192.168.245.131:9092,CONTROLLER://192.168.245.131:9093
# Advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.131:9092
# Log directory path (changed here)
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
Configuration parameter explanation
process.roles

Each Kafka server now has a new configuration item called process.roles. This parameter can take the following values:

  • If process.roles=broker, the server acts as a broker in KRaft mode.
  • If process.roles=controller, the server acts as a controller in KRaft mode.
  • If process.roles=broker,controller, the server acts as both broker and controller in KRaft mode.
  • If process.roles is not set, the cluster is assumed to be running in ZooKeeper mode.

As mentioned previously, it is currently not possible to convert back and forth between ZooKeeper mode and KRaft mode without reformatting the directory. Nodes that act as both Broker and Controller are called "combined" nodes.

For simple scenarios, combined nodes are easier to run and deploy, and avoid the fixed JVM memory overhead of running multiple processes. The key disadvantage is that the controller is less isolated from the rest of the system; for example, if activity on the broker causes an out-of-memory condition, the controller part of the server is not isolated from that OOM.
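
For comparison, a dedicated (controller-only) node would carry only the controller role and the CONTROLLER listener. A minimal sketch of such a node's config/kraft/server.properties, assuming a hypothetical fourth host 192.168.245.132 added to the quorum as node 4 (not part of the three-node plan used in this article):

# Controller-only node (hypothetical example)
process.roles=controller
node.id=4
controller.quorum.voters=1@192.168.245.129:9093,2@192.168.245.130:9093,3@192.168.245.131:9093,4@192.168.245.132:9093
listeners=CONTROLLER://192.168.245.132:9093
controller.listener.names=CONTROLLER
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs

Note that every node's controller.quorum.voters would then have to list all four voters.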

node.id

This will be used as the node ID in the cluster, a unique identifier. According to our pre-planning (above), this value will be different on different servers.

In effect this replaces broker.id from Kafka 2.x: in version 3.0 a Kafka instance no longer plays only the broker role but may also act as a controller, so the property was renamed to node.id.

controller.quorum.voters

All nodes in the system must have controller.quorum.voters configured. This setting identifies which nodes are the quorum's voter nodes.

All nodes that may become controllers need to be included in this configuration. This is similar to ZooKeeper deployments, where all ZooKeeper servers must be listed in the zookeeper.connect configuration; unlike the ZooKeeper configuration, however, controller.quorum.voters must also include the id of each node. The format is: id1@host1:port1,id2@host2:port2.

Build a KRaft cluster

Generate cluster ID

Generate a unique cluster ID (run this only once, on any one Kafka server). This step does not exist when installing Kafka 2.x.

sh bin/kafka-storage.sh random-uuid
k4CR-54TQZajZSvxWkADtQ
Format the data directory

Use the generated cluster ID together with the configuration file to format the storage directory log.dirs. This step also confirms that the configuration and path actually exist and that the kafka user has access rights (i.e. that the preparations were done correctly). Every host must execute this command.

sh bin/kafka-storage.sh format -t k4CR-54TQZajZSvxWkADtQ -c config/kraft/server.properties
View meta.properties

After the formatting operation is completed, you will find an extra meta.properties file in the log.dirs directory we defined.

The meta.properties file stores the ID of the current Kafka node (node.id) and the cluster the node belongs to (cluster.id).

cat logs/meta.properties
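
On kafka01, the file looks roughly like this (illustrative example; cluster.id will match the ID generated earlier):

# logs/meta.properties (example)
version=1
cluster.id=k4CR-54TQZajZSvxWkADtQ
node.id=1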


Start the cluster

Run the following command on each node to start the cluster

nohup sh bin/kafka-server-start.sh config/kraft/server.properties 1>/dev/null 2>&1 &
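
Once all three nodes are running, you can inspect the KRaft controller quorum with the kafka-metadata-quorum.sh tool added in Kafka 3.3; a quick check, assuming the addresses above:

sh bin/kafka-metadata-quorum.sh --bootstrap-server 192.168.245.129:9092 describe --status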
Create topic
sh bin/kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 --bootstrap-server 192.168.245.129:9092


View list of topics
sh bin/kafka-topics.sh --list --bootstrap-server 192.168.245.129:9092  


View topic information
sh bin/kafka-topics.sh --bootstrap-server 192.168.245.129:9092 --describe --topic test

Docker deployment

Create data directory
mkdir -p /tmp/kafka/broker{1..3}/{data,logs}
Create Docker orchestration file
vi docker-compose.yaml
version: "3"
services:
  kafka01:
    container_name: kafka01
    image: bitnami/kafka:3.3.1
    user: root
    ports:
      - '9092:9092'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.245.129:9092
      - KAFKA_BROKER_ID=1
      - KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka01:9093,2@kafka02:9093,3@kafka03:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - /tmp/kafka/broker1/data:/bitnami/kafka
    networks:
      - kafka
  kafka02:
    container_name: kafka02
    image: bitnami/kafka:3.3.1
    user: root
    ports:
      - '9192:9092'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.245.129:9192
      - KAFKA_BROKER_ID=2
      - KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka01:9093,2@kafka02:9093,3@kafka03:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - /tmp/kafka/broker2/data:/bitnami/kafka
    networks:
      - kafka
  kafka03:
    container_name: kafka03
    image: bitnami/kafka:3.3.1
    user: root
    ports:
      - '9292:9092'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.245.129:9292
      - KAFKA_BROKER_ID=3
      - KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka01:9093,2@kafka02:9093,3@kafka03:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - /tmp/kafka/broker3/data:/bitnami/kafka
    networks:
      - kafka
networks:
  kafka:
    driver: bridge
Start service
docker-compose up -d
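
After the containers are up, you can verify the KRaft cluster from inside one of them; a quick check, assuming the Bitnami image keeps the Kafka scripts on its PATH (it normally does):

docker-compose ps
# Create a topic replicated across the three brokers and describe it
docker exec -it kafka01 kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server kafka01:9092
docker exec -it kafka01 kafka-topics.sh --describe --topic test --bootstrap-server kafka01:9092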


Origin blog.csdn.net/weixin_44702984/article/details/131605176