[Kafka Progressive Series 003] Building a Kafka Cluster in a Docker Environment

In the previous section, [Kafka Advanced Series 002] Kafka Installation and Startup and Message Sending in a Docker Environment, we demonstrated how to install and start Kafka in Docker and successfully tested sending and receiving Kafka messages.

In a real production environment, Kafka is deployed as a cluster. The common architecture is as follows:
(Architecture diagram: a ZooKeeper node coordinating multiple Kafka Brokers.)

A Kafka cluster is composed of multiple Brokers, each corresponding to one Kafka instance. ZooKeeper is responsible for leader election in the Kafka cluster and for rebalance operations when a Consumer Group changes.

This article will demonstrate how to build a Zookeeper + Kafka cluster in a Docker environment.

Through this article, you will learn:

  • How to use Docker to build a Kafka cluster;
  • How to use Docker-Compose to build Kafka single-node and cluster services with one command;
  • How to use docker-compose down -v to solve the problem of only one partition being created when docker-compose initializes topics.

One. Kafka cluster construction

1. First, run ZooKeeper (this article does not build a ZK cluster):

docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper

2. Create 3 Kafka nodes and register them with ZK:

The different Kafka nodes need only a different broker ID and port number.

Kafka0 :

docker run -d --name kafka0 -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.0.104:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.104:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka

Kafka1 :

docker run -d --name kafka1 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=192.168.0.104:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.104:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -t wurstmeister/kafka

Kafka2 :

docker run -d --name kafka2 -p 9094:9094 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=192.168.0.104:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.104:9094 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 -t wurstmeister/kafka

Note: replace 192.168.0.104 in the commands above with your own host IP.

After starting the 3 Kafka nodes, check whether they started successfully; if all four containers are up, the Kafka cluster is set up.
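A minimal way to check, using the container names from the commands above:

### list running containers; zookeeper, kafka0, kafka1 and kafka2 should all show a status of Up
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

### optionally, look for the broker startup line in one broker's log
docker logs kafka0 2>&1 | grep -i "started"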

3. Create a topic for testing on the Broker 0 node (run inside the kafka0 container):

Create a test topic on Broker 0 with a replication factor of 3 and 5 partitions.

(All partitions of a Kafka topic are spread across different Brokers, so the topic's 5 partitions will be distributed over the 3 Brokers: two Brokers get two partitions each, and the remaining Broker gets only one. This will be verified below.)

cd /opt/kafka_2.12-2.4.0/bin


kafka-topics.sh --create --zookeeper 192.168.0.104:2181 --replication-factor 3 --partitions 5 --topic TestTopic

View the newly created topic information:

kafka-topics.sh --describe --zookeeper 192.168.0.104:2181 --topic TestTopic

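For reference, the describe output looks roughly like the sketch below. Partition 0's line is the one discussed next; the leader/ISR assignments for partitions 1–4 are illustrative only and will differ in your environment:

Topic: TestTopic  PartitionCount: 5  ReplicationFactor: 3  Configs:
    Topic: TestTopic  Partition: 0  Leader: 2  Replicas: 2,0,1  Isr: 2,0,1
    Topic: TestTopic  Partition: 1  Leader: 0  Replicas: 0,1,2  Isr: 0,1,2
    Topic: TestTopic  Partition: 2  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0
    Topic: TestTopic  Partition: 3  Leader: 2  Replicas: 2,1,0  Isr: 2,1,0
    Topic: TestTopic  Partition: 4  Leader: 0  Replicas: 0,2,1  Isr: 0,2,1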
What does this topic information mean?
As mentioned above, the topic's 5 partitions are distributed across the 3 Brokers: two Brokers get two partitions each, and the other Broker gets only one. With that in mind, the output reads as follows.
First, Topic: TestTopic PartitionCount: 5 ReplicationFactor: 3 means that TestTopic has 5 partitions and 3 replicas.
Take the line Topic: TestTopic Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1 as an example:
Leader: 2 means the leader replica of partition 0 of TestTopic is on the node with Broker.id = 2;
Replicas lists the brokers holding replicas of this partition, Broker.id = 2, 0, 1 (both leader and follower replicas, regardless of whether they are alive);
Isr lists the replicas that are alive and in sync with the leader, here Broker.id = 2, 0, 1.
The replication mechanism is not the focus of this section, so it is not covered in detail here; you can read up on it separately.

4. Kafka cluster verification

In the previous step, the topic TestTopic was created on Broker 0. Now open two more terminal windows, enter the Kafka1 and Kafka2 containers respectively, and check whether the topic has been synchronized to them. You will see that both Kafka1 and Kafka2 already have the newly created topic.
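A sketch of how to check, assuming the container names kafka1 and kafka2 and the same Kafka installation path used above:

### describe the topic from inside the kafka1 container (repeat for kafka2)
docker exec -it kafka1 /opt/kafka_2.12-2.4.0/bin/kafka-topics.sh --describe --zookeeper 192.168.0.104:2181 --topic TestTopic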

Next, run a producer on Broker 0, and a consumer on each of Broker 1 and Broker 2:

kafka-console-producer.sh --broker-list 192.168.0.104:9092 --topic TestTopic

kafka-console-consumer.sh --bootstrap-server 192.168.0.104:9093 --topic TestTopic --from-beginning

kafka-console-consumer.sh --bootstrap-server 192.168.0.104:9094 --topic TestTopic --from-beginning

Send a message on Broker 0 and check whether it is received normally on Broker 1 and Broker 2.

Two. Using Docker-Compose to build a Kafka cluster

1. What is Docker-Compose?

Docker-Compose is a tool provided by Docker for defining and managing multiple containers that belong to the same application.

The steps above for building a Kafka cluster in Docker are tedious: first start a ZK container, then create and start multiple Kafka containers one by one with separate commands. With Docker-Compose, all of these services can be started with a single command.

The difference, in short: docker operates on individual containers, while Docker-Compose describes a group of related containers in a single file and manages them together.

2. How to use Docker-Compose

How to create Kafka with Docker-Compose is documented at: https://github.com/wurstmeister/kafka-docker.
(1) Create a directory

First, create a local directory to hold the compose file, and create a new file in it: docker-compose.yml (here I named it docker-compose-kafka-single-broker.yml).

Note: if you run into permission problems, resolve them yourself (e.g. adjust ownership or use sudo).
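For example, using the directory that appears later in this article (adjust the path to taste):

### create a directory for the compose file and an empty yml file
mkdir -p /docker/config/kafka
cd /docker/config/kafka
touch docker-compose-kafka-single-broker.yml
### if you hit permission errors, create the directory with sudo or chown it to your user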

(2) Single Broker node
Let's see how to create a single Broker node, and configure the following in the docker-compose-kafka-single-broker.yml file:

version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.202
      KAFKA_CREATE_TOPICS: TestComposeTopic:2:1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.202:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    container_name: kafka01
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

The meaning of the parameters in the file:

  • version: '3' means the file uses version 3 of the Compose file format;

  • services: indicates the instance service to be enabled;

  • zookeeper, kafka: the name of the started service;

  • image: the image used by docker;

  • container_name: container name after startup;

  • ports: the port mappings exposed to the host;

The parameter information about Kafka is described separately:

  • KAFKA_ADVERTISED_HOST_NAME: the Docker host IP (replace it with your own host IP);
  • KAFKA_CREATE_TOPICS: topics created by default at startup; TestComposeTopic:2:1 means create the topic TestComposeTopic with 2 partitions and 1 replica;
  • KAFKA_ZOOKEEPER_CONNECT: the ZK connection address;
  • KAFKA_BROKER_ID: the Broker ID;
  • KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS must both be set, otherwise clients may not be able to connect properly.

Once configured, start the single-node Kafka with:

docker-compose -f docker-compose-kafka-single-broker.yml up
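Optionally, a small sketch for running it in the background and checking the service status (standard docker-compose flags):

### start in detached mode
docker-compose -f docker-compose-kafka-single-broker.yml up -d

### check the status of the zookeeper and kafka services
docker-compose -f docker-compose-kafka-single-broker.yml ps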

View the started single Broker and its topic information, and verify message sending and receiving:
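A sketch of these checks, assuming the host IP 192.168.1.202 from the compose file, the container name kafka01, and the Kafka scripts being on the container's PATH (otherwise cd into the Kafka bin directory first):

### describe the topic that KAFKA_CREATE_TOPICS created at startup
docker exec -it kafka01 kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic TestComposeTopic

### producer (run in one terminal)
docker exec -it kafka01 kafka-console-producer.sh --broker-list 192.168.1.202:9092 --topic TestComposeTopic

### consumer (run in another terminal)
docker exec -it kafka01 kafka-console-consumer.sh --bootstrap-server 192.168.1.202:9092 --topic TestComposeTopic --from-beginning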

(3) Broker cluster

Above, docker-compose was used to build a single-node Kafka Broker. Now let's see how to build a Kafka cluster.

First create a new file in the same directory: docker-compose-kafka-cluster.yml, with the following contents:

version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"

  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.202
      KAFKA_CREATE_TOPICS: TestComposeTopic:4:3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.202:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    container_name: kafka01
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.202
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.202:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
    container_name: kafka02
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  kafka3:
    image: wurstmeister/kafka
    ports:
      - "9094:9094"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.202
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.202:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
    container_name: kafka03
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Execute the script:

docker-compose -f docker-compose-kafka-cluster.yml up

You can see that all three Brokers start successfully.

Enter the 3 containers to view the topic information; one way to do this is shown below.
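A sketch run from the host, assuming the container names kafka01/kafka02/kafka03 and the Kafka scripts being on the container PATH:

### describe the auto-created topic from inside each broker container
for c in kafka01 kafka02 kafka03; do
  docker exec $c kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic TestComposeTopic
done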

Message sending verification:

Start a producer on one Broker (kafka01) and a consumer on each of the other two Brokers (kafka02 and kafka03) to verify message sending and receiving:
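A sketch of the commands, assuming the host IP 192.168.1.202 and the ports mapped in the compose file:

### producer on kafka01
docker exec -it kafka01 kafka-console-producer.sh --broker-list 192.168.1.202:9092 --topic TestComposeTopic

### consumer on kafka02
docker exec -it kafka02 kafka-console-consumer.sh --bootstrap-server 192.168.1.202:9093 --topic TestComposeTopic --from-beginning

### consumer on kafka03
docker exec -it kafka03 kafka-console-consumer.sh --bootstrap-server 192.168.1.202:9094 --topic TestComposeTopic --from-beginning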

Three. Troubleshooting notes

1. The KAFKA_CREATE_TOPICS parameter fails to create a topic with multiple partitions

The parameter KAFKA_CREATE_TOPICS: TestComposeTopic:2:1 in the configuration file is intended to create the topic TestComposeTopic with 2 partitions and 1 replica, but after actually running it, the topic is created successfully while the partition count is still 1.

I flipped through a few pages of similar problems on Baidu without finding a solution (Baidu really fell short when it came to actually solving the problem), then searched on Google and found the issue on the first page of results: Can't create a topic with multiple partitions using KAFKA_CREATE_TOPICS #490. Others had run into it before, and a solution was posted in the thread: it can be fixed with docker-compose down -v.

So, I quickly tried:

Enter the directory where docker-compose.yml is located. Since I am not using the default docker-compose.yml file, I need to add the parameter -f to specify the file I wrote:

cd  /docker/config/kafka

### run this command to fix the problem of only one partition being created
docker-compose -f docker-compose-kafka-single-broker.yml down -v

### restart
docker-compose -f docker-compose-kafka-single-broker.yml up

After re-running docker-compose up, check the topic information again: the topic now has 2 partitions as expected.
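For example, with the single-broker setup from above:

### the topic should now show PartitionCount: 2
docker exec -it kafka01 kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic TestComposeTopic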

The docker-compose down -v command is described as follows:

Stops containers and removes containers, networks, volumes, and images
created by up.

By default, the only things removed are:

  • Containers for services defined in the Compose file
  • Networks defined in the networks section of the Compose file
  • The default network, if one is used

Networks and volumes defined as external are never removed.

Why does this command help? It may be because a previous docker-compose run had already created the topic with the default 1 partition and 1 replica, and that old state was being reused until the volumes were removed. I also asked about it under that issue, hoping to get an answer. If you know the reason, feel free to leave a comment below the article; I would be very grateful.

(Link: https://github.com/wurstmeister/kafka-docker/issues/490)


Original article: https://blog.csdn.net/noaman_wgs/article/details/103757791