Building a Kafka cluster on a Mac

We set up a single-node Kafka instance earlier; today we will build a Kafka cluster.

Set up multiple brokers

First, create a configuration file for each new broker:

cp config/server.properties config/server-1.properties 
cp config/server.properties config/server-2.properties

Then edit the new files so that each broker gets its own id, port, and log directory; otherwise the brokers will conflict with each other:

config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dirs=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dirs=/tmp/kafka-logs-2

broker.id is the unique and permanent name of each node in the cluster.

We change the port and log directory because all the brokers run on the same machine; otherwise they would try to register on the same port and overwrite each other's data.
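
If you would rather script these edits than open the files by hand, a sed sketch like the one below should work with macOS (BSD) sed; it assumes the copied files still contain the stock broker.id=0, commented-out #listeners, and log.dirs=/tmp/kafka-logs lines from the default server.properties:

sed -i '' -e 's|^broker.id=.*|broker.id=1|' \
    -e 's|^#\{0,1\}listeners=PLAINTEXT://.*|listeners=PLAINTEXT://:9093|' \
    -e 's|^log.dirs=.*|log.dirs=/tmp/kafka-logs-1|' config/server-1.properties
sed -i '' -e 's|^broker.id=.*|broker.id=2|' \
    -e 's|^#\{0,1\}listeners=PLAINTEXT://.*|listeners=PLAINTEXT://:9094|' \
    -e 's|^log.dirs=.*|log.dirs=/tmp/kafka-logs-2|' config/server-2.properties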

Start the nodes

Start ZooKeeper:

bin/zkServer.sh start
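
If you want to confirm ZooKeeper actually came up before starting the brokers, zkServer.sh also has a status subcommand:

bin/zkServer.sh status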

Then start the three Kafka brokers. The unmodified server.properties is broker 0, listening on port 9092:

bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
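
Each of these commands runs in the foreground, so start each broker in its own terminal window, or pass the -daemon flag that kafka-server-start.sh accepts to run them in the background:

bin/kafka-server-start.sh -daemon config/server.properties
bin/kafka-server-start.sh -daemon config/server-1.properties
bin/kafka-server-start.sh -daemon config/server-2.properties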

Create topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic test1

This sets the replication factor to 3, so the partition is replicated on all three brokers.
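
You can check that the topic was created by listing the topics registered in ZooKeeper:

bin/kafka-topics.sh --list --zookeeper localhost:2181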

We now have a cluster. Let's take a look at what each broker is doing.

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
Topic:test1    PartitionCount:1    ReplicationFactor:3    Configs:
Topic: test1    Partition: 0    Leader: 1    Replicas: 1,2,0    Isr: 1,2,0

The first line is a summary of all partitions; each following line describes one partition.

Because we created the topic with only one partition, there is only one such line.

Leader : the node responsible for all reads and writes for the partition. Each node becomes the leader for a randomly selected share of the partitions.

Replicas : the list of nodes that replicate this partition, shown regardless of whether a node is the leader or even currently alive.

Isr : the set of in-sync replicas, that is, the replicas that are alive and caught up with the leader.

Create a producer and a consumer

Producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1

Consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --from-beginning

At this point, whatever you type in the producer window shows up in the consumer window.

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1
>1
>test1
>love
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --from-beginning
1
test1
love
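
One note that matters for the fault-tolerance test below: with only localhost:9092 given, the console clients bootstrap through broker 0 alone. You can pass all three brokers instead, so the clients can still connect if any single broker is down, for example:

bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic test1
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic test1 --from-beginning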

Test cluster fault tolerance

Next, we kill the leader. From the describe output above, we know that the leader is broker 1.

ps | grep server-1.properties

46987 ttys002    0:33.99 /usr/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX.........
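
If ps does not show the broker (for example because it was started from a different terminal session), jps from the JDK is an alternative; its -m option prints the arguments passed to the main method, which include the properties file:

jps -m | grep server-1.properties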

kill -9 46987

One of the remaining two replicas becomes the new leader, and broker 1 drops out of the in-sync replica set.

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
Topic:test1	PartitionCount:1	ReplicationFactor:3	Configs:
Topic: test1	Partition: 0	Leader: 2	Replicas: 1,2,0	Isr: 2,0

Now start the consumer again, and you will see that the messages already in the topic have not been lost.

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic test1
1
test1
love
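
If you start broker 1 again, it will catch up and rejoin the in-sync replica set (leadership does not necessarily move back to it right away), which you can confirm with another describe:

bin/kafka-server-start.sh -daemon config/server-1.properties
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1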

At this point, a simple Kafka cluster is up and running!

Tip : when ZooKeeper or Kafka reports an error, look in the corresponding .out log file for the cause. I did not know this at first; when I actually hit an error, I searched online answer by answer, hoping to get lucky, and it cost me a lot of time.

Origin blog.csdn.net/weixin_43589025/article/details/116304151