Kafka cluster deployment

#Environment information

server1 IP: 172.17.0.2

server2 IP: 172.17.0.3

server3 IP: 172.17.0.4

 

#install jre

The JRE must be installed on all three machines.

Download address: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html

$ mkdir /usr/local/java

$ tar -xvf jre-8u161-linux-x64.tar -C /usr/local/java/

#Set environment variables

$ vim /etc/profile

export JAVA_HOME=/usr/local/java/jre1.8.0_161

export PATH=$PATH:$JAVA_HOME/bin

#Make the configuration take effect

$ source /etc/profile
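After sourcing the profile, it is worth confirming that the JRE actually landed on the PATH before moving on. A minimal verification sketch, assuming the install path used above:

```shell
# Same exports as written to /etc/profile above
export JAVA_HOME=/usr/local/java/jre1.8.0_161
export PATH=$PATH:$JAVA_HOME/bin

# Confirm the JRE bin directory is now on PATH
echo "$PATH" | grep -q "$JAVA_HOME/bin" && echo "JRE on PATH"

# On a real host, also confirm the runtime itself responds: java -version
```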

 

 

#install cluster

#For ease of testing, use the ZooKeeper bundled with Kafka

Kafka download address: http://kafka.apache.org/downloads

 

#Extract Kafka to /usr/local/kafka
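The extraction step above can be sketched as follows. The archive name kafka_2.11-1.0.0.tgz is an assumption (substitute whatever version you downloaded), and a temp directory stands in for /usr/local here so the sketch is safe to run anywhere:

```shell
dest=$(mktemp -d)                                   # stand-in for /usr/local

# Mock a downloaded archive with the upstream layout (on a real host you
# would already have the .tgz from the download page)
mkdir -p "$dest/src/kafka_2.11-1.0.0/bin"
tar -czf "$dest/kafka.tgz" -C "$dest/src" kafka_2.11-1.0.0

tar -xzf "$dest/kafka.tgz" -C "$dest"               # the actual extraction step
mv "$dest/kafka_2.11-1.0.0" "$dest/kafka"           # rename to the plain "kafka" path
ls -d "$dest/kafka"
```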

 

#Edit /etc/profile and add the Kafka path

$ vim /etc/profile

export KAFKA_HOME=/usr/local/kafka

export PATH=$PATH:$KAFKA_HOME/bin

 

#Make the configuration take effect immediately

$ source /etc/profile

 

#Edit the broker configuration

$ cd /usr/local/kafka/config/

$ vim server.properties

1. Modify broker.id

broker.id=0 #broker.id must be unique across the cluster

2. Modify listeners

listeners=PLAINTEXT://172.17.0.2:9092 #use this host's own IP (server1 here); this line must differ on each broker

 

log.dirs=/tmp/kafka-logs #Where Kafka stores its data. Multiple directories may be given, separated by commas; spreading them across different disks improves read/write performance.

log.retention.hours=168 #How long data files are retained. Data older than this is cleaned up according to log.cleanup.policy.

 

3. Modify zookeeper.connect

zookeeper.connect=172.17.0.2:2181,172.17.0.3:2181,172.17.0.4:2181

 

#Note the auto.create.topics.enable setting. When true, a producer writing to a nonexistent topic creates it automatically; when false, the topic must be created in advance, otherwise the producer fails with: failed after 3 retries.

 

#Copy the configured Kafka directory from server1 to server2 and server3

$ scp -rp kafka user@172.17.0.3:/usr/local/ #replace "user" with your login on server2

$ scp -rp kafka user@172.17.0.4:/usr/local/ #replace "user" with your login on server3

 

#On server2, change broker.id to 1; on server3, change it to 2. Also change listeners on each host to its own IP.
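The per-host edits can be scripted with sed rather than done by hand in vim. A sketch for server2, demonstrated on a temp copy of the config (on the real host the target would be /usr/local/kafka/config/server.properties):

```shell
cfg=$(mktemp)                                   # stand-in for server.properties

# Start from the lines as copied over from server1
printf 'broker.id=0\nlisteners=PLAINTEXT://172.17.0.2:9092\n' > "$cfg"

sed -i 's/^broker\.id=.*/broker.id=1/' "$cfg"                          # server2 gets id 1
sed -i 's|^listeners=.*|listeners=PLAINTEXT://172.17.0.3:9092|' "$cfg" # server2's own IP
cat "$cfg"
```

On server3 the same two edits would use id 2 and 172.17.0.4.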

 

#Start ZooKeeper and Kafka on server1, server2, and server3

#start zookeeper

$ bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &

#stop zookeeper

$ bin/zookeeper-server-stop.sh

 

#start kafka

$ bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

#stop kafka

$ bin/kafka-server-stop.sh

 

#test cluster

Create a topic named test on the server1 host

$ bin/kafka-topics.sh --create --zookeeper 172.17.0.2:2181 --replication-factor 1 --partitions 1 --topic test

 

Start consumers on the server2 and server3 hosts

$ bin/kafka-console-consumer.sh --zookeeper 172.17.0.3:2181 --topic test --from-beginning

$ bin/kafka-console-consumer.sh --zookeeper 172.17.0.4:2181 --topic test --from-beginning

 

#Create a producer on the server1 host

$ bin/kafka-console-producer.sh --broker-list 172.17.0.2:9092 --topic test

 

 

#Enter messages in the producer terminal on server1, then check the consumer terminals on server2 and server3 to see them arrive

 

 

#delete topic

Delete the topic's directories under the Kafka storage path (the log.dirs setting in server.properties, default /tmp/kafka-logs)
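The manual cleanup described above amounts to removing every partition directory named after the topic. A sketch using a mock log.dirs layout, so it is safe to run anywhere (a temp dir stands in for /tmp/kafka-logs):

```shell
logdir=$(mktemp -d)                          # stand-in for /tmp/kafka-logs

# Mock partition directories for a topic named "test" with two partitions
mkdir -p "$logdir/test-0" "$logdir/test-1"

rm -rf "$logdir"/test-*                      # remove every partition dir of the topic
ls -A "$logdir"                              # prints nothing: topic data is gone
```

Note this only removes the on-disk data; the brokers should be stopped first, since they still hold the topic's metadata in ZooKeeper.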

 

 

 

Question 1:

WARN [Consumer clientId=consumer-1, groupId=console-consumer-950] Connection to node -1 could not be established. Broker may not be available.

Solution:

This happens when the address the client requests does not match the PLAINTEXT listener in the configuration file. For example, the configuration file has listeners=PLAINTEXT://172.17.0.2:9092, but the request was ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1 --from-beginning

The correct command is ./kafka-console-consumer.sh --bootstrap-server 172.17.0.2:9092 --topic topic1 --from-beginning
