Detailed steps to build a Kafka cluster


1. Installing Kafka requires a Java environment. CentOS 7 comes with Java 1.6, which you can use directly without reinstalling a JDK; if you think that JDK version is too old, you can install a newer one yourself.

2. Download the Kafka installation package from the official download page:
http://kafka.apache.org/downloads.html

3. After downloading the Kafka package, extract it into the /usr/local directory and delete the compressed archive.
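Step 3 can be sketched as the following commands. The archive name is an example only (substitute the version you actually downloaded), and renaming the extracted directory to /usr/local/kafka is an assumption that matches the paths used later in this post:

```shell
# Extract the downloaded Kafka archive into /usr/local, rename the
# versioned directory to "kafka", and remove the archive afterwards.
# KAFKA_TGZ and DEST are placeholders; adjust to your download.
KAFKA_TGZ="${KAFKA_TGZ:-kafka_2.11-0.10.2.1.tgz}"
DEST="${DEST:-/usr/local}"
tar -xzf "$KAFKA_TGZ" -C "$DEST"
# Rename to the /usr/local/kafka path referenced in the steps below
mv "$DEST/${KAFKA_TGZ%.tgz}" "$DEST/kafka"
rm -f "$KAFKA_TGZ"
```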

4. This walkthrough builds a three-node Kafka cluster on the servers 10.10.67.102, 10.10.67.104, and 10.10.67.106.

5. Review the configuration files: enter the config directory of Kafka.

6. First set up the ZooKeeper cluster. Use the ZooKeeper bundled with Kafka, and modify the zookeeper.properties file.

The zookeeper.properties file is identical on all three machines. Note that the directories used for the data and logs are not created automatically; you must create them by hand. I added the dataLogDir setting myself: there are many log files, so keeping the transaction logs separate from the snapshot data makes them easier to manage.
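The original screenshot of the file is lost; a minimal zookeeper.properties consistent with the description above might look like the following (the paths and ports are assumptions, not the author's exact values):

```properties
# zookeeper.properties -- identical on all three nodes.
# dataDir and dataLogDir must be created by hand; they are not auto-created.
dataDir=/usr/local/kafka/zookeeper
dataLogDir=/usr/local/kafka/zookeeper/log
clientPort=2181
tickTime=2000
initLimit=10
syncLimit=5
# One line per ensemble member: server.<myid>=<host>:<peer-port>:<election-port>
server.1=10.10.67.102:2888:3888
server.2=10.10.67.104:2888:3888
server.3=10.10.67.106:2888:3888
```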

7. Create a myid file: enter /usr/local/kafka/zookeeper and create a file named myid on each of the three servers, containing 1, 2, and 3 respectively.

--myid is the identifier that the ZooKeeper ensemble members use to discover one another; the file must exist on every node and each value must be unique.
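Step 7 amounts to the following on each node (the directory is assumed to match dataDir in zookeeper.properties):

```shell
# Create the ZooKeeper data directory and write this node's myid.
# Write 1 on the first server, 2 on the second, 3 on the third.
ZK_DATA_DIR="${ZK_DATA_DIR:-/usr/local/kafka/zookeeper}"
mkdir -p "$ZK_DATA_DIR"
echo 1 > "$ZK_DATA_DIR/myid"   # use 2 or 3 on the other nodes
```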

8. Enter the Kafka directory and start ZooKeeper:
./bin/zookeeper-server-start.sh config/zookeeper.properties &
Run this command on all three machines, then check the ZooKeeper log files; if no errors are reported, the ZooKeeper cluster has started successfully.
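Besides reading the logs, you can probe each member with ZooKeeper's built-in "ruok" four-letter command (this requires nc and a running ensemble, so it is only a sketch here):

```shell
# A node that is serving replies "imok"; no reply means it is down.
echo ruok | nc 10.10.67.102 2181
echo ruok | nc 10.10.67.104 2181
echo ruok | nc 10.10.67.106 2181
```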

9. Build the Kafka cluster by modifying the server.properties configuration file.

The changes to server.properties are mainly at the beginning and the end of the file; the middle can keep the default configuration. Two points to note: broker.id must be set to a different value on each of the three nodes (configure them as 0, 1, and 2), and the directory named in log.dirs must already exist, because it is not created automatically from the configuration file.
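Since the screenshots are missing, here is a sketch of the settings described above as they might appear on the first node (the listener address and log directory are assumptions):

```properties
# server.properties on 10.10.67.102.
# The other two nodes differ only in broker.id (1 and 2)
# and in their own listener address.
broker.id=0
listeners=PLAINTEXT://10.10.67.102:9092
# This directory must already exist; Kafka will not create it.
log.dirs=/usr/local/kafka/kafka-logs
zookeeper.connect=10.10.67.102:2181,10.10.67.104:2181,10.10.67.106:2181
```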

10. Start the Kafka cluster: enter the Kafka directory and run the following command on all three nodes:
./bin/kafka-server-start.sh -daemon config/server.properties
(the -daemon flag already runs the broker in the background, so the trailing & in the original command is unnecessary). If no errors are reported, the cluster has been built successfully; you can then produce and consume messages to confirm it.
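A quick smoke test, using the 0.10.x-era CLI in which the topic tools talk to ZooKeeper (the topic name "test" is just an example; this needs the live cluster, so it cannot be run standalone):

```shell
# Create a replicated topic, then list topics to confirm it exists.
./bin/kafka-topics.sh --create --zookeeper 10.10.67.102:2181 \
    --replication-factor 3 --partitions 3 --topic test
./bin/kafka-topics.sh --list --zookeeper 10.10.67.102:2181
```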

11. For how to produce and consume messages, see the next post:
http://blog.csdn.net/zxy987872674/article/details/72493128
