Cluster Series (4): Building a Kafka Cluster

    I previously wrote an article about publish-subscribe messaging with Kafka, but it covered only a single server, which was not comprehensive enough. This time I will walk through building a Kafka cluster and publishing and subscribing to messages on it. I hope you like it.

    1. Environment: CentOS 7 virtual machines. Before starting, confirm that the JDK and Zookeeper are installed (see the earlier Zookeeper cluster article: https://my.oschina.net/u/3747963/blog/1635507); you can clone an already-configured virtual machine to set this up. I reuse the same servers as before, so the cluster has 3 nodes.

    2. Environment configuration (extracting the Kafka archive is not covered here)

    (1) Enter the config directory and modify the content of the server.properties file as follows:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=slave01

# (middle section omitted; the defaults are fine)
############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=slave01:2181,slave02:2181,slave03:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

    The versions you find online differ slightly, but they all amount to the same thing. Only a few settings really need to change: broker.id and zookeeper.connect. Note: keep broker.id consistent with the value in the myid file of the corresponding Zookeeper node. One more file must be changed by hand (most people overlook this, and it causes an error later): the meta.properties file under the configured log.dirs directory. If you leave it alone, the following error is reported:

FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
    kafka.common.InconsistentBrokerIdException: Configured brokerId 2 doesn't match stored brokerId 1 in meta.properties
            at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:630)
            at kafka.server.KafkaServer.startup(KafkaServer.scala:175)
            at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:99)
            at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:45)

    To sum up, changing broker.id requires updating two files:

    the server.properties file;

    the meta.properties file;
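    The meta.properties fix can be sketched as follows. The /tmp path here merely stands in for whatever directory your log.dirs setting points at (an assumption for this sketch), and the sample file contents are illustrative:

```shell
# Simulate a broker data directory whose meta.properties still holds an old id.
# In a real cluster this file lives under the directory configured in log.dirs.
LOGDIR=/tmp/kafka-logs-demo
mkdir -p "$LOGDIR"
printf 'version=0\nbroker.id=1\n' > "$LOGDIR/meta.properties"

# Rewrite the stored broker.id so it matches server.properties (here: 2).
sed -i 's/^broker\.id=.*/broker.id=2/' "$LOGDIR/meta.properties"
cat "$LOGDIR/meta.properties"
```

    Simply deleting meta.properties also works: the broker writes a fresh one, with the configured id, on its next start.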

    (2) Copy and modify the configuration

    You can place the Kafka installation directory on the other two nodes with scp. Note that you must then modify the per-node values broker.id and host.name; everything else stays the same.
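    A sketch of the copy-and-adjust step. The install path and username are assumptions; to keep the snippet self-contained and runnable, the scp commands are shown as comments and the sed edits are demonstrated on a locally created sample file:

```shell
# First copy the installation directory to the other nodes, e.g.:
#   scp -r /opt/kafka hadoop@slave02:/opt/
#   scp -r /opt/kafka hadoop@slave03:/opt/

# Then, on each node, fix the per-node values. Demonstrated on a sample file:
mkdir -p /tmp/kafka-demo/config
printf 'broker.id=0\nport=9092\nhost.name=slave01\n' \
    > /tmp/kafka-demo/config/server.properties

# On slave02 the file should carry broker.id=1 and host.name=slave02:
sed -i -e 's/^broker\.id=.*/broker.id=1/' \
       -e 's/^host\.name=.*/host.name=slave02/' \
       /tmp/kafka-demo/config/server.properties
cat /tmp/kafka-demo/config/server.properties
```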

    (3) Start the Kafka cluster

    Make sure to turn off the firewall before starting (see previous cluster configuration for details).
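    On CentOS 7 the default firewall is firewalld, so turning it off looks like this (requires root; shown here for reference only):

```shell
# Stop the firewall now and keep it from starting again on reboot.
systemctl stop firewalld
systemctl disable firewalld
```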

    On all three nodes, execute the following commands:

./bin/kafka-server-start.sh config/server.properties 

    If nothing goes wrong, each broker starts successfully.

    (4) Create topic 

    On the slave01 node, execute the following command to create a topic:

bin/kafka-topics.sh --create --topic kafkatopictest --replication-factor 3 --partitions 2 --zookeeper slave01:2181

    On success, the command prints:

Created topic "kafkatopictest"
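    To confirm that the 2 partitions and 3 replicas were spread across the brokers, the topic can be inspected with --describe (same old-style --zookeeper flag as above; the exact leader/replica assignment will vary with your cluster):

```shell
bin/kafka-topics.sh --describe --topic kafkatopictest --zookeeper slave01:2181
```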

    (5) Send a message to Kafka

    Execute the following command on the slave02 node to send a message to Kafka:

[hadoop@slave02 bin]$ ./kafka-console-producer.sh --broker-list slave02:9092 --sync --topic kafkatopictest
>hello world, time to get off work, everyone!

    (6) Receive messages sent by Kafka

    Execute the following command on the slave03 node to receive the messages sent by Kafka:

[hadoop@slave03 bin]$ ./kafka-console-consumer.sh --zookeeper slave01:2181 --topic kafkatopictest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello world, time to get off work, everyone!
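    As the deprecation warning above says, newer Kafka versions consume through a broker rather than through Zookeeper; the equivalent new-consumer invocation would be (host and port as configured earlier):

```shell
./kafka-console-consumer.sh --bootstrap-server slave02:9092 --topic kafkatopictest --from-beginning
```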

    Well, that completes the Kafka cluster setup and testing.
