CentOS 7 Kafka standalone message publish-subscribe

    Recently I have been learning about big data, covering Spark, Spark Streaming, Scala, Kafka, and more. Overall, I find big data very interesting and widely applicable, both now and in the future. This post walks through Kafka message publish-subscribe.

    (1) Basic environmental preparation

    I installed CentOS 7 in a virtual machine (installation steps skipped here). The required software is the JDK, Zookeeper, and Kafka.

    (2) Environment setup (if you are not logged in as root, prefix the following operations with sudo)

    1. JDK installation and environment configuration (preferably JDK 8 or above; skipped here)

    2. Zookeeper installation and environment configuration

    (1) Unzip and move to another directory

# Unpack Zookeeper and rename the directory
sudo tar -zxvf zookeeper-3.3.6.tar.gz
sudo mv zookeeper-3.3.6 zookeeper
# Move zookeeper to /usr/local/ (or any directory you prefer)
sudo mv zookeeper /usr/local

    (2) Edit the Zookeeper configuration file

# Copy zoo_sample.cfg to a new file named zoo.cfg
sudo cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg
# Edit the zoo.cfg file
sudo vim /usr/local/zookeeper/conf/zoo.cfg
# Mainly modify these two entries: dataDir and server.1=127.0.0.1:2888:3888
# the directory where the snapshot is stored.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
server.1=127.0.0.1:2888:3888
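
    Note that zoo.cfg declares a server.1 entry; depending on the ZooKeeper version this may require a myid file under dataDir whose content matches the server number, and creating it (plus the data directory itself) up front is harmless in standalone mode anyway. A minimal sketch, using the dataDir value above:

# Create the data directory and a myid file matching server.1
sudo mkdir -p /usr/local/zookeeper/data
echo 1 | sudo tee /usr/local/zookeeper/data/myid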

     For the meaning of the parameters in the above configuration, see: https://www.linuxidc.com/Linux/2017-06/144950.htm

    (3) Configure Zookeeper environment variables

sudo vim /etc/profile
# Add the following entries
JAVA_HOME=/usr/java/jdk1.8.0_161
JRE_HOME=/usr/java/jdk1.8.0_161/jre
SCALA_HOME=/usr/local/scala
ZOOKEEPER_HOME=/usr/local/zookeeper
KAFKA_HOME=/usr/local/kafka
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$SCALA_HOME/bin:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME SCALA_HOME ZOOKEEPER_HOME KAFKA_HOME PATH CLASSPATH

    Note that after the configuration is complete, you must run source /etc/profile, otherwise the changes will not take effect.
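
    For example:

# Reload the profile in the current shell and check the variables
source /etc/profile
echo $ZOOKEEPER_HOME   # should print /usr/local/zookeeper
echo $KAFKA_HOME       # should print /usr/local/kafka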

    (4) Start Zookeeper

# cd to the zookeeper/bin directory
./zkServer.sh start

    The startup is successful as follows:

ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
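
    You can also confirm that the server is running with the status subcommand:

# Check the server status from the zookeeper/bin directory
./zkServer.sh status
# A healthy standalone instance reports a line such as: Mode: standalone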

    Stopping Zookeeper works the same way; just change the argument:

# cd to the zookeeper/bin directory
./zkServer.sh stop

     3. Kafka installation and environment configuration

    (1) Unzip and move to another directory

# Unpack and rename to kafka
sudo tar -zxvf kafka_2.12-1.0.0.tgz
sudo mv kafka_2.12-1.0.0 kafka
# Move to the /usr/local/ directory
sudo mv kafka /usr/local

    (2) Edit Kafka's configuration file

# Create the directory where Kafka will store its data
cd /usr/local/kafka
mkdir logs
# Edit the configuration file /usr/local/kafka/config/server.properties
sudo vim /usr/local/kafka/config/server.properties
# Mainly modify the following entries:
broker.id=0
delete.topic.enable=true
listeners=PLAINTEXT://127.0.0.1:9092
log.dirs=/usr/local/kafka/logs/
zookeeper.connect=127.0.0.1:2181
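
    Note that log.dirs is where Kafka keeps its message data (log segments), not application logs. Before starting Kafka you can optionally check that Zookeeper is reachable; a quick sketch, assuming nc (netcat) is installed:

# Send Zookeeper's "ruok" four-letter command; a healthy server replies "imok"
echo ruok | nc 127.0.0.1 2181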

    For the meaning of the parameters in the above configuration, see: https://www.cnblogs.com/wangb0402/p/6187503.html

    (3) Configure Kafka environment variables: see the Zookeeper configuration above for details

    (4) Start Kafka

# cd to the kafka/bin directory
./kafka-server-start.sh /usr/local/kafka/config/server.properties
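
    This runs Kafka in the foreground and ties up the terminal. To run it in the background instead, the start script accepts a -daemon flag:

# Start Kafka as a background daemon
./kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties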

    (3) Kafka message publish-subscribe: open four terminals (one each for Zookeeper, the Kafka server, the producer, and the consumer)

    (1) Create a Topic

# In the kafka/bin directory
./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test

     The display is as follows:

[hadoop@bogon bin]$ ./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
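
    You can verify that the topic exists with --list, or inspect its partition and replica assignments with --describe:

# List all topics registered in Zookeeper
./kafka-topics.sh --list --zookeeper 127.0.0.1:2181
# Show partition and replica details for the test topic
./kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic test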

    If the test topic already exists, you can delete it with --delete:

# Delete the unwanted topic (delete.topic.enable=true was already set in the config above)
./kafka-topics.sh --delete --zookeeper localhost:2181 --topic test

    (2) The producer sends messages to the Topic

./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test

     The display is as follows:

[hadoop@bogon bin]$ ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test
>producer send message
>hello kafka
>hello world
>spark
>heihei
>send everything for people
>
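
    Each line typed at the > prompt is sent as one message. The console producer can also read from stdin non-interactively, which is handy in scripts; a small sketch (the message text is just an example):

# Send a single message by piping it into the console producer
echo "one-shot message" | ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test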

    (3) The consumer reads messages from the Topic

./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning

    The display is as follows. Note that the warning message at the beginning does not affect use, and the messages may take a moment to appear:

[hadoop@bogon bin]$ ./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
producer send message
hello kafka
hello world
spark
heihei
send everything for people
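
    As the warning says, the Zookeeper-based console consumer is deprecated in this Kafka version. The same messages can be read with the new consumer by pointing it at the broker instead of Zookeeper:

# New consumer: connect to the broker directly via --bootstrap-server
./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning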

    
