Linux in practice - Kafka cluster installation and deployment

Introduction

Kafka is a distributed, decentralized, high-throughput, low-latency, publish-subscribe message queuing system.

Like RabbitMQ, Kafka is a message queue. However, RabbitMQ is mostly used in back-end systems because it focuses more on message latency and fault tolerance.

Kafka is mostly used in big data systems because it focuses more on data throughput.

Kafka usually runs in distributed (cluster) mode. Here, three servers will be used to complete the installation and deployment of the Kafka cluster.
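The steps below assume the three machines can resolve one another by the hostnames node1, node2, and node3. A minimal sketch of the /etc/hosts entries that provide this (the IP addresses are placeholders; substitute your own):

    # /etc/hosts on all three servers -- hypothetical addresses, replace with your own
    192.168.1.101   node1
    192.168.1.102   node2
    192.168.1.103   node3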

Install

  1. Make sure the JDK and Zookeeper services are installed and deployed

    JDK configuration

    cluster preparation

    Zookeeper installation and deployment

    Kafka depends on the JDK environment and on Zookeeper, so please make sure both are already installed before proceeding.
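
    A quick sanity check for both prerequisites on each node (Zookeeper's process shows up as QuorumPeerMain in jps):

    # Confirm the JDK is available
    java -version
    # Confirm Zookeeper is running
    jps | grep QuorumPeerMain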

  2. [Operation at node1] Download the Kafka installation package

    # Download the installation package
    wget http://archive.apache.org/dist/kafka/2.4.1/kafka_2.12-2.4.1.tgz
    
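    Optionally, verify the download against the SHA-512 digest that Apache publishes alongside the archive, comparing the two values by eye:

    # Print the local digest, then fetch the published one for comparison
    sha512sum kafka_2.12-2.4.1.tgz
    wget http://archive.apache.org/dist/kafka/2.4.1/kafka_2.12-2.4.1.tgz.sha512
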
  3. [Operation at node1] Decompress the archive

    mkdir -p /export/server			# create this folder first if it does not exist
    
    # Decompress the archive
    tar -zxvf kafka_2.12-2.4.1.tgz -C /export/server/
    
    # Create a soft link
    ln -s /export/server/kafka_2.12-2.4.1 /export/server/kafka
    
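    The soft link gives the remaining steps a stable /export/server/kafka path, so a later upgrade only means repointing the link. Confirm it with:

    # The link should point at the versioned directory
    ls -l /export/server/kafka
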
  4. [Operation at node1] Modify the server.properties file in the config directory of the Kafka installation

    cd /export/server/kafka/config
    
    vim server.properties
    # Set the broker id
    broker.id=1
    # Set the address Kafka binds to and listens on
    listeners=PLAINTEXT://node1:9092
    # Set the location of Kafka's data
    log.dirs=/export/server/kafka/data
    # Point to the three Zookeeper nodes
    zookeeper.connect=node1:2181,node2:2181,node3:2181
    
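    A quick way to confirm the four keys took effect (the pattern simply matches the lines edited above):

    grep -E '^(broker.id|listeners|log.dirs|zookeeper.connect)=' /export/server/kafka/config/server.properties
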
  5. [Operation at node1] Copy node1's Kafka directory to node2 and node3

    cd /export/server
    
    # Copy to the folder of the same name on node2
    scp -r kafka_2.12-2.4.1 node2:$PWD
    # Copy to the folder of the same name on node3
    scp -r kafka_2.12-2.4.1 node3:$PWD
    
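    These commands assume node1 can reach node2 and node3 over SSH; without passwordless login, each copy prompts for a password. Optionally distribute a key first:

    # Optional: enable passwordless SSH from node1
    ssh-copy-id node2
    ssh-copy-id node3
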
  6. [Operation at node2] Create the soft link and modify server.properties

    # Create a soft link
    ln -s /export/server/kafka_2.12-2.4.1 /export/server/kafka
    
    cd /export/server/kafka/config
    vim server.properties
    # Set the broker id
    broker.id=2
    # Set the address Kafka binds to and listens on
    listeners=PLAINTEXT://node2:9092
    # Set the location of Kafka's data
    log.dirs=/export/server/kafka/data
    # Point to the three Zookeeper nodes
    zookeeper.connect=node1:2181,node2:2181,node3:2181
    
  7. [Operation at node3] Create the soft link and modify server.properties

    # Create a soft link
    ln -s /export/server/kafka_2.12-2.4.1 /export/server/kafka
    
    cd /export/server/kafka/config
    vim server.properties
    # Set the broker id
    broker.id=3
    # Set the address Kafka binds to and listens on
    listeners=PLAINTEXT://node3:9092
    # Set the location of Kafka's data
    log.dirs=/export/server/kafka/data
    # Point to the three Zookeeper nodes
    zookeeper.connect=node1:2181,node2:2181,node3:2181
    
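    Since only broker.id and listeners differ between the three nodes, the edits in steps 6 and 7 could also be scripted instead of made in vim; a sketch for node2 (node3 is analogous):

    # Run on node2: rewrite the two node-specific keys copied over from node1
    sed -i 's/^broker.id=1/broker.id=2/' /export/server/kafka/config/server.properties
    sed -i 's#^listeners=PLAINTEXT://node1:9092#listeners=PLAINTEXT://node2:9092#' /export/server/kafka/config/server.properties
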
  8. Start Kafka

    # Make sure Zookeeper has been started first
    
    # Option 1: [foreground start] run the following on node1, node2 and node3
    /export/server/kafka/bin/kafka-server-start.sh /export/server/kafka/config/server.properties
    
    # Option 2: [background start] run the following on node1, node2 and node3
    nohup /export/server/kafka/bin/kafka-server-start.sh /export/server/kafka/config/server.properties >> /export/server/kafka/kafka-server.log 2>&1 &
    
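    Each node also ships a matching stop script for shutting the broker down later:

    # Stop the Kafka broker on the current node
    /export/server/kafka/bin/kafka-server-stop.sh
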
  9. Verify that Kafka has started (the output just needs to contain a Kafka process)

    # Run on every server
    jps
    # or, to filter:
    jps | grep Kafka
    
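    For a deeper check, the Zookeeper shell bundled with Kafka can list the registered broker ids, which should come back as [1, 2, 3]:

    # Run from any node
    /export/server/kafka/bin/zookeeper-shell.sh node1:2181 ls /brokers/ids
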

Test whether Kafka works correctly

  1. Create a test topic

    # Run on node1: create a topic in the message queue
    /export/server/kafka_2.12-2.4.1/bin/kafka-topics.sh --create --zookeeper node1:2181 --replication-factor 1 --partitions 3 --topic test
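
    The partition and replica layout of the new topic can be inspected with the describe switch:

    # Run on node1: show partitions, leaders and replicas for the test topic
    /export/server/kafka_2.12-2.4.1/bin/kafka-topics.sh --describe --zookeeper node1:2181 --topic test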
  2. To run the test, open two terminal tabs in FinalShell

    # In one terminal, start a mock data producer
    /export/server/kafka_2.12-2.4.1/bin/kafka-console-producer.sh --broker-list node1:9092 --topic test
    # In a second, new terminal, start a mock data consumer
    /export/server/kafka_2.12-2.4.1/bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic test --from-beginning
    # Whatever you type in the first terminal is received by the second

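    When you are done, the test topic can be removed (this assumes delete.topic.enable has not been turned off; it defaults to true in this Kafka version):

    # Optional cleanup: delete the test topic
    /export/server/kafka_2.12-2.4.1/bin/kafka-topics.sh --delete --zookeeper node1:2181 --topic test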