Kafka cluster and CMAK

Cluster mode
Node planning
Host     IP                Description
yfm01    192.168.199.101   Kafka node 1
yfm02    192.168.199.102   Kafka node 2
yfm03    192.168.199.103   Kafka node 3
yfm04    192.168.199.104   ZooKeeper node 1
yfm05    192.168.199.105   ZooKeeper node 2
yfm06    192.168.199.106   ZooKeeper node 3
OS: CentOS Linux release 7.9.2009 (Core)
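If the machines should also resolve each other by hostname, a minimal /etc/hosts sketch for every node might look like the following (the hostnames are taken from the plan above; adjust to your environment):
# append to /etc/hosts on each machine (optional if you only address nodes by IP)
192.168.199.101 yfm01
192.168.199.102 yfm02
192.168.199.103 yfm03
192.168.199.104 yfm04
192.168.199.105 yfm05
192.168.199.106 yfm06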
Download and extract the archive
mkdir -p /data/kafka && cd /data/kafka
wget https://downloads.apache.org/kafka/2.7.0/kafka_2.12-2.7.0.tgz
tar -zxvf kafka_2.12-2.7.0.tgz

Modify the configuration file
Changes on all nodes
sed -i s#log.dirs=/tmp/kafka-logs#log.dirs=/data/kafka/logs#g /data/kafka/kafka_2.12-2.7.0/config/server.properties

sed -i s#zookeeper.connect=localhost:2181#zookeeper.connect=192.168.199.104:2181,192.168.199.105:2181,192.168.199.106:2181#g /data/kafka/kafka_2.12-2.7.0/config/server.properties
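
A quick optional sanity check that both edits took effect:
grep -E '^(log.dirs|zookeeper.connect)=' /data/kafka/kafka_2.12-2.7.0/config/server.properties
# expected:
# log.dirs=/data/kafka/logs
# zookeeper.connect=192.168.199.104:2181,192.168.199.105:2181,192.168.199.106:2181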

Per-node changes
broker.id must be unique on every broker (yfm01=0, yfm02=1, yfm03=2) and listeners must advertise the node's own IP. On yfm01 the default broker.id=0 can stay, so the sed below is effectively a no-op there; the equivalent commands for yfm02 and yfm03 are sketched after this block.
sed -i s#broker.id=0#broker.id=0#g /data/kafka/kafka_2.12-2.7.0/config/server.properties

echo "listeners=PLAINTEXT://192.168.199.101:9092" >> /data/kafka/kafka_2.12-2.7.0/config/server.properties

Start Kafka in the background
cd /data/kafka/kafka_2.12-2.7.0
bin/kafka-server-start.sh -daemon config/server.properties

[2021-01-25 14:24:24,631] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-01-25 14:24:24,632] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser)
[2021-01-25 14:24:24,632] INFO Kafka startTimeMs: 1611555864627 (org.apache.kafka.common.utils.AppInfoParser)
[2021-01-25 14:24:24,633] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-01-25 14:24:24,712] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker 0 (kafka.server.BrokerToControllerRequestThread)
Seeing these log lines means the broker started successfully.
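Optionally, the broker can also be checked from the shell (a sketch; ss ships with the iproute package on CentOS 7):
ss -lntp | grep 9092          # the broker should be listening on its PLAINTEXT port
tail -n 20 logs/server.log    # the same startup lines as above should appear here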

Test and verify
1. Create a topic
Create a test topic test-yfm01 on yfm01 (a broker), specifying 3 replicas and 1 partition:
[root@yfm01 kafka_2.12-2.7.0]# bin/kafka-topics.sh --create --bootstrap-server 192.168.199.101:9092 --replication-factor 3 --partitions 1 --topic test-yfm01
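Optionally, --describe shows how the single partition and its three replicas were placed across the brokers:
[root@yfm01 kafka_2.12-2.7.0]# bin/kafka-topics.sh --describe --bootstrap-server 192.168.199.101:9092 --topic test-yfm01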
2. List topics
[root@yfm02 kafka_2.12-2.7.0]# bin/kafka-topics.sh --list --bootstrap-server 192.168.199.102:9092
test-yfm01
3. Produce messages
Here we send a message to topic test-yfm01 through the broker with id=0 (yfm01):
[root@yfm01 kafka_2.12-2.7.0]# bin/kafka-console-producer.sh --broker-list  192.168.199.101:9092  --topic test-yfm01
>test by yfm
4. Consume messages
On yfm02, consume the messages via the broker on yfm03 (192.168.199.103):
bin/kafka-console-consumer.sh --bootstrap-server 192.168.199.103:9092 --topic test-yfm01 --from-beginning
On yfm03, consume the messages via the broker on yfm02 (192.168.199.102):
bin/kafka-console-consumer.sh --bootstrap-server 192.168.199.102:9092 --topic test-yfm01 --from-beginning

Both consumers receive the message, because the two commands above create two independent consumers.
If we start the consumers with the same consumer group id, they cooperate as one consumer group, and each message is consumed by only one consumer in that group:
bin/kafka-console-consumer.sh --bootstrap-server 192.168.199.103:9092 --topic test-yfm01 --from-beginning --group testgroup_ken
bin/kafka-console-consumer.sh --bootstrap-server 192.168.199.102:9092 --topic test-yfm01 --from-beginning --group testgroup_ken
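
To see how the group's partitions are assigned and how far the consumers lag behind, the consumer-groups tool can be used (a quick sketch):
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.199.101:9092 --describe --group testgroup_ken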

Stop Kafka
bin/kafka-server-stop.sh
Install CMAK

Installation requirements:

  • Supports Kafka 0.8 and above
  • Java 11+
  • ZooKeeper must be version 3.5+
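
A quick way to check these requirements before installing (a sketch; it assumes nc is installed and that the srvr four-letter word is whitelisted on the ZooKeeper nodes):
java -version                              # should report 11 or newer
echo srvr | nc 192.168.199.104 2181        # the reply should show "Zookeeper version: 3.5.x" or newer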
Add a jdk11 user and grant it privileges
[root@yfm01 cmak-3.0.0.5]# adduser jdk11
[root@yfm01 cmak-3.0.0.5]# passwd jdk11
jdk11
Grant sudo privileges
chmod -v u+w /etc/sudoers
vim /etc/sudoers

## Allow root to run any commands anywhere 
root	ALL=(ALL) 	ALL
jdk11   ALL=(ALL)   ALL
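
After saving, the write bit added above can be removed again so /etc/sudoers returns to its usual read-only mode:
chmod -v u-w /etc/sudoers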

Log in as the jdk11 user.
Download JDK 11 to /data/jdk11; the archive used here is jdk-11.0.10_linux-x64_bin.tar.gz.
tar -zxvf jdk-11.0.10_linux-x64_bin.tar.gz
vi ~/.bashrc
Add the following lines (JDK 11 no longer ships a separate jre directory, so JRE_HOME simply points at JAVA_HOME):
export JAVA_HOME=/data/jdk11/jdk-11.0.10
export JRE_HOME=$JAVA_HOME
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib

source ~/.bashrc
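
Then verify that the jdk11 user picks up the new JDK (the exact version string depends on which JDK 11 build was downloaded):
java -version
# java version "11.0.10" ...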

Install the Kafka web management tool CMAK on yfm01 (192.168.199.101).

Download the package
mkdir -p /data/cmak && cd /data/cmak
wget https://github.com/yahoo/CMAK/releases/download/3.0.0.5/cmak-3.0.0.5.zip

Install unzip
yum install -y unzip zip
unzip cmak-3.0.0.5.zip && cd cmak-3.0.0.5

Modify the configuration file
vi conf/application.conf
Set the following two configuration items to the ZooKeeper addresses of your actual Kafka cluster:
kafka-manager.zkhosts="192.168.199.104:2181,192.168.199.105:2181,192.168.199.106:2181" 
cmak.zkhosts="192.168.199.104:2181,192.168.199.105:2181,192.168.199.106:2181" 

Change owner and group
sudo chown -R jdk11:jdk11 cmak-3.0.0.5/

Start the service
nohup bin/cmak -Dconfig.file=conf/application.conf -Dhttp.port=9001 >kafka-manager.log 2>&1 &
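
A quick check that CMAK came up (sketch), before opening http://192.168.199.101:9001 in a browser:
ss -lntp | grep 9001            # CMAK should be listening on the chosen port
tail -f kafka-manager.log       # watch for startup errors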

CMAK configuration
In the web UI, just create a new cluster and point it at the ZooKeeper hosts configured above.

Stop the service
kill <pid>
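CMAK is a Play application, so it records its process id in a RUNNING_PID file in the install directory; a sketch of stopping it by that pid (assuming it was started from /data/cmak/cmak-3.0.0.5):
kill $(cat /data/cmak/cmak-3.0.0.5/RUNNING_PID)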



Origin: blog.csdn.net/yfm081616/article/details/114213015