[Kafka] ZooKeeper + Kafka cluster deployment

I. Environment configuration

IP address           Hostname       OS            Services
192.168.168.68       zk-kafka-1     CentOS 7.4    zookeeper, kafka
192.168.168.69       zk-kafka-2     CentOS 7.4    zookeeper, kafka
192.168.168.70       zk-kafka-3     CentOS 7.4    zookeeper, kafka

1. Install the JDK

tar zxf /root/jdk1.8.0_101.tar.gz -C /data/
echo 'export PATH=$PATH:/data/jdk1.8.0_101/bin' >>/etc/profile
source /etc/profile
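Both the ZooKeeper and Kafka start scripts prefer `JAVA_HOME` when resolving the JVM, so it is worth exporting it alongside `PATH`. This is a small optional addition, not part of the original steps:

```shell
# Optional (not in the original steps): export JAVA_HOME as well,
# since zkServer.sh and kafka-server-start.sh use it to locate the JVM.
export JAVA_HOME=/data/jdk1.8.0_101
export PATH=$JAVA_HOME/bin:$PATH
echo "$JAVA_HOME"   # prints /data/jdk1.8.0_101
```

Append both lines to /etc/profile if you want them to survive re-login.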

2. Configure hosts

vim /etc/hosts
192.168.168.68 zk-kafka-1
192.168.168.69 zk-kafka-2
192.168.168.70 zk-kafka-3

II. Install ZooKeeper
1. Extract the package

cd /root/zookeeper-3.4.10/conf
mv zoo_sample.cfg zoo.cfg
cd /root
cp -r zookeeper-3.4.10 /data/zookeeper

2. Edit the configuration file

# Create the data directory and the myid file
mkdir /data/zookeeper/data
mkdir /data/zookeeper/log
echo "1" >/data/zookeeper/data/myid

vim /data/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=15
syncLimit=5
maxClientCnxns=150
autopurge.snapRetainCount=50
autopurge.purgeInterval=24
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/log
clientPort=2181
# server.X=hostname:quorum-port:leader-election-port
server.1=zk-kafka-1:5000:5100
server.2=zk-kafka-2:5000:5100
server.3=zk-kafka-3:5000:5100

3. Copy the zookeeper directory to the other two nodes

cd /data
scp -r zookeeper/ 192.168.168.69:/data/
scp -r zookeeper/ 192.168.168.70:/data/

4. Modify myid

# On zk-kafka-2
echo "2" >/data/zookeeper/data/myid
# On zk-kafka-3
echo "3" >/data/zookeeper/data/myid
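Since the myid values follow the hostname suffix, the per-node edit can also be derived from the hostname so the same command runs unchanged on every node. A minimal sketch; the `myid_for` helper is hypothetical and assumes the zk-kafka-N naming scheme used above:

```shell
# Hypothetical helper: map the zk-kafka-N hostname scheme to a myid value.
myid_for() {
  case "$1" in
    zk-kafka-1) echo 1 ;;
    zk-kafka-2) echo 2 ;;
    zk-kafka-3) echo 3 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# On each node, something like:
#   myid_for "$(hostname)" >/data/zookeeper/data/myid
myid_for zk-kafka-2   # prints 2
```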

III. Install Kafka
1. Extract the package

tar zxf kafka_2.11-0.11.0.2.tgz
cp -r kafka_2.11-0.11.0.2 /data/kafka

2. Edit the configuration file

cd /data/kafka/config/
# Create the log directory
mkdir /data/kafka/logs
vim server.properties

broker.id=0
listeners=PLAINTEXT://192.168.168.68:9092
host.name=192.168.168.68
num.network.threads=3
num.io.threads=4
queued.max.requests=1000
socket.send.buffer.bytes=10240
socket.receive.buffer.bytes=10240
socket.request.max.bytes=1048576
log.dirs=/data/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=72
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.168.68:2181,192.168.168.69:2181,192.168.168.70:2181
zookeeper.connection.timeout.ms=100000
group.initial.rebalance.delay.ms=0

# With listeners=PLAINTEXT://192.168.168.68:9092 set to an IP address, clients can reach Kafka without needing hosts entries.

3. Copy the kafka directory to the other two nodes

scp -r kafka/ [email protected]:/data/
scp -r kafka/ [email protected]:/data/

4. Set each broker's id

(1) In each node's server.properties, set broker.id: 0 on 192.168.168.68, 1 on 192.168.168.69, 2 on 192.168.168.70
(2) Update host.name and listeners to the node's own IP
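The per-node edits above can be scripted with sed. The sketch below works on a scratch copy so it is safe to run anywhere; on a real node you would apply the same two substitutions to /data/kafka/config/server.properties (values mirror the config shown earlier):

```shell
# Work on a scratch copy so the sketch is safe to run anywhere.
cfg=$(mktemp)
printf 'broker.id=0\nlisteners=PLAINTEXT://192.168.168.68:9092\nhost.name=192.168.168.68\n' > "$cfg"

# On zk-kafka-2 the id becomes 1 and every .68 address becomes .69:
sed -i 's/^broker\.id=.*/broker.id=1/' "$cfg"
sed -i 's/192\.168\.168\.68/192.168.168.69/g' "$cfg"

grep '^broker.id=' "$cfg"   # prints broker.id=1
```

Repeat with id 2 and the .70 address on zk-kafka-3.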

5. Start the cluster

# Start order: zookeeper first, then kafka; shutdown order: kafka first, then zookeeper
# Start zookeeper on each node
cd /data/zookeeper/bin
./zkServer.sh start

# Check the status
./zkServer.sh status

# Start the Kafka cluster
cd /data/kafka/bin
./kafka-server-start.sh -daemon ../config/server.properties

6. Test

# Create a topic and show topic details
# On zk-kafka-1
cd /data/kafka/bin
./kafka-topics.sh --create --zookeeper zk-kafka-1:2181,zk-kafka-2:2181,zk-kafka-3:2181 --replication-factor 1 --partitions 3 --topic test

# Show topic details
./kafka-topics.sh --describe --zookeeper zk-kafka-1:2181,zk-kafka-2:2181,zk-kafka-3:2181 --topic test

# List topics
./kafka-topics.sh --list --zookeeper zk-kafka-1:2181,zk-kafka-2:2181,zk-kafka-3:2181

# Start a producer
./kafka-console-producer.sh --broker-list zk-kafka-1:9092,zk-kafka-2:9092,zk-kafka-3:9092 --topic test

# Start a consumer (test consumption on each of the three servers)
cd /data/kafka/bin
./kafka-console-consumer.sh --zookeeper zk-kafka-1:2181,zk-kafka-2:2181,zk-kafka-3:2181 --topic test --from-beginning

Messages typed into the producer appear in the consumer with the same content, which confirms that consumption works.

Reprinted from: blog.csdn.net/qq_37837432/article/details/108580989