【docker series】Building a ZooKeeper cluster + Kafka cluster across three Alibaba Cloud servers, with testing

Because the Docker containers need cross-host network communication, you may want to read this first:

【docker series】Setting up a docker overlay cross-host network on multiple Alibaba Cloud servers

Environment:
The three hosts are 11.11.11.11, 11.11.11.22, and 11.11.11.33.

Each host runs one ZooKeeper node and one Kafka node, for a total of three ZooKeeper nodes and three Kafka nodes.

The containers communicate over an overlay network.

I. Create the overlay network

# Create the overlay network used by the cluster containers (it only needs to be created on the master node; the worker nodes pick it up automatically)
 
docker network create --driver overlay --subnet=15.0.0.0/24 --gateway=15.0.0.254 --attachable cluster-overlay-elk
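Before running the containers below, the host-side mount directories need to exist. Also note that bind-mounting an empty host directory over /conf or the bin directory hides the image's default files, so those two should first be seeded by copying the files out of a temporary container. A minimal sketch of the directory setup, using a relative ./zookeeper base here as a stand-in for /usr/docker/software/zookeeper:

```shell
#!/bin/sh
# Pre-create the host directories that the zookeeper container will mount.
# ./zookeeper is a stand-in for /usr/docker/software/zookeeper on the real host.
base=./zookeeper
for d in data datalog logs conf bin; do
  mkdir -p "$base/$d"
done
# On the real host, the /conf and bin contents would then be copied out of
# the image, e.g.: docker cp <tmp-container>:/conf/. "$base/conf/"
ls "$base"
```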

II. Create and run the containers

1. Create the container

[root@master conf]# sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.250 \
--restart=always \
--privileged=true \
--hostname=hadoop_zookeeper \
--name=hadoop-zookeeper-one \
-p 12181:2181 \
-v /usr/docker/software/zookeeper/data/:/data/  \
-v /usr/docker/software/zookeeper/datalog/:/datalog/  \
-v /usr/docker/software/zookeeper/logs/:/logs/  \
-v /usr/docker/software/zookeeper/conf/:/conf/  \
-v /usr/docker/software/zookeeper/bin/:/apache-zookeeper-3.5.6-bin/bin/  \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
zookeeper:3.5.6

2. Edit the zoo.cfg configuration file

Inside the container, zoo.cfg lives in /conf, which is mounted to /usr/docker/software/zookeeper/conf/ on the host, so edit it directly on the host and restart the container:

[root@master conf]# clear
[root@master conf]# pwd
/usr/docker/software/zookeeper/conf
[root@master conf]# ll
total 8
-rw-r--r-- 1 mysql mysql 308 Jan  6 10:37 zoo.cfg
-rw-r--r-- 1 mysql mysql 146 Jan  6 11:05 zoo.cfg.dynamic.next
[root@master conf]# vim zoo.cfg

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
clientPort=2181
server.10=15.0.0.250:2888:3888
server.11=15.0.0.249:2888:3888
server.12=15.0.0.248:2888:3888
myid=10

Notes:
clientPort: the port this node listens on for client connections.
server.10, server.11, server.12: the IP addresses and ports the three cluster members use to talk to each other (2888 for follower-to-leader traffic, 3888 for leader election).
myid: this node's id; it must match the number after "server." in exactly one of the entries above.

Note:
Every machine must be configured this way, and each myid must be different. The myid file lives in the /data directory inside this image, which maps to /usr/docker/software/zookeeper/data on the host.

[root@master data]# clear
[root@master data]# pwd
/usr/docker/software/zookeeper/data
[root@master data]# ll
total 8
-rw-r--r-- 1 mysql mysql    3 Jan  6 10:38 myid
drwxr-xr-x 2 mysql mysql 4096 Jan  6 11:10 version-2
[root@master data]# vim myid 

10

Once all three machines are configured, restart the containers.
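Since only the myid value differs from node to node, the per-node files can be generated rather than typed by hand. A sketch, with the ids and addresses taken from the zoo.cfg above and ./zk-out standing in for each host's /usr/docker/software/zookeeper/data mount:

```shell
#!/bin/sh
# Generate the shared server list plus a per-node myid file for the
# three zookeeper nodes shown in zoo.cfg above.
set -e
out=./zk-out
mkdir -p "$out"

nodes="10:15.0.0.250 11:15.0.0.249 12:15.0.0.248"

# the server.N lines are identical on every node
: > "$out/servers.cfg"
for n in $nodes; do
  id=${n%%:*}; ip=${n#*:}
  echo "server.$id=$ip:2888:3888" >> "$out/servers.cfg"
  # myid must be different on each node (goes in the mounted /data dir)
  mkdir -p "$out/node-$id"
  echo "$id" > "$out/node-$id/myid"
done
cat "$out/servers.cfg"
```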

III. Check that ZooKeeper is running

[root@master data]# clear
# enter the container
[root@master data]# docker exec -it ba49a577b975 /bin/bash
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# ls
LICENSE.txt  NOTICE.txt  README.md  README_packaging.txt  bin  conf  docs  lib
# check zookeeper status
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.250 2181
stat is not executed because it is not in the whitelist.

Error message:
stat is not executed because it is not in the whitelist.

Fix:
Go into the mounted bin directory on the host

[root@master bin]# clear
[root@master bin]# pwd
/usr/docker/software/zookeeper/bin
[root@master bin]# ll
total 56
-rwxr-xr-x 1 root root  232 Oct  5 19:27 README.txt
-rwxr-xr-x 1 root root 2067 Oct  9 04:14 zkCleanup.sh
-rwxr-xr-x 1 root root 1154 Oct  9 04:14 zkCli.cmd
-rwxr-xr-x 1 root root 1621 Oct  9 04:14 zkCli.sh
-rwxr-xr-x 1 root root 1766 Oct  9 04:14 zkEnv.cmd
-rwxr-xr-x 1 root root 3690 Oct  5 19:27 zkEnv.sh
-rwxr-xr-x 1 root root 1286 Oct  5 19:27 zkServer.cmd
-rwxr-xr-x 1 root root 4573 Oct  9 04:14 zkServer-initialize.sh
-rwxr-xr-x 1 root root 9552 Jan  6 19:14 zkServer.sh
-rwxr-xr-x 1 root root  996 Oct  5 19:27 zkTxnLogToolkit.cmd
-rwxr-xr-x 1 root root 1385 Oct  5 19:27 zkTxnLogToolkit.sh
[root@master bin]# vim zkServer.sh 
......
......
    echo "ZooKeeper remote JMX Port set to $JMXPORT" >&2
    echo "ZooKeeper remote JMX authenticate set to $JMXAUTH" >&2
    echo "ZooKeeper remote JMX ssl set to $JMXSSL" >&2
    echo "ZooKeeper remote JMX log4j set to $JMXLOG4J" >&2
    ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain"
  fi
else
    echo "JMX disabled by user request" >&2
    ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi
# Adding the JVM option -Dzookeeper.4lw.commands.whitelist=* puts every four-letter command on the whitelist
ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"

if [ "x$SERVER_JVMFLAGS" != "x" ]
then
    JVMFLAGS="$SERVER_JVMFLAGS $JVMFLAGS"
fi
........
........

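Patching zkServer.sh works, but the change is lost if the mounted script is ever replaced. Since ZooKeeper 3.5.3 the same whitelist can instead be set directly in zoo.cfg, which may be simpler here given that /conf is already mounted; a config-fragment sketch:

```
# in zoo.cfg: whitelist all four-letter-word commands
# (in production a narrower list such as "stat, ruok, mntr" is safer)
4lw.commands.whitelist=*
```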
Restart the docker container, then exec back in and check the ZooKeeper status:
 

[root@master data]# docker exec -it ba49a577b975 /bin/bash
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.250 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
 /15.0.0.250:45670[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x30000052b
Mode: follower
Node count: 162
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.249 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
 /15.0.0.250:45644[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/1
Received: 460
Sent: 459
Connections: 4
Outstanding: 0
Zxid: 0x30000052b
Mode: follower
Node count: 162
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.248 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
 /15.0.0.250:52766[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x400000000
Mode: leader
Node count: 162
Proposal sizes last/min/max: -1/-1/-1
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# 


The ZooKeeper cluster is up, and a leader has been elected automatically.
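Rather than reading the full stat output for each node, the role can be pulled out of the Mode line. The sketch below runs the extraction over a canned copy of the output above so that it is self-contained; against the live cluster the same pipeline would be fed from `echo stat | nc <ip> 2181`:

```shell
#!/bin/sh
# Extract the node role from `stat` output; against a live node this would be:
#   echo stat | nc 15.0.0.250 2181 | awk -F': ' '/^Mode/ {print $2}'
sample='Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091
Connections: 1
Zxid: 0x400000000
Mode: leader
Node count: 162'

mode=$(printf '%s\n' "$sample" | awk -F': ' '/^Mode/ {print $2}')
echo "$mode"
```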

IV. Create the Kafka cluster

# start kafka
sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.247 \
--restart=always \
--privileged=true \
--hostname=hadoop_kafka \
--name=hadoop-kafka-one \
-p 19092:9092 \
-v /usr/docker/software/kafka/config/:/opt/kafka/config/ \
-v /usr/docker/software/kafka/libs/:/opt/kafka/libs/ \
-v /usr/docker/software/kafka/logs/:/kafka/ \
-v /etc/localtime:/etc/localtime \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_ZOOKEEPER_CONNECT=hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://hadoop-kafka-one:9092 \
-e KAFKA_ADVERTISED_HOST_NAME=hadoop-kafka-one \
-e KAFKA_ADVERTISED_PORT=9092 \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
wurstmeister/kafka:2.12-2.4.0

Pay special attention to this setting (keep it in mind; it comes up again below):
-e KAFKA_ZOOKEEPER_CONNECT=hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181

All three servers are created the same way (the --ip, the container name, KAFKA_BROKER_ID, and the advertised name must all differ).
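The three broker commands can be stamped out from the fields that change. In this sketch broker 1 matches the command above, while the overlay IPs for brokers 2 and 3 (15.0.0.246 and 15.0.0.245) are assumptions, since only the first host's command appears in the post; the sketch only prints the commands, it does not run them:

```shell
#!/bin/sh
# Print a `docker run` command per kafka broker. Broker 1 mirrors the real
# command above; the IPs for brokers 2 and 3 are hypothetical placeholders.
cmds=$(for spec in "1:15.0.0.247:one" "2:15.0.0.246:two" "3:15.0.0.245:three"; do
  id=${spec%%:*}; rest=${spec#*:}; ip=${rest%%:*}; name=${rest#*:}
  cat <<EOF
# run this on host $id
sudo docker run -dit --net cluster-overlay-elk --ip $ip \\
  --hostname=hadoop_kafka --name=hadoop-kafka-$name -p 19092:9092 \\
  -e KAFKA_BROKER_ID=$id \\
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://hadoop-kafka-$name:9092 \\
  ...   # remaining flags identical to the full command above
EOF
done)
printf '%s\n' "$cmds"
```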

V. Enter a Kafka container, create a test topic, and send messages


1. Create a producer

## enter the container
[root@master ~]# docker exec -it 5e15e5903ee0 /bin/bash
bash-4.4# cd /opt/kafka_2.12-2.4.0/bin/
## create a topic
## --replication-factor is how many replicas of each partition to keep (it cannot exceed the number of brokers); --partitions is the number of partitions. Both are distributed evenly across the brokers.
bash-4.4# ./kafka-topics.sh --create --zookeeper hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --replication-factor 3 --partitions 3 --topic ttest-one
Created topic ttest-one.
# describe a previously created topic (test-one; note it differs from the ttest-one just created)
bash-4.4# ./kafka-topics.sh --zookeeper hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic test-one --describe
Topic: test-one	PartitionCount: 3	ReplicationFactor: 3	Configs: 
	Topic: test-one	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: test-one	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: test-one	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
# describe the topic just created
bash-4.4# ./kafka-topics.sh --zookeeper hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic ttest-one --describe
Topic: ttest-one	PartitionCount: 3	ReplicationFactor: 3	Configs: 
	Topic: ttest-one	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: ttest-one	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: ttest-one	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
# the WRONG way to start a console producer
bash-4.4# ./kafka-console-producer.sh --broker-list hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic ttest-one
>The above is the wrong way to start a producer: --broker-list points at the zookeeper cluster rather than the kafka cluster, so sending a message reports no available nodes and times out
[2020-01-07 09:13:56,238] ERROR Error when sending message to topic ttest-one with key: null, value: 164 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Topic ttest-one not present in metadata after 60000 ms.
>^Z
[1]+  Stopped                 ./kafka-console-producer.sh --broker-list hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic ttest-one
# the correct way to start a console producer
bash-4.4# ./kafka-console-producer.sh --broker-list hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092  --topic ttest-one
>This is the correct way to start the producer; the messages are sent successfully and consumed by the consumer
>die^Hmessage two
>message three
>^Z
[2]+  Stopped                 ./kafka-console-producer.sh --broker-list hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --topic ttest-one
bash-4.4# 

Note: when creating or describing topics, you must connect to the ZooKeeper cluster. This is the same cluster configured in the KAFKA_ZOOKEEPER_CONNECT property when the Kafka containers were created in step IV, which is exactly why that setting was called out there.
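The Replicas columns in the describe output above follow Kafka's default style of placement: the leaders rotate round-robin through the brokers, and each partition's replicas run consecutively from its leader. A sketch that reproduces the exact pattern shown above (the starting broker index is chosen here to match that output; Kafka actually randomizes it):

```shell
#!/bin/sh
# Reproduce the replica placement pattern from the describe output above:
# brokers 1..3, 3 partitions, replication factor 3, consecutive assignment
# starting from broker 3 (kafka randomizes the starting broker).
brokers="1 2 3"
n=3        # number of brokers
start=2    # 0-based index of the first leader (broker 3), matching the output
out=""
for p in 0 1 2; do
  line="Partition: $p Replicas:"
  for r in 0 1 2; do
    idx=$(( (start + p + r) % n ))
    # pick the idx-th broker from the list
    b=$(echo $brokers | cut -d' ' -f$((idx + 1)))
    line="$line $b"
  done
  out="$out$line
"
done
printf '%s' "$out"
```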

2. Create a consumer

[root@slave1 ~]#  docker exec -it 7a09fcb86acf /bin/bash
bash-4.4# cd /opt/kafka_2.12-2.4.0/bin/
# start a consumer; it likewise connects to the kafka cluster, not the zookeeper cluster
bash-4.4# ./kafka-console-consumer.sh --bootstrap-server hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --topic ttest-one --from-beginning
This is the correct way to start the producer; the messages are sent successfully and consumed by the consumer
dimessage two
message three
^Z
[1]+  Stopped                 ./kafka-console-consumer.sh --bootstrap-server hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --topic ttest-one --from-beginning
bash-4.4# 

VI. At this point both the ZooKeeper cluster and the Kafka cluster are up and running



Reprinted from blog.csdn.net/weixin_42697074/article/details/103862476