[Series] Building a three-node ZooKeeper cluster + Kafka cluster with Docker on Alibaba Cloud servers, and testing it

Because the Docker containers here need to communicate across hosts, you may first want to read this post:

[Series] Docker across multiple Alibaba Cloud servers: setting up a Docker overlay network for cross-host container communication

Environment:
Three hosts: 11.11.11.11, 11.11.11.22, 11.11.11.33

Each host runs one ZooKeeper node and one Kafka node, for a total of three ZooKeeper nodes and three Kafka nodes.

The containers communicate over a Docker overlay network.

First, create the overlay network

# Create the overlay network used by the cluster services (it only needs to be created on the master node; the worker nodes will pick it up automatically)
 
docker network create --driver overlay --subnet=15.0.0.0/24 --gateway=15.0.0.254 --attachable cluster-overlay-elk
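
As a quick sanity check (not part of the original write-up), the network can be verified with standard Docker commands; note that on worker nodes an attachable overlay typically only shows up once a container on that host has joined it:

# Confirm the overlay exists and check its subnet/gateway
docker network ls | grep cluster-overlay-elk
docker network inspect cluster-overlay-elk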

Second, create and run the ZooKeeper containers

1. Create a container

[root@master conf]# sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.250 \
--restart=always \
--privileged=true \
--hostname=hadoop_zookeeper \
--name=hadoop-zookeeper-one \
-p 12181:2181 \
-v /usr/docker/software/zookeeper/data/:/data/  \
-v /usr/docker/software/zookeeper/datalog/:/datalog/  \
-v /usr/docker/software/zookeeper/logs/:/logs/  \
-v /usr/docker/software/zookeeper/conf/:/conf/  \
-v /usr/docker/software/zookeeper/bin/:/apache-zookeeper-3.5.6-bin/bin/  \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
zookeeper:3.5.6
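
The command above bind-mounts several host directories. As a hedged sketch (the original does not show this step), the directories can be created on each host first; the conf/ and bin/ mounts also need to be pre-populated (for example by copying them out of a temporary container), otherwise the empty host directories would shadow the files shipped in the image:

# Create the host directories used by the bind mounts above (assumed layout)
mkdir -p /usr/docker/software/zookeeper/{data,datalog,logs,conf,bin}
# Optionally copy the default conf/bin out of the image first (zk-tmp is a hypothetical temporary container)
docker run --rm -d --name zk-tmp zookeeper:3.5.6
docker cp zk-tmp:/conf/. /usr/docker/software/zookeeper/conf/
docker cp zk-tmp:/apache-zookeeper-3.5.6-bin/bin/. /usr/docker/software/zookeeper/bin/
docker rm -f zk-tmp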

2. Modify the configuration file zoo.cfg

Inside the container, zoo.cfg is in the /conf directory, which I bind-mounted to /usr/docker/software/zookeeper/conf/ on the host. So we can edit it directly in the host directory and then restart the container:

[root@master conf]# clear
[root@master conf]# pwd
/usr/docker/software/zookeeper/conf
[root@master conf]# ll
total 8
-rw-r--r-- 1 mysql mysql 308 Jan  6 10:37 zoo.cfg
-rw-r--r-- 1 mysql mysql 146 Jan  6 11:05 zoo.cfg.dynamic.next
[root@master conf]# vim zoo.cfg

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
clientPort=2181
server.10=15.0.0.250:2888:3888
server.11=15.0.0.249:2888:3888
server.12=15.0.0.248:2888:3888
myid=10

Description:
clientPort: the client port of this node
server.10, server.11, server.12: the IP addresses and ports used for quorum communication among the three nodes (2888 for follower-to-leader connections, 3888 for leader election)
myid: the id of this node; it must match the number X in the corresponding server.X entry

Note:
Every machine must be configured in the same way, but each must use a different myid. The myid file is located in the container's /data directory, which corresponds to /usr/docker/software/zookeeper/data on the host.

[root@master data]# clear
[root@master data]# pwd
/usr/docker/software/zookeeper/data
[root@master data]# ll
total 8
-rw-r--r-- 1 mysql mysql    3 Jan  6 10:38 myid
drwxr-xr-x 2 mysql mysql 4096 Jan  6 11:10 version-2
[root@master data]# vim myid 

10

After all three nodes are configured, restart the containers.
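
For clarity, a hedged sketch of the myid values on the three hosts, matching the server.X entries in zoo.cfg above (which host runs which IP is an assumption based on the setup so far):

# On the host whose ZooKeeper container has IP 15.0.0.250 (server.10)
echo 10 > /usr/docker/software/zookeeper/data/myid
# On the host whose ZooKeeper container has IP 15.0.0.249 (server.11)
echo 11 > /usr/docker/software/zookeeper/data/myid
# On the host whose ZooKeeper container has IP 15.0.0.248 (server.12)
echo 12 > /usr/docker/software/zookeeper/data/myid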

Third, check that ZooKeeper is running

[root@master data]# clear
# Enter the container
[root@master data]# docker exec -it ba49a577b975 /bin/bash
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# ls
LICENSE.txt  NOTICE.txt  README.md  README_packaging.txt  bin  conf  docs  lib
# Check the ZooKeeper status
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.250 2181
stat is not executed because it is not in the whitelist.

Error message:
stat is not executed because it is not in the whitelist.

Solution:
On the host, go to the mounted bin directory:

[root@master bin]# clear
[root@master bin]# pwd
/usr/docker/software/zookeeper/bin
[root@master bin]# ll
total 56
-rwxr-xr-x 1 root root  232 Oct  5 19:27 README.txt
-rwxr-xr-x 1 root root 2067 Oct  9 04:14 zkCleanup.sh
-rwxr-xr-x 1 root root 1154 Oct  9 04:14 zkCli.cmd
-rwxr-xr-x 1 root root 1621 Oct  9 04:14 zkCli.sh
-rwxr-xr-x 1 root root 1766 Oct  9 04:14 zkEnv.cmd
-rwxr-xr-x 1 root root 3690 Oct  5 19:27 zkEnv.sh
-rwxr-xr-x 1 root root 1286 Oct  5 19:27 zkServer.cmd
-rwxr-xr-x 1 root root 4573 Oct  9 04:14 zkServer-initialize.sh
-rwxr-xr-x 1 root root 9552 Jan  6 19:14 zkServer.sh
-rwxr-xr-x 1 root root  996 Oct  5 19:27 zkTxnLogToolkit.cmd
-rwxr-xr-x 1 root root 1385 Oct  5 19:27 zkTxnLogToolkit.sh
[root@master bin]# vim zkServer.sh 
......
......
    echo "ZooKeeper remote JMX Port set to $JMXPORT" >&2
    echo "ZooKeeper remote JMX authenticate set to $JMXAUTH" >&2
    echo "ZooKeeper remote JMX ssl set to $JMXSSL" >&2
    echo "ZooKeeper remote JMX log4j set to $JMXLOG4J" >&2
    ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain"
  fi
else
    echo "JMX disabled by user request" >&2
    ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi
# Add the JVM option -Dzookeeper.4lw.commands.whitelist=* so that all four-letter-word commands are whitelisted
ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"

if [ "x$SERVER_JVMFLAGS" != "x" ]
then
    JVMFLAGS="$SERVER_JVMFLAGS $JVMFLAGS"
fi
........
........

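As a side note not in the original post: in ZooKeeper 3.5.x the same whitelist can also be set in zoo.cfg instead of patching zkServer.sh (and the official zookeeper image appears to expose a ZOO_4LW_COMMANDS_WHITELIST environment variable for the same purpose). A minimal sketch:

# Alternative (ZooKeeper 3.5.x): whitelist the four-letter-word commands in zoo.cfg, then restart ZooKeeper
4lw.commands.whitelist=*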

Restart the Docker container, enter it again, and check the ZooKeeper status:
 

[root@master data]# docker exec -it ba49a577b975 /bin/bash
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.250 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
 /15.0.0.250:45670[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x30000052b
Mode: follower
Node count: 162
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.249 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
 /15.0.0.250:45644[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/1
Received: 460
Sent: 459
Connections: 4
Outstanding: 0
Zxid: 0x30000052b
Mode: follower
Node count: 162
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.248 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
 /15.0.0.250:52766[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x400000000
Mode: leader
Node count: 162
Proposal sizes last/min/max: -1/-1/-1
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# 


We can see that the ZooKeeper cluster started successfully and automatically elected a leader.
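
Another way to check each node, as a hedged sketch not shown in the original, is zkServer.sh status from inside the container:

# Inside each ZooKeeper container: report whether this node is leader or follower
# (/conf is the config directory mounted in the run command above)
./bin/zkServer.sh --config /conf status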

Fourth, create the Kafka cluster

# Start Kafka
sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.247 \
--restart=always \
--privileged=true \
--hostname=hadoop_kafka \
--name=hadoop-kafka-one \
-p 19092:9092 \
-v /usr/docker/software/kafka/config/:/opt/kafka/config/ \
-v /usr/docker/software/kafka/libs/:/opt/kafka/libs/ \
-v /usr/docker/software/kafka/logs/:/kafka/ \
-v /etc/localtime:/etc/localtime \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_ZOOKEEPER_CONNECT=hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://hadoop-kafka-one:9092 \
-e KAFKA_ADVERTISED_HOST_NAME=hadoop-kafka-one \
-e KAFKA_ADVERTISED_PORT=9092 \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
wurstmeister/kafka:2.12-2.4.0

Pay special attention to the following option (explained in the note further below):
-e KAFKA_ZOOKEEPER_CONNECT=hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181

Create a Kafka container like this on each of the three servers (the container IP address and container name must be different on each host).
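
As a hedged sketch (the IP and names for the other nodes are assumptions consistent with the naming used above), the second node would differ only in the broker id, IP, hostname and container name:

# On the second host (sketch): only the id, IP and names change
sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.246 \
--restart=always \
--privileged=true \
--hostname=hadoop_kafka_two \
--name=hadoop-kafka-two \
-p 19092:9092 \
-v /usr/docker/software/kafka/config/:/opt/kafka/config/ \
-v /usr/docker/software/kafka/libs/:/opt/kafka/libs/ \
-v /usr/docker/software/kafka/logs/:/kafka/ \
-v /etc/localtime:/etc/localtime \
-e KAFKA_BROKER_ID=2 \
-e KAFKA_ZOOKEEPER_CONNECT=hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://hadoop-kafka-two:9092 \
-e KAFKA_ADVERTISED_HOST_NAME=hadoop-kafka-two \
-e KAFKA_ADVERTISED_PORT=9092 \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
wurstmeister/kafka:2.12-2.4.0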

 

Fifth, enter a Kafka container, create a test topic, and send messages


1. Create a producer

## Enter the container
[root@master ~]# docker exec -it 5e15e5903ee0 /bin/bash
bash-4.4# cd /opt/kafka_2.12-2.4.0/bin/
## Create a topic
## --replication-factor is the number of replicas kept for the topic (it cannot exceed the number of brokers), spread evenly across the brokers; --partitions is the number of partitions, also spread evenly across the brokers
bash-4.4# ./kafka-topics.sh --create --zookeeper hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --replication-factor 3 --partitions 3 --topic ttest-one
Created topic ttest-one.
# Describe the specified topic
bash-4.4# ./kafka-topics.sh --zookeeper hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic test-one --describe
Topic: test-one	PartitionCount: 3	ReplicationFactor: 3	Configs: 
	Topic: test-one	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: test-one	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: test-one	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
# Describe the specified topic
bash-4.4# ./kafka-topics.sh --zookeeper hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic ttest-one --describe
Topic: ttest-one	PartitionCount: 3	ReplicationFactor: 3	Configs: 
	Topic: ttest-one	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: ttest-one	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: ttest-one	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
# Wrong way to create a producer
bash-4.4# ./kafka-console-producer.sh --broker-list hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic ttest-one
>The above is the wrong way to create a producer, because it connects to the ZooKeeper cluster instead of the Kafka cluster, so sending a message reports no available node and times out
[2020-01-07 09:13:56,238] ERROR Error when sending message to topic ttest-one with key: null, value: 164 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Topic ttest-one not present in metadata after 60000 ms.
>^Z
[1]+  Stopped                 ./kafka-console-producer.sh --broker-list hadoop-zookeeper-one:2181,hadoop-zookeeper-two:2181,hadoop-zookeeper-three:2181 --topic ttest-one
# Correct way to create a producer
bash-4.4# ./kafka-console-producer.sh --broker-list hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092  --topic ttest-one
>This is the correct way to create a producer; messages are sent successfully and consumed by the consumer
>die^Hsecond message
>third message
>^Z
[2]+  Stopped                 ./kafka-console-producer.sh --broker-list hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --topic ttest-one
bash-4.4# 

Note: when creating or describing a topic, you must connect to the ZooKeeper cluster, that is, the ZooKeeper cluster configured via the KAFKA_ZOOKEEPER_CONNECT property when the Kafka cluster was created in step four. This is something to pay attention to.
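
For what it's worth (not in the original post): since Kafka 2.2, kafka-topics.sh also accepts --bootstrap-server, so topics can be managed by talking to the brokers directly rather than to ZooKeeper; a minimal sketch:

# Alternative in Kafka 2.2+: manage topics via the brokers instead of ZooKeeper
./kafka-topics.sh --bootstrap-server hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --describe --topic ttest-one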

2. Create a consumer

[root@slave1 ~]#  docker exec -it 7a09fcb86acf /bin/bash
bash-4.4# cd /opt/kafka_2.12-2.4.0/bin/
# Create a consumer; it also needs to connect to the Kafka cluster, not the ZooKeeper cluster
bash-4.4# ./kafka-console-consumer.sh --bootstrap-server hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --topic ttest-one --from-beginning
This is the correct way to create a producer; messages are sent successfully and consumed by the consumer
disecond message
third message
^Z
[1]+  Stopped                 ./kafka-console-consumer.sh --bootstrap-server hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --topic ttest-one --from-beginning
bash-4.4# 
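
Optionally (not covered in the original), the consumer groups can be listed to confirm the console consumer registered with the cluster; kafka-consumer-groups.sh ships with Kafka:

# List consumer groups known to the brokers (the console consumer creates a group named like console-consumer-XXXXX)
./kafka-consumer-groups.sh --bootstrap-server hadoop-kafka-one:9092,hadoop-kafka-two:9092,hadoop-kafka-three:9092 --list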

Sixth, at this point both the ZooKeeper cluster and the Kafka cluster have been created.


Origin blog.csdn.net/weixin_42697074/article/details/103862476