Building a kafka cluster in a Docker-based environment across three physical machines

The environment is Docker-based, with three physical hosts: 192.168.0.27, 192.168.0.28, and 192.168.0.29. Each host runs one zookeeper node and one kafka node, so the cluster has three zookeeper nodes and three kafka nodes in total. The containers use host network mode.

1. Pull the images
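A minimal sketch of this step, assuming the wurstmeister images used in steps 4 and 6 below:

docker pull wurstmeister/zookeeper:latest
docker pull wurstmeister/kafka:latest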

2. Start the containers

step1. Create a zoo.cfg configuration file and use it to replace the file inside the container. Note that different images may place zoo.cfg in a different location.

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper-3.4.13/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.27=192.168.0.27:2888:3888
server.28=192.168.0.28:2888:3888
server.29=192.168.0.29:2888:3888
# note: this host's myid is 27; myid is not a zoo.cfg setting and goes
# in the separate file dataDir/myid (see step 3)

3. Each machine needs its own configuration, and note that each machine must use a different myid. If the image has no myid file under /opt/zookeeper-3.4.13/data, it must be added by creating it inside the container or by mounting it in.
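For example, on host 192.168.0.27 the myid file can be created on the host and mounted in (the /images path matches the mounts in step 4; hosts .28 and .29 would write 28 and 29):

mkdir -p /images
echo 27 > /images/myid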

 

4. Start zookeeper (execute on each machine)

docker run -p 2181:2181 -p 2888:2888 -p 3888:3888 --name zookeeper27 --network host -v /images/zoo.cfg:/opt/zookeeper-3.4.13/conf/zoo.cfg -v /images/myid:/opt/zookeeper-3.4.13/data/myid -it wurstmeister/zookeeper:latest
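The same command runs on the other two hosts, changing only the container name and the contents of the mounted zoo.cfg/myid files. Note that with --network host the -p mappings are redundant, since the container shares the host's network stack. For example, on 192.168.0.28:

docker run --name zookeeper28 --network host -v /images/zoo.cfg:/opt/zookeeper-3.4.13/conf/zoo.cfg -v /images/myid:/opt/zookeeper-3.4.13/data/myid -it wurstmeister/zookeeper:latest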

5. Check that zookeeper is running

# enter the container
docker exec -it zookeeper27 /bin/bash
# check zookeeper status (stat is a zookeeper four-letter command)
echo stat | nc 192.168.0.27 2181

You can see that the zookeeper cluster started successfully and automatically elected a leader: the stat output reports Mode: leader on one node and Mode: follower on the other two.
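To check all three nodes at once from any machine, a quick sketch using the same stat command:

for ip in 192.168.0.27 192.168.0.28 192.168.0.29; do echo stat | nc $ip 2181 | grep Mode; done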

6. Start kafka

The kafka cluster likewise uses host network mode.

zks="192.168.0.27:2181,192.168.0.28:2181,192.168.0.29:2181"; docker run -p 9092:9092 --name kafka27 --network host -d -e KAFKA_BROKER_ID=27 -e KAFKA_ZOOKEEPER_CONNECT=${zks} -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.27:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 wurstmeister/kafka:latest
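KAFKA_ADVERTISED_LISTENERS must use each host's own IP. On the other hosts only the container name, broker id, and advertised IP change; for example, on 192.168.0.28:

zks="192.168.0.27:2181,192.168.0.28:2181,192.168.0.29:2181"; docker run --name kafka28 --network host -d -e KAFKA_BROKER_ID=28 -e KAFKA_ZOOKEEPER_CONNECT=${zks} -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.28:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 wurstmeister/kafka:latest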

7. Enter the kafka27, kafka28, and kafka29 containers and create the topics test27, test28, and test29 respectively. kafka27 is used as the example below.

# create a topic
./kafka-topics.sh --create --zookeeper 192.168.0.27:2181,192.168.0.28:2181,192.168.0.29:2181 --replication-factor 3 --partitions 3 --topic test27
# --replication-factor is how many brokers hold a copy of the topic; --partitions is the number of partitions

# list the topics that have been created
./kafka-topics.sh --list --zookeeper 192.168.0.27:2181,192.168.0.28:2181,192.168.0.29:2181

# show the details of a specific topic (partition leaders, replicas, and ISR)
./kafka-topics.sh --zookeeper 192.168.0.27:2181,192.168.0.28:2181,192.168.0.29:2181 --topic test27 --describe

# start a console producer
./kafka-console-producer.sh --broker-list 192.168.0.27:9092,192.168.0.28:9092,192.168.0.29:9092 --topic test27

# start a console consumer
./kafka-console-consumer.sh --bootstrap-server 192.168.0.27:9092,192.168.0.28:9092,192.168.0.29:9092 --topic test27 --from-beginning
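As a quick end-to-end check, a message can be piped into the producer from the shell; it should appear in the running consumer (the console producer reads lines from stdin):

echo "hello from test27" | ./kafka-console-producer.sh --broker-list 192.168.0.27:9092,192.168.0.28:9092,192.168.0.29:9092 --topic test27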

 
