Building a Zookeeper & Kafka Cluster with Docker

While recently studying Kafka, I wanted to test against a cluster, but spinning up three virtual machines, or even just exposing three different ports on a single virtual machine, felt like too much trouble (ok, mostly laziness).

Preparing the Environment

A computer with Internet access and a CentOS 7 virtual machine.

Why a virtual machine? Because I work on a laptop, the IP changes every time I connect to a different network, which means constantly editing configuration files; that is too cumbersome for testing. (Docker's virtual networking can avoid this problem, but I did not know about it at the time of the experiment.)

Docker installation

If you have already installed Docker, skip this step.

Docker supports the following CentOS versions:

  1. CentOS 7 (64-bit): requires a 64-bit system with kernel version 3.10 or later.
  2. CentOS 6.5 (64-bit) or later: requires a 64-bit system with kernel version 2.6.32-431 or later.

Note that Docker only runs on the kernels shipped with these CentOS releases.

yum install

Docker requires a CentOS kernel version higher than 3.10; check the prerequisites above to verify that your CentOS version supports Docker.

# Check the kernel version
$ uname -a
# Install Docker
$ yum -y install docker
# Start the Docker daemon service
$ service docker start
# The hello-world image is not available locally, so it will be downloaded and run inside a container.
$ docker run hello-world

Script Installation

  1. Log in to CentOS as a user with sudo or root privileges.
  2. Make sure your yum packages are up to date:
$ sudo yum update
  3. Download and run the Docker installation script:
$ curl -fsSL https://get.docker.com -o get-docker.sh
# Running this script adds the docker.repo repository and installs Docker.
$ sudo sh get-docker.sh

Start Docker

$ sudo systemctl start docker
# Verify that Docker was installed successfully by running a test image in a container.
$ sudo docker run hello-world
$ docker ps

Registry Mirror Acceleration

At first I was reluctant to configure a domestic (China-based) registry mirror, but after trying it I found that download speeds shot right up. I strongly recommend configuring a domestic mirror source.
Open (or create) the /etc/docker/daemon.json file and add the following:

{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
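
If the Docker daemon is already running, restart it so that the new registry mirror takes effect, for example:

# Restart the Docker daemon so the registry mirror configuration is picked up
$ sudo systemctl restart docker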

Zookeeper Cluster Setup

Zookeeper image: zookeeper:3.4

Preparing the image

$ docker pull zookeeper:3.4

You can search for images at https://hub.docker.com/
docker pull image:TAG, where TAG indicates which version of the image to pull.

Creating a standalone Zookeeper container

We start by creating a standalone Zookeeper node in the simplest possible way, and then create the remaining nodes following this example.

$ docker run --name zookeeper -p 2181:2181 -d zookeeper:3.4

By default, the container's configuration file is /conf/zoo.cfg, and the data and log directories default to /data and /datalog; if needed, these can be mapped to directories on the host.
Parameter description

  1. --name: specifies the container name
  2. -p: maps an exposed container port to a host port
  3. -d: runs the container in the background and prints the container ID
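
As a quick sanity check, one option is to send Zookeeper's ruok four-letter command to the published port from the host (this assumes nc is installed); a healthy standalone node replies imok:

# Health-check the standalone node through the published port 2181
$ echo ruok | nc 127.0.0.1 2181
imok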

Cluster Setup

Create the other Zookeeper nodes in the same way as the standalone container above. Note that each node must be assigned its own id, and the server list in the configuration must include all nodes. The commands to create the three-node cluster are as follows:

Create a Docker network

$ docker network create zoo_kafka
$ docker network ls

Zookeeper container 1

$ docker run -d \
     --restart=always \
     -v /opt/docker/zookeeper/zoo1/data:/data \
     -v /opt/docker/zookeeper/zoo1/datalog:/datalog \
     -e ZOO_MY_ID=1 \
     -p 2181:2181 \
     -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
     --name=zoo1 \
     --net=zoo_kafka \
     --privileged \
     zookeeper:3.4

Zookeeper container 2

$ docker run -d \
     --restart=always \
     -v /opt/docker/zookeeper/zoo2/data:/data \
     -v /opt/docker/zookeeper/zoo2/datalog:/datalog \
     -e ZOO_MY_ID=2 \
     -p 2182:2181 \
     -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
     --name=zoo2 \
     --net=zoo_kafka \
     --privileged \
     zookeeper:3.4

Zookeeper container 3

$ docker run -d \
     --restart=always \
     -v /opt/docker/zookeeper/zoo3/data:/data \
     -v /opt/docker/zookeeper/zoo3/datalog:/datalog \
     -e ZOO_MY_ID=3 \
     -p 2183:2181 \
     -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
     --name=zoo3 \
     --net=zoo_kafka \
     --privileged \
     zookeeper:3.4
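
With all three containers running, you can check which node was elected leader. This is a minimal sketch assuming zkServer.sh is on the PATH inside the official zookeeper:3.4 image (if it is not, invoke it via its full path under the Zookeeper install directory):

# Print the role of each node; one should report "Mode: leader", the others "Mode: follower"
$ for n in zoo1 zoo2 zoo3; do docker exec "$n" zkServer.sh status; done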

Although this approach also achieves what we want, the steps are cumbersome and the containers are troublesome to maintain (terminal-stage laziness), so instead we use docker-compose.

Building the Zookeeper cluster with docker-compose

Create a Docker network

# Skip this if the zoo_kafka network was already created above
$ docker network create zoo_kafka
$ docker network ls

Writing the docker-compose.yml file

Usage:

  1. Install docker-compose:
# Download the binary
$ curl -L https://github.com/docker/compose/releases/download/1.25.0-rc2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Make it executable
$ chmod +x /usr/local/bin/docker-compose
  2. Create a docker-compose.yml file in any directory and copy in the content below.
  3. Run docker-compose up -d

Command reference
| Command | Description |
| - | - |
| docker-compose up | Start all containers |
| docker-compose up -d | Start all containers in the background |
| docker-compose up --no-recreate -d | Do not recreate containers that already exist |
| docker-compose up -d test2 | Start only the test2 container |
| docker-compose stop | Stop containers |
| docker-compose start | Start stopped containers |
| docker-compose down | Stop and remove containers |

docker-compose.yml download: https://github.com/JacianLiu/docker-compose/tree/master/zookeeper
docker-compose.yml details:

version: '2'
services:
  zoo1:
    image: zookeeper:3.4 # image name
    restart: always # restart automatically on failure
    hostname: zoo1
    container_name: zoo1
    privileged: true
    ports: # port mappings
      - 2181:2181
    volumes: # mounted data volumes
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1 # node ID
      ZOO_PORT: 2181 # zookeeper port
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888 # list of zookeeper nodes
    networks:
      default:
        ipv4_address: 172.23.0.11

  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    container_name: zoo2
    privileged: true
    ports:
      - 2182:2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.12

  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    container_name: zoo3
    privileged: true
    ports:
      - 2183:2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.13

networks:
  default:
    external:
      name: zoo_kafka

Verification

As the screenshot shows, there is one leader and two followers, so our Zookeeper cluster is up and running.
(screenshot: Zookeeper cluster status)
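
If you prefer a command-line check to the screenshot, one option is to send the srvr four-letter command to each published port from the host (again assuming nc is available); exactly one node should report Mode: leader and the other two Mode: follower:

# Query the role of each node through its published port
$ for p in 2181 2182 2183; do echo srvr | nc 127.0.0.1 $p | grep Mode; done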

Kafka Cluster Setup

With the groundwork above, is building a Kafka cluster still any trouble? In fact, only a few variable values differ.

Following the examples above, we will not bother with a single-node Kafka; instead we deploy three nodes directly with docker-compose. The approach is much the same as before; as mentioned, only a few attributes change. This time we do not need to create a new Docker network: we simply reuse the network created earlier for the Zookeeper cluster.

Preparing the Environment

Kafka image: wurstmeister/kafka
Kafka-Manager image: sheepkiller/kafka-manager

# If no version is specified, the latest image is pulled by default
docker pull wurstmeister/kafka
docker pull sheepkiller/kafka-manager

Writing the docker-compose.yml file

Usage:

  1. Install docker-compose:
# Download the binary
$ curl -L https://github.com/docker/compose/releases/download/1.25.0-rc2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Make it executable
$ chmod +x /usr/local/bin/docker-compose
  2. Create a docker-compose.yml file in any directory and copy in the content below.
  3. Run docker-compose up -d

Command reference
| Command | Description |
| - | - |
| docker-compose up | Start all containers |
| docker-compose up -d | Start all containers in the background |
| docker-compose up --no-recreate -d | Do not recreate containers that already exist |
| docker-compose up -d test2 | Start only the test2 container |
| docker-compose stop | Stop containers |
| docker-compose start | Start stopped containers |
| docker-compose down | Stop and remove containers |

docker-compose.yml download: https://github.com/JacianLiu/docker-compose/tree/master/zookeeper
docker-compose.yml details:

version: '2'

services:
  broker1:
    image: wurstmeister/kafka
    restart: always
    hostname: broker1
    container_name: broker1
    privileged: true
    ports:
      - "9091:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_HOST_NAME: broker1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker1:/kafka/kafka-logs-broker1
    external_links:
    - zoo1
    - zoo2
    - zoo3
    networks:
      default:
        ipv4_address: 172.23.0.14

  broker2:
    image: wurstmeister/kafka
    restart: always
    hostname: broker2
    container_name: broker2
    privileged: true
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_HOST_NAME: broker2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker2:/kafka/kafka-logs-broker2
    external_links:  # connect to containers outside this compose file
    - zoo1
    - zoo2
    - zoo3
    networks:
      default:
        ipv4_address: 172.23.0.15

  broker3:
    image: wurstmeister/kafka
    restart: always
    hostname: broker3
    container_name: broker3
    privileged: true
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_HOST_NAME: broker3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker3:/kafka/kafka-logs-broker3
    external_links:  # connect to containers outside this compose file
    - zoo1
    - zoo2
    - zoo3
    networks:
      default:
        ipv4_address: 172.23.0.16

  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links:            # connect to containers created by this compose file
      - broker1
      - broker2
      - broker3
    external_links:   # connect to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      KAFKA_BROKERS: broker1:9092,broker2:9092,broker3:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      default:
        ipv4_address: 172.23.0.10

networks:
  default:
    external:   # use the previously created network
      name: zoo_kafka
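
Once the stack is up, a minimal way to confirm that all three brokers registered with Zookeeper is to query the /kafka1 chroot (from KAFKA_ZOOKEEPER_CONNECT above) from one of the zoo containers; this assumes zkCli.sh is on the PATH inside the official zookeeper image:

$ docker-compose up -d
# Each broker ID should appear under the chroot's /brokers/ids node, e.g. [1, 2, 3]
$ docker exec zoo1 zkCli.sh -server localhost:2181 ls /kafka1/brokers/ids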

Verification

Open the kafka-manager administration page; the URL is host-ip:9000.
(screenshot: Kafka-Manager add-cluster page)
As shown, fill in the Zookeeper cluster address, scroll to the very bottom, and click Save.
Click the cluster you just added and you can see that there are three nodes in the cluster.
(screenshot: Kafka cluster node list)

Problems encountered in the process of building

  1. After mounting data volumes, the container restarts endlessly, and the log shows: chown: changing ownership of '/var/lib/mysql/....': Permission denied
    Solutions:
    • Add --privileged=true to the docker run command to give the container extended privileges
    • Temporarily disable SELinux: setenforce 0
    • Add an SELinux rule to change the security context of the directory being mounted
  2. kafka-manager reports JMX-related errors.
    Solutions:
    • Add a JMX_PORT environment variable (a port number) to each Kafka node.
    • After adding it there were still network connectivity problems, so I also exposed each JMX port and opened it in the firewall, which solved the problem.
    • KAFKA_ADVERTISED_HOST_NAME is best set to the host's IP; otherwise code or tools connecting from outside the host will not be able to reach the broker. Likewise set the advertised port to the exposed (published) port.
[error] k.m.j.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://9.11.8.48:-1/jmxrmi java.lang.IllegalArgumentException: requirement failed: No jmx port but jmx polling enabled!
  3. Listing topics inside a container throws the following error (and not only the topic command; seemingly every tool fails):
$ bin/kafka-topics.sh --list --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
# The error output:
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7203; nested exception is:
        java.net.BindException: Address already in use

Solution:
Prefix the command with unset JMX_PORT; so the command above becomes:

$ unset JMX_PORT;bin/kafka-topics.sh --list --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
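
Regarding the advertised-listener note in problem 2 above: if clients outside the Docker host need to connect, a minimal sketch (the IP and port below are placeholders for broker1) is to advertise the host's real IP together with the host port published for that broker, for example:

    environment:
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      # 192.168.1.100 stands in for the Docker host's real IP; 9091 is the host port published for broker1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.100:9091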

Appendix: commonly used Docker commands

# List all images
docker images
# List all running containers
docker ps
# List all containers (including stopped ones)
docker ps -a
# Get the IP addresses of all containers
$ docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
# Follow a container's logs
$ docker logs -f <container ID>
# Open a shell inside a container
$ docker exec -it <container ID> /bin/bash
# Create a container (-d runs it in the background)
docker run --name <container name> -e <parameters> -v <volume mount> <image>
# Restart a container
docker restart <container ID>
# Stop a container
docker stop <container ID>
# Start a stopped container
docker start <container ID>

Origin: www.cnblogs.com/Jacian/p/11421114.html