Docker-Compose in Detail: From Getting Started to Hands-On Practice

This article was first published by "MOOC" (www.imooc.com). For more practical IT content and news from the programmer community, follow "MOOC" or the MOOC official account!

Author: Mu Xian | MOOC Lecturer


Anyone who has used Docker knows that starting a container usually involves many parameters: -v to mount a directory, -p to map a port, and so on. In addition, a business system usually runs several containers in combination, with clear requirements for inter-container networking, startup order, and the like. Docker-Compose was created to address these problems.

This article goes from basics to hands-on examples in 7 parts. The first 5 parts cover the basics and the last 2 focus on practice, explaining the use of Docker-Compose in detail. The main contents are as follows:

  1. Connection between Docker-Compose and Docker
  2. Install Docker-Compose
  3. An overview of YAML syntax
  4. Summary of Docker-Compose syntax
  5. Docker-Compose common commands
  6. Practical exercise 1: Docker-Compose deploys pseudo-distributed Elasticsearch
  7. Practical exercise 2: Docker-Compose deploys Kafka (Zookeeper)

If you already have a solid foundation, you can jump directly to Part 6 and start with the hands-on sections, coming back to the basics if anything is unclear.
Reading and practicing the content of this article takes about 30 minutes.

Connection between Docker-Compose and Docker

I believe you have run into the following two scenarios when using Docker:

  1. Containers are started with many parameters, such as -p for ports and -v for mounts, and by the next start the parameters have been forgotten.
  2. Multiple containers have dependencies among them, and the startup order is hard to guarantee.

Docker-Compose solves both problems: a YAML file defines the startup parameters and orchestrates the containers, as the sketch below shows.
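For example, a container normally started with docker run -d -p 8080:80 -v /opt/html:/usr/share/nginx/html nginx can be captured once in a docker-compose.yml (a minimal sketch; the nginx image and the paths are just placeholders):

version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"                          # replaces -p 8080:80
    volumes:
      - /opt/html:/usr/share/nginx/html    # replaces -v /opt/html:...

From then on, docker-compose up -d is the only command you need to remember.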

Install Docker-Compose

Docker-Compose can be installed on all current mainstream platforms.

Reference link: Overview | Docker Documentation

You can install the version that matches your platform; I will use Ubuntu 16.04 as the demonstration environment here. You can follow the installation steps described on the official website, or simply use a single command (provided Docker is already installed):

apt install docker-compose

Check the version after installation:

docker-compose --version
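Alternatively, the official documentation installs Compose by downloading a release binary directly (a sketch; replace 1.29.2 with whichever release you want):

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose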

Typing docker-compose in full every time is tedious, so here is how to create an alias for it:

vi ~/.bashrc                 # open the shell configuration file
alias dc='docker-compose'    # add this line to the file
source ~/.bashrc             # reload the configuration

These steps add an alias for the command to the shell's environment. After executing them, docker-compose can be invoked simply as dc.

An overview of YAML syntax

Docker-Compose orchestration relies on YAML syntax. YAML is easiest to learn by analogy with JSON: objects, arrays, and scalars in JSON all have YAML equivalents. For more detail, Ruan Yifeng's "YAML Language Tutorial" is recommended.
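For instance, the JSON object {"web": {"ports": ["80:80"], "debug": true}} is written in YAML with indentation instead of braces:

web:
  ports:
    - "80:80"
  debug: true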

Summary of Docker-Compose syntax

With that groundwork laid, this chapter covers the Docker-Compose syntax. If you are familiar with Docker itself, you will pick it up quickly. Docker-Compose's main purpose is to orchestrate containers, and it does so through a YAML file, conventionally named docker-compose.yml. Below, the common Docker-Compose tags are explained in turn; for the less common ones, consult the official documentation as needed. Anything Docker supports, Docker-Compose supports as well.

version

The first field of every docker-compose.yml file is version, which indicates which Compose file format to use. The available formats are 1, 2, 2.x, and 3.x; at the time of writing the latest is 3.7. Different file format versions support different Docker versions.

Correspondence between Compose file format and Docker Engine versions (excerpt from the official documentation):

  Compose file format    Docker Engine release
  3.7                    18.06.0+
  3.6                    18.02.0+
  3.5                    17.12.0+
  3.4                    17.09.0+
  3.3                    17.06.0+
  3.2                    17.04.0+
  3.1                    1.13.1+
  3.0                    1.13.0+
  2.4                    17.12.0+
  2.3                    17.06.0+
  2.2                    1.13.0+
  2.1                    1.12.0+
  2.0                    1.10.0+
  1                      1.9.1+

In addition to the file format versions shown in the table, Compose itself has its own release schedule (see the Compose releases page), and the file format version does not necessarily increase with each release. For example, file format 3.0 was first introduced in Compose 1.10.0 and incremented gradually in subsequent releases.

The grammatical structure is as follows:

version: "2"
version: "3.7"
#可以简写为
version: "3"

image

services:
  web:
    image: hello-world

The second-level tag under services, here web, is the service name and is chosen by the user.
image specifies the image name or image ID for the service. If the image does not exist locally, Compose will try to pull it.

build

Besides specifying an image, a service can also be built from a Dockerfile; the build is executed when the service is started with up. The tag is build, which can point to the folder containing the Dockerfile. Compose uses it to build the image automatically and then starts the service container from that image.

build: /path/to/build/dir

It can also be a relative path; as long as the context is determined, the Dockerfile can be found:

build: ./dir

You can also set the context root directory and specify the Dockerfile relative to it:

build:
  context: ../
  dockerfile: path/of/Dockerfile

Note that build takes a directory. To point at a specific Dockerfile, use the dockerfile sub-tag under build, as in the example above.
If you specify both image and build, Compose builds the image and tags it with the name given by image:

build: ./dir
image: webapp:tag

Since build tasks can be defined in docker-compose.yml, there is also an args tag, analogous to the ARG instruction in a Dockerfile: it defines build-time variables that are discarded once the build succeeds. docker-compose.yml supports this form:

build:
  context: .
  args:
    buildno: 1
    password: secret

The list form below is also supported and is generally easier to read:

build:
  context: .
  args:
    - buildno=1
    - password=secret

Unlike ENV, args may be listed without a value, in which case their values are taken from the environment at build time. For example:

args:
  - buildno
  - password

Note: YAML boolean values (true, false, yes, no, on, off) must be enclosed in quotation marks (single or double), otherwise the YAML parser converts them to true/false booleans instead of keeping them as strings.
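For example, in mapping form (a minimal illustration):

args:
  buildno: '1'     # quoted: the string "1"
  verbose: 'yes'   # quoted: the string "yes"
  debug: yes       # unquoted: parsed as the YAML boolean true, not a string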

command

Use command to override the command executed by default after the container starts.

command: echo 'hello world'

It can also be written in a format similar to that in Dockerfile:

command: ['echo','hello world']

container_name

As mentioned earlier, Compose names containers in the format <project>_<service>_<index>.
You can customize the project name and service name, but if you want full control over the container's name, use this tag:

container_name: app

In this way, the name of the container is specified as app.

depends_on

The biggest advantage of Compose is typing fewer startup commands, but project containers generally must start in a certain order. If containers are started blindly from top to bottom, startup can fail because of unmet dependencies.
For example, if the application container starts before the database container, the application will exit because it cannot find the database. The depends_on tag avoids this by declaring dependencies and thus the startup order.
In the following example, the redis and db services start first, and the web service starts last:

version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

Note that by default, running docker-compose up web also starts the redis and db services, because the dependencies are defined in the configuration file.

dns

Serves the same purpose as the --dns parameter of docker run. The format is as follows:

dns: 8.8.8.8

Can also be a list:

dns:
  - 8.8.8.8
  - 9.9.9.9

The dns_search configuration is similar:

dns_search: example.com
dns_search:
  - dc1.example.com
  - dc2.example.com

tmpfs

Mounts a temporary file system inside the container, with the same effect as the --tmpfs parameter of docker run:

tmpfs: /run
tmpfs:
  - /run
  - /tmp

entrypoint

The Dockerfile has an ENTRYPOINT instruction, which specifies the entry point, i.e. the command executed when the container starts. If a Dockerfile contains multiple ENTRYPOINT instructions, only the last one takes effect. Unlike CMD, the entry point is not ignored; it is always executed, even if another command is given to docker run.
The entry point can also be defined in docker-compose.yml, overriding the one in the Dockerfile:

entrypoint: /code/entrypoint.sh

env_file

Remember the .env file mentioned earlier? It can set variables for Compose. In docker-compose.yml you can also point to a file dedicated to storing variables.
If the configuration file is specified via docker-compose -f FILE, paths in env_file are resolved relative to that file's directory.

If a variable name also appears under the environment tag, the environment value takes precedence. The format is as follows:

env_file: .env

Multiple files can also be listed:

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

Note that these variables are handled by Compose on the host and injected into the container; if the configuration file contains a build operation, they do not enter the build process. To use variables during a build, use the args tag instead.
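As an illustration, an env file is just KEY=VALUE lines, one per line (a sketch; the names are made up):

# common.env
RACK_ENV=development
DB_HOST=db
DB_PASSWORD=secret

Each variable becomes an environment variable in the started container.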

environment

This is entirely different from the env_file tag above, though somewhat similar to args. This tag sets variables in the image, so the started container also contains these variable settings; that is the biggest difference from args.
Variables from args exist only during the build, whereas environment (like the ENV instruction in a Dockerfile) persists the variables in the image and the container, similar to the effect of docker run -e.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:
 
environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET

expose

This tag works like the EXPOSE instruction in a Dockerfile: it exposes a port to linked services without publishing it to the host, serving mainly as documentation. To actually map a port to the host, you still need the ports tag.

expose:
 - "3000"
 - "8000"

external_links

While using Docker you will often have containers started individually with docker run. To let Compose connect to containers not defined in docker-compose.yml, there is a special tag, external_links. It connects containers inside the Compose project to containers outside the project configuration (provided at least one service in the project joins the same network as the external container).
The format is as follows:

external_links:
 - redis_1
 - project_db_1:mysql
 - project_db_1:postgresql

extra_hosts

Adds hostname entries, i.e. records in the container's /etc/hosts file, similar to --add-host of the Docker client:

extra_hosts:
 - "somehost:162.242.195.82"
 - "otherhost:50.31.209.229"

Check /etc/hosts inside the container after startup:

162.242.195.82  somehost
50.31.209.229   otherhost

labels

Adds metadata to the container, with the same meaning as the LABEL instruction of a Dockerfile; think of it as comments or declarations readable by tooling. The format is as follows:

labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""
labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

links

depends_on, covered above, solves startup ordering; links solves container connectivity. It has the same effect as the Docker client's --link and connects to containers of other services:

links:
 - db
 - db:database
 - redis

ports

Maps ports.
Use the format HOST:CONTAINER, or specify only the container port, in which case the host port is chosen at random:

ports:
 - "3000"
 - "8000:8000"
 - "49100:22"
 - "127.0.0.1:8001:8001"

Note: when using the HOST:CONTAINER format with ports below 60, you may get wrong results, because YAML 1.1 parses an unquoted xx:yy as a base-60 (sexagesimal) number. It is therefore recommended to always quote port mappings as strings.
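For example, the unquoted value 22:22 is read by a YAML 1.1 parser as the sexagesimal number 1342 (22 x 60 + 22), not as a port mapping:

ports:
  - 22:22      # parsed as the integer 1342 (wrong)
  - "22:22"    # parsed as the string "22:22" (correct)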

volumes

Mounts a directory or an existing data volume into the container. Use the format HOST:CONTAINER, or HOST:CONTAINER:ro for a mount that is read-only inside the container, which helps protect the host's file system.
Volume paths in Compose can be relative; use . or .. to specify a relative directory.
The format of the data volume can be in the following forms:

volumes:
  # specify only a container path; Docker creates an anonymous volume for it
  - /var/lib/mysql
 
  # mount a host directory by absolute path
  - /opt/data:/var/lib/mysql
 
  # a path relative to the Compose file, mounted into the container
  - ./cache:/tmp/cache
 
  # a path relative to the user's home directory (~/ is /home/<user>/ or /root/)
  - ~/configs:/etc/configs/:ro
 
  # an existing named volume
  - datavolume:/var/lib/mysql

network_mode

The network mode, similar to the --net parameter of the Docker client, with an additional service:[service name] format.
For example:

network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"

A service can thus share the network of another service or container. These network modes behave quite differently and are worth reading up on when you have time; the default is bridge, the bridge mode.

networks

Joins the specified networks; the format is as follows:

services:
  some-service:
    networks:
     - some-network
     - other-network

This tag also has a special sub-tag, aliases, used to set extra hostnames (aliases) for the service on a network, for example:

services:
  some-service:
    networks:
      some-network:
        aliases:
         - alias1
         - alias3
      other-network:
        aliases:
         - alias2

The same service can have different aliases on different networks. Note that any network a service references must also be declared under the top-level networks key, as the sketch below shows.
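A minimal complete sketch, with the matching top-level declaration:

services:
  some-service:
    networks:
      - some-network
networks:
  some-network:
    driver: bridge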

Docker-Compose common commands

In this chapter we continue with the Docker-Compose commands. Generally speaking, they closely mirror plain Docker commands; run docker-compose -h to see them all.

The commonly used ones follow, each with a short description:

docker-compose up -d nginx	build and start the nginx container in the background
docker-compose exec nginx bash	open a bash shell inside the nginx container
docker-compose down	stop and remove the containers and networks defined in the file
docker-compose ps	list the project's containers
docker-compose restart nginx	restart the nginx container
docker-compose run --no-deps --rm php-fpm php -v	run php -v in a one-off php-fpm container without starting linked services; remove the container when done
docker-compose build nginx	build the image
docker-compose build --no-cache nginx	build the image without using the cache
docker-compose logs nginx	view nginx's logs
docker-compose logs -f nginx	follow nginx's logs in real time
docker-compose config -q	validate docker-compose.yml: prints nothing if the file is valid, errors otherwise
docker-compose events --json nginx	output nginx's Docker events as JSON
docker-compose pause nginx	pause the nginx container
docker-compose unpause nginx	unpause the nginx container
docker-compose rm nginx	remove the container (it must be stopped first)
docker-compose stop nginx	stop the nginx container
docker-compose start nginx	start the nginx container

That completes the basics; now for the hands-on part. If you have worked through the basics, you can also read just the exercise titles, try writing the files yourself, and then compare them with mine.

Practical exercise 1: Docker-Compose deploys pseudo-distributed Elasticsearch

Elasticsearch is an open-source, highly scalable, distributed full-text search engine that stores and retrieves data in near real time. It scales well, up to hundreds of servers if conditions permit, and can handle petabytes of data; it is widely used in business systems and big data projects. Building it with Docker-Compose lets us stand up a pseudo-distributed Elasticsearch cluster quickly, which is convenient for learning and also usable in the production environments of small systems, without wasting time on environment setup.

Elasticsearch official Docker-Compose installation instructions: Install Elasticsearch with Docker | Elasticsearch Guide [8.7] | Elastic

The following docker-compose file is the one recommended on the Elasticsearch website. To make it run more smoothly, I mirrored the Elasticsearch image to my Alibaba Cloud registry in advance. Detailed explanations appear as comments in the file.

version: '2'
services:
  # three services are started: es01, es02, es03
  es01:
    # the official image, mirrored to my Alibaba Cloud registry for faster pulls
    image: registry.cn-hangzhou.aliyuncs.com/chand/elasticsearch:7.10.1
    # fix the container's name
    container_name: es01
    environment:
      # Elasticsearch-related environment variables
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    # ulimit has a soft and a hard limit: exceeding the soft limit only warns,
    # while the hard limit is enforced by the kernel; -1 means unlimited
    ulimits:
      memlock:
        soft: -1
        hard: -1
    # persist the node's data in a named volume
    volumes:
      - data01:/usr/share/elasticsearch/data
    # expose the HTTP port
    ports:
      - 9200:9200
    # join the cluster network
    networks:
      - elastic
  # es02 and es03 are configured almost identically to es01
  es02:
    image: registry.cn-hangzhou.aliyuncs.com/chand/elasticsearch:7.10.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: registry.cn-hangzhou.aliyuncs.com/chand/elasticsearch:7.10.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge

With the file in place, run docker-compose up -d to start the services and docker-compose ps to check their status.

Then open http://<your-ip>:9200/_cluster/health?pretty in a browser to see whether the cluster started successfully. My local IP is 192.168.10.107, so the address is http://192.168.10.107:9200/_cluster/health?pretty.
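The same check works from the command line:

curl http://192.168.10.107:9200/_cluster/health?pretty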

If the response reports "status" : "green" and three nodes, the cluster started successfully, and our first exercise is complete. When I first ran it, Elasticsearch failed to start because the kernel's vm.max_map_count setting was too low:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
(see also: https://www.cnblogs.com/yidiandhappy/p/7714489.html)

Solution:

sysctl -w vm.max_map_count=262144

To keep the setting across reboots, edit the configuration file with vi /etc/sysctl.conf and append the following line:

vm.max_map_count=262144

Practical exercise 2: Docker-Compose deploys Kafka (Zookeeper)

Now for the second exercise, another scenario you will meet in daily work: building Kafka. It is a little more involved because Kafka depends heavily on Zookeeper. Let's see how Docker-Compose handles it; detailed explanations appear as comments in the file.

version: '2'
services:
  zookeeper:
    # the zookeeper image, mirrored to an Alibaba Cloud registry for faster pulls
    image: registry.cn-hangzhou.aliyuncs.com/chand/zookeeper
    # expose the mapped port
    ports:
      - "2181:2181"
    # fix the zookeeper container's name
    container_name: "zookeeper"
    # always restart the container, even after the host reboots
    restart: always
  kafka:
    # the kafka image, mirrored to an Alibaba Cloud registry for faster pulls
    image: registry.cn-hangzhou.aliyuncs.com/chand/kafka:2.12-2.3.0
    # fix the kafka container's name
    container_name: "kafka"
    # expose the mapped port
    ports:
      - "9092:9092"
    # kafka-related configuration
    environment:
      - TZ=CST-8
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      # optional: create topics automatically
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
      # change to your host's actual IP address
      - KAFKA_ADVERTISED_HOST_NAME=192.168.10.107
      - KAFKA_ADVERTISED_PORT=9092
      # change to your host's actual IP address (the listener URI should include the port)
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.10.107:9092
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      # optional: set the heap size
      - KAFKA_HEAP_OPTS=-Xmx1G -Xms1G
      # optional: log retention in hours (the Kafka default is 168, i.e. 7 days)
      - KAFKA_LOG_RETENTION_HOURS=1
    volumes:
      # map kafka's data files out of the container
      - /home/kafka:/kafka
      - /var/run/docker.sock:/var/run/docker.sock
    # always restart the container, even after the host reboots
    restart: always

Run dc up -d to start the services and dc ps to check their status.
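Before writing any code, you can also sanity-check Kafka from the command line (a sketch, assuming the image ships the standard Kafka CLI scripts on the PATH):

dc exec kafka bash
kafka-topics.sh --bootstrap-server 192.168.10.107:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
kafka-topics.sh --bootstrap-server 192.168.10.107:9092 --list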

After startup completes, let's verify the Kafka deployment. Here I call the Kafka admin API from Java to create a topic and list topics, testing whether Kafka is reachable.

AdminClientTest

package cn.czyfwpla.kafka.examples.admin;

import java.util.concurrent.ExecutionException;

/**
 * @author mxq
 * Example of calling the Kafka admin API.
 */
public class AdminClientTest {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        AdminExampleApi adminExampleApi = new AdminExampleApi();
        adminExampleApi.listTopics();
        adminExampleApi.createTopic("mxq-test");
        adminExampleApi.listTopics();
    }

}

AdminExampleApi

package cn.czyfwpla.kafka.examples.admin;

import cn.czyfwpla.kafka.examples.KafkaConstant;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.KafkaFuture;

import java.util.*;
import java.util.concurrent.ExecutionException;

/**
 * @author mxq
 */
public class AdminExampleApi {


    // create a topic
    public void createTopic(String topicName) throws ExecutionException, InterruptedException {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstant.KAFKA_HOST);


        AdminClient adminClient = AdminClient.create(properties);

        // replication factor
        short replicationFactor = 1;

        // one partition, replication factor 1
        NewTopic newTopic = new NewTopic(topicName, 1, replicationFactor);

        Collection<NewTopic> collection = new ArrayList<>();
        collection.add(newTopic);
        CreateTopicsResult topics = adminClient.createTopics(collection);
        KafkaFuture<Void> all = topics.all();
        all.get();
    }

    // list topics
    public void listTopics() throws ExecutionException, InterruptedException {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstant.KAFKA_HOST);

        AdminClient adminClient = AdminClient.create(properties);

        ListTopicsResult listTopicsResult = adminClient.listTopics();

        KafkaFuture<Set<String>> names = listTopicsResult.names();

        Set<String> stringSet = names.get();

        Iterator<String> iterator = stringSet.iterator();

        // print each topic name
        while (iterator.hasNext()){
            System.out.println(iterator.next());
        }

        KafkaFuture<Collection<TopicListing>> listings = listTopicsResult.listings();
        Collection<TopicListing> topicListings = listings.get();
        // print each topic listing (name and internal flag)
        for (TopicListing topicListing : topicListings) {
            System.out.println(topicListing.toString());
        }
    }

}
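The two classes above reference a KafkaConstant helper that the article does not show; a minimal sketch, assuming the broker address from the Compose file:

package cn.czyfwpla.kafka.examples;

/**
 * Hypothetical constants holder referenced by the examples above.
 */
public class KafkaConstant {
    // broker address; matches KAFKA_ADVERTISED_LISTENERS in docker-compose.yml
    public static final String KAFKA_HOST = "192.168.10.107:9092";
}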

Result: if Kafka is reachable, the program first prints the existing topics, then prints them again with the newly created mxq-test topic included.

Summary

That wraps up the article, so let's summarize. Why write about Docker-Compose again when so many authors already have? Because most Docker-Compose articles out there mainly explain theory, and if you do not practice while studying, what you learn stays theoretical and is hard to apply at work. This article puts the theory first as a foundation and the practice after: the two exercises can greatly improve efficiency when you are learning or building a small production environment, and they let you feel directly the convenience Docker-Compose brings. Your environment will inevitably differ somewhat from mine, so if you hit problems while following along, or if anything in the article is wrong, please leave a message below to point it out.


Welcome to follow the "MOOC" account. We will keep producing original, high-quality content for the IT community and sharing practical knowledge. Let's grow together!

This article was originally published on imooc.com; please indicate the source when reprinting. Thank you for your cooperation.
