Docker Notes: Docker Compose and YAML Explained in Detail

Docker Compose

Introduction to Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure all of the services your application needs; then, with a single command, you create and start every service from that configuration.

If you are not yet familiar with YAML syntax, you can read a YAML introductory tutorial first.

Compose is used in three steps:

Define your application's environment with a Dockerfile.

Define the services that make up your application in docker-compose.yml so that they can run together in an isolated environment.

Finally, run docker-compose up to start and run the entire application.

A sample docker-compose.yml configuration looks like this (refer to the configuration reference further below):

Example (yaml configuration):
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

Usage

1. Preparation
Create a test directory:

$ mkdir composetest
$ cd composetest

Create a file named app.py in the test directory, and copy and paste the following content:

composetest/app.py file code

import time
import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            # Redis may not be ready yet; retry a few times before giving up
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

In this example, redis is the hostname of the Redis container on the application's network, and 6379 is the default Redis port.

Create another file named requirements.txt in the composetest directory with the following content:

flask
redis

2. Create a Dockerfile file
In the composetest directory, create a file named Dockerfile with the following content:

FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]

Dockerfile contents explained:

FROM python:3.7-alpine: build the image starting from the Python 3.7 Alpine image.
WORKDIR /code: set the working directory to /code.
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
Set the environment variables used by the flask command.

RUN apk add --no-cache gcc musl-dev linux-headers: install gcc and headers so that Python packages such as MarkupSafe and SQLAlchemy can compile their C speedups.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
Copy requirements.txt and install the Python dependencies.

COPY . .: copy the project's current directory . into the image's working directory . .
CMD ["flask", "run"]: set the container's default command to flask run.

3. Create docker-compose.yml
Create a file named docker-compose.yml in the test directory, and paste the following content:

docker-compose.yml configuration file

# yaml configuration
version: '3'
services:
  web:
    build: .
    ports:
     - "5000:5000"
  redis:
    image: "redis:alpine"

The Compose file defines two services: web and redis.

web: this service uses an image built from the Dockerfile in the current directory. It then maps the container's exposed port 5000 to port 5000 on the host, the default port of the Flask web server.
redis: this service uses a public Redis image pulled from Docker Hub.
4. Use the Compose command to build and run your application.
In the test directory, execute the following command to start the application:

docker-compose up

If you want to run the services in the background, add the -d flag:

docker-compose up -d

YAML configuration reference

version
Specifies which version of the Compose file format this YAML conforms to.

build
Specifies the build context path:

For example, the webapp service builds its image from the Dockerfile in the context path ./dir:

version: "3.7"
services:
  webapp:
    build: ./dir

Or, as an object with the path given under context, plus an optional dockerfile and args:

version: "3.7"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1
      labels:
        - "com.example.description=Accounting webapp"
        - "com.example.department=Finance"
        - "com.example.label-with-empty-value"
      target: prod
  • context: the context path.
  • dockerfile: the name of the Dockerfile used to build the image.
  • args: build arguments, i.e. environment variables that are only available during the build process.
  • labels: labels to set on the built image.
  • target: for multi-stage builds, the stage to build.

cap_add, cap_drop
Add or drop Linux kernel capabilities for the container.

cap_add:
  - ALL # grant all capabilities

cap_drop:
  - SYS_PTRACE # drop the ptrace capability

cgroup_parent
Specifies an optional parent cgroup for the container, meaning the container inherits that group's resource limits.

cgroup_parent: m-executor-abcd

command
overrides the default command started by the container.

command: ["bundle", "exec", "thin", "-p", "3000"]

container_name
specifies a custom container name instead of the generated default name.

container_name: my-web-container

depends_on
sets dependencies.
docker-compose up: Start services in dependency order. In the following example, db and redis are started before web is started.
docker-compose up SERVICE: automatically include the dependencies of SERVICE. In the following example, docker-compose up web will also create and start db and redis.
docker-compose stop: Stop services in the order of dependencies. In the following example, web stops before db and redis.

version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

Note: the web service does not wait for redis and db to be fully ready before starting.
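
If web must wait until db is actually ready (not just started), one option is to combine depends_on with a healthcheck. This is only a minimal sketch, assuming a Compose file format that supports the long form of depends_on (for example 2.1 or the newer Compose Specification; v3 files run in swarm mode ignore these conditions), and the pg_isready check shown here is illustrative:

version: "2.1"
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait until db's healthcheck passes
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5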

deploy
Specifies configuration related to deploying and running the service. It only takes effect in swarm mode.

version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      mode: replicated
      replicas: 6
      endpoint_mode: dnsrr
      labels: 
        description: "This redis service label"
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s

Optional parameters:

  • endpoint_mode: how the swarm service is reached by clients.

    • endpoint_mode: vip
      Docker assigns the service a virtual IP (VIP); all requests reach the machines behind the service through this virtual IP.
    • endpoint_mode: dnsrr
      DNS round robin (DNSRR); each request is resolved, in rotation, to one of the IP addresses in the service's IP list.
  • labels: set labels on the service. Labels set on the containers (i.e. configured at the same level as deploy) override the labels under deploy.

  • mode: specifies how the service is deployed.

    • replicated: replicated service; the specified number of replicas is distributed across the cluster nodes.

    • global: global service; one task of the service runs on every node in the cluster.

(Figure omitted: the yellow squares illustrate replicated mode and the gray squares illustrate global mode.)

  • replicas: when mode is replicated, this parameter sets the number of container replicas to run.

  • resources: configures limits on resource usage. In the example above, the redis service is limited to a share of the CPU and an amount of memory, which prevents it from consuming excessive resources and causing problems.

  • restart_policy: Configure how to restart the container when it exits.

    • condition: optional none, on-failure or any (default: any).
    • delay: how long to wait before attempting a restart (default: 0).
    • max_attempts: how many times to attempt a restart; once exceeded, no further attempts are made (default: retry forever).
    • window: how long to wait before deciding whether a restart has succeeded (default: 0).
  • rollback_config: configures how the service should be rolled back if an update fails (see the sketch after this list).

    • parallelism: the number of containers to roll back at a time. If set to 0, all containers are rolled back simultaneously.
    • delay: the time to wait between rolling back each group of containers (default 0s).
    • failure_action: what to do if a rollback fails. One of continue or pause (default: pause).
    • monitor: how long each task is monitored for failure after being rolled back (ns|us|ms|s|m|h) (default 0s).
    • max_failure_ratio: the failure rate that can be tolerated during a rollback (default 0).
    • order: the order of operations during a rollback. One of stop-first (stop the old task first) or start-first (start the new task first) (default: stop-first).

  • update_config: configures how the service should be updated; useful for rolling updates.

    • parallelism: the number of containers updated at a time.
    • delay: the time to wait between updating each group of containers.
    • failure_action: what to do if an update fails. One of continue, rollback or pause (default: pause).
    • monitor: how long each task is monitored for failure after being updated (ns|us|ms|s|m|h) (default 0s).
    • max_failure_ratio: the failure rate that can be tolerated during an update.
    • order: the order of operations during an update. One of stop-first (stop the old task first) or start-first (start the new task first) (default: stop-first).
      Note: order is only supported in Compose file format 3.4 and later.
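
The deploy example earlier does not show rollback_config or update_config in use. A minimal sketch of how they might be combined (the values here are illustrative, not recommendations):

version: "3.7"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 4
      update_config:
        parallelism: 2        # update two containers at a time
        delay: 10s            # wait 10 seconds between each group
        failure_action: rollback
        order: start-first    # requires Compose file format 3.4+
      rollback_config:
        parallelism: 0        # roll back all containers at once
        failure_action: pause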
  • devices
    specifies the device mapping list.

devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
  • dns
    Custom DNS servers; can be a single value or a list of values.
dns: 8.8.8.8

dns:
  - 8.8.8.8
  - 9.9.9.9
  • dns_search
    custom DNS search domain. Can be a single value or a list.
dns_search: example.com

dns_search:
  - dc1.example.com
  - dc2.example.com
  • entrypoint
    Overrides the container's default entrypoint.
entrypoint: /code/entrypoint.sh

It can also be in the following format:

entrypoint:
    - php
    - -d
    - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
    - -d
    - memory_limit=-1
    - vendor/bin/phpunit
  • env_file
    adds environment variables from the file. It can be a single value or a list of multiple values.
env_file: .env

It can also be in list format:

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env
  • environment
    Adds environment variables. You can use either an array or a dictionary. Any boolean values must be quoted so that the YAML parser does not convert them to True or False.
environment:
  RACK_ENV: development
  SHOW: 'true'
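
The same variables can also be written in list (array) form; a minimal sketch equivalent to the dictionary example above:

environment:
  - RACK_ENV=development
  - SHOW=true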
  • expose
    Exposes ports without publishing them to the host machine; they are only accessible to linked services.

Only internal ports can be specified as parameters:

expose:
 - "3000"
 - "8000"
  • extra_hosts
    Adds hostname mappings, similar to the docker client's --add-host option.
extra_hosts:
 - "somehost:162.242.195.82"
 - "otherhost:50.31.209.229"

The above creates entries mapping these IP addresses to hostnames in /etc/hosts inside the service's containers:

162.242.195.82  somehost
50.31.209.229   otherhost
  • healthcheck
    Used to check whether the service's containers are running healthily.
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"] # the command used for the check
  interval: 1m30s # interval between checks
  timeout: 10s # timeout for a single check
  retries: 3 # number of retries
  start_period: 40s # how many seconds after startup before checks begin
  • image
    Specifies the image the container runs from. Any of the following formats is valid:
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd # image ID
  • logging
    Configures logging for the service.

driver: specifies the logging driver for the service's containers. The default is json-file. There are three options:

driver: "json-file"
driver: "syslog"
driver: "none"

Only with the json-file driver can the following options be used to limit the number and size of log files.

logging:
  driver: json-file
  options:
    max-size: "200k" # a single log file is at most 200k
    max-file: "10" # keep at most 10 log files

When the file limit is reached, the old files will be deleted automatically.

Under the syslog driver, you can use syslog-address to specify the log receiving address.

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
  • network_mode
    sets the network mode.
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
  • networks

Configures the networks the container joins, referencing entries under the top-level networks key.

services:
  some-service:
    networks:
      some-network:
        aliases:
         - alias1
      other-network:
        aliases:
         - alias2
networks:
  some-network:
    # Use a custom driver
    driver: custom-driver-1
  other-network:
    # Use a custom driver which takes special options
    driver: custom-driver-2

aliases: other containers on the same network can reach this service's containers by the service name or by these aliases.

  • restart
    no: the default restart policy; the container is never restarted.
    always: the container is always restarted when it exits.
    on-failure: the container is restarted only when it exits abnormally (non-zero exit status).
    unless-stopped: the container is always restarted when it exits, except for containers that were already stopped when the Docker daemon started.
    restart: "no"
    restart: always
    restart: on-failure
    restart: unless-stopped
    Note: in swarm mode, use restart_policy instead.

  • secrets
    Stores sensitive data, such as passwords:

version: "3.1"
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/my_secret
    secrets:
      - my_secret

secrets:
  my_secret:
    file: ./my_secret.txt
  • security_opt
    Overrides the container's default labeling scheme (for example SELinux labels).
security_opt:
  - label:user:USER   # set the container's user label
  - label:role:ROLE   # set the container's role label
  - label:type:TYPE   # set the container's security policy (type) label
  - label:level:LEVEL  # set the container's security level label
  • stop_grace_period
    Specifies how long to wait, when a container does not handle SIGTERM (or whatever signal is set with stop_signal), before sending SIGKILL to stop it.
stop_grace_period: 1s # wait 1 second
stop_grace_period: 1m30s # wait 1 minute 30 seconds
The default wait time is 10 seconds.
  • stop_signal
    sets an alternative signal to stop the container. SIGTERM is used by default.

The following example uses SIGUSR1 instead of signal SIGTERM to stop the container.

stop_signal: SIGUSR1
  • sysctls
    Sets kernel parameters inside the container; either array or dictionary format can be used.
sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0
  • tmpfs
    Mounts a temporary file system inside the container. It can be a single value or a list of values.
tmpfs: /run

tmpfs:
  - /run
  - /tmp
  • ulimits

Override the default ulimit of the container.

ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000
  • volumes
    Mounts host paths or named volumes into the service's containers.
version: "3.7"
services:
  db:
    image: postgres:latest
    volumes:
      - "/localhost/postgres.sock:/var/run/postgres/postgres.sock"
      - "/localhost/data:/var/lib/postgresql/data"

Origin blog.csdn.net/BigData_Mining/article/details/108316515