Docker single-node service orchestration deployment guide (docker-compose)

Docker-Compose multi-container deployment tool

overview

The Docker-Compose project is Docker's official open-source tool for defining and running multi-container Docker applications, and handles fast orchestration of groups of Docker containers. With Compose, you configure all of your application's services in a YML file, then create and start every service from that configuration with a single command.

Docker-Compose organizes what it manages into three layers: project, service, and container. All files in the directory where Docker-Compose runs (docker-compose.yml, extends files, environment variable files, and so on) make up a project. Unless otherwise specified, the project name is the name of the current directory.

A project can contain multiple services, and each service defines the image, parameters, and dependencies of the containers it runs. A service can include multiple container instances. Docker-Compose does not itself solve load balancing, so other tools are needed for service discovery and load balancing.

The default project configuration file of Docker-Compose is docker-compose.yml; a custom file can be specified through the COMPOSE_FILE environment variable or the -f parameter. The configuration file defines multiple interdependent services and the containers each service runs.

A Dockerfile template file lets users easily define a single application container. In practice, however, we often need several containers cooperating to complete a task: a web project, for example, usually needs a back-end database container in addition to the web service container itself, and sometimes a load-balancer container as well.

Compose lets users define a set of associated application containers as a project through a single docker-compose.yml template file (YAML format). The Docker-Compose project is written in Python and manages containers through the API provided by the Docker service, so Compose can orchestrate containers on any platform that supports the Docker API.


Docker-Compose installation and uninstallation

Docker-compose installation

  • To install Docker Compose, download a build matching your platform with the following command, then add execute permission to the downloaded binary
# To install another version of Compose, replace 1.24.1 in the URL.
sudo curl -L "https://get.daocloud.io/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose
  • Check if the installation was successful
docker-compose -v
  • Create a soft link:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Docker-compose uninstall

# For a package-manager install; if Compose was installed via curl as above, delete the binary instead (rm /usr/local/bin/docker-compose)
apt-get remove docker-compose

docker-compose common commands

  • ps : list the project's containers
docker-compose ps
  • logs : view service log output
docker-compose logs [options] [SERVICE...]
# Options
--no-color	disable color. By default, docker-compose distinguishes the output of different services with different colors
-f 			follow the log output
# Example
docker-compose logs
  • up : Use the docker-compose.yaml file in the current directory to build, start, and update the container

    • When the configuration of a service changes, use the docker-compose up command to update the configuration

      At this point, Compose deletes the old container and creates a new one. The new container joins the network with a different IP address but keeps the same name. Connections to the old container are closed, and connections are re-established to the new container.

docker-compose up [options] [--scale SERVICE=NUM...] [SERVICE...]
# Options:
-f					# specify the yml deployment template file
-d 					# run service containers in the background
--force-recreate 	# force containers to be recreated; cannot be used together with --no-recreate
--no-recreate 		# do not recreate containers that already exist; cannot be used together with --force-recreate
--no-color 		# do not use color to distinguish the console output of different services
--no-deps 		# do not start linked containers
--no-build 		# do not automatically build missing service images
--build 			# build service images before starting containers
-V, --renew-anon-volumes		# recreate anonymous volumes instead of retrieving data from the previous containers
--abort-on-container-exit 		# stop all containers if any one container stops; cannot be used together with -d
-t, --timeout TIMEOUT 			# timeout when stopping containers (default 10 seconds)
--remove-orphans 				# remove containers for services not defined in the compose file
--scale SERVICE=NUM			# scale SERVICE to NUM instances, a quick single-node load-balancing setup
								# overrides the "scale" setting in the compose file, if present.
								# Note: if the compose file binds host ports through a ports entry,
								# scaling to multiple instances causes port conflicts; remove the ports entry first

# Example
docker-compose up --force-recreate -d
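As a sketch of the port-conflict note above: only one container can bind a given host port, so a service meant to be scaled on a single node should publish its port with expose (cluster network only) rather than ports. A minimal hypothetical compose file (service and image names are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    # expose publishes the port only on the compose network, so
    # several replicas can coexist without host-port conflicts
    expose:
      - "80"
```

With this file, docker-compose up -d --scale web=3 would start three web containers, none of them competing for a host port.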
  • port : Display the public port a container port is mapped to
docker-compose port [options] SERVICE PRIVATE_PORT
# Options:
--protocol=proto		port protocol, tcp (default) or udp
--index=index		if the service has multiple containers, the index of the target container (default 1)
# Example: print the public port bound to port 8761 of the eureka service
docker-compose port eureka 8761
  • build : Build or rebuild the service containers.

    Once built, the service image is given a tag name; you can run docker-compose build in the project directory at any time to rebuild the service

docker-compose build [options] [--build-arg key=val...] [SERVICE...]
# Options:
--compress 	compress the build context using gzip
--force-rm 	remove intermediate containers created during the build
--no-cache 	do not use cache when building the image
--pull 		always attempt to pull a newer version of the image
-m, --memory MEM		set a memory limit for the build container
--build-arg key=val	set build-time variables for the service
# Example:
docker-compose build
  • stop : Stop running containers of a service
docker-compose stop [options] [SERVICE...]
# Options
-t, --timeout TIMEOUT 	timeout when stopping containers (default 10 seconds)
# Example
docker-compose stop eureka
  • start : Start existing containers of the specified service
docker-compose start [SERVICE...]
# Example
docker-compose start eureka
  • restart : restart services in the project
docker-compose restart [options] [SERVICE...]
# Options:
-t, --timeout TIMEOUT	timeout for stopping containers before restarting (default 10 seconds)
# Example:
docker-compose restart
  • rm : Remove all or the specified (stopped) service containers
docker-compose rm [options] [SERVICE...]
# Options:
-f, --force	force removal, including containers that are not stopped
-v			remove the volumes attached to the containers
# Example
docker-compose rm eureka
  • kill : Forcefully stop the containers of the specified service by sending a SIGKILL signal
docker-compose kill [options] [SERVICE...]
# Options:
-s	specify the signal to send
# Example
docker-compose kill eureka
docker-compose kill -s SIGINT
  • push : push the images the services depend on
docker-compose push [options] [SERVICE...]
# Options:
--ignore-push-failures  ignore errors while pushing images
  • pull : pull the images the services depend on
docker-compose pull [options] [SERVICE...]
# Options:
--ignore-pull-failures	ignore errors while pulling images
--parallel	pull multiple images in parallel
--quiet		do not print progress information while pulling
# Example
docker-compose pull
  • scale : Set the number of containers to run for a service, in the form service=num (deprecated in newer Compose releases in favor of docker-compose up --scale)
docker-compose scale user=3 movie=3
  • run : Run a one-off command against a service
docker-compose run web bash
  • down : Stop containers and remove the containers and networks created by up; with the options below, images and volumes as well
docker-compose down [options]
# Options:
--rmi type		remove images; type must be all (every image defined in the compose file) or local (only images without a custom name)
-v, --volumes	remove named volumes declared in the compose file and anonymous volumes attached to containers
--remove-orphans	remove containers for services not defined in the compose file
# Example
docker-compose down
  • pause : pause the containers of a service
docker-compose pause [SERVICE...]
  • unpause : Resume the containers of a paused service
docker-compose unpause [SERVICE...]
  • config : validate and view the compose file configuration
docker-compose config [options]
# Options:
--resolve-image-digests 	pin image tags to digests
-q, --quiet 		validate only, without output: nothing is printed when the configuration is correct, and an error message when it is not
--services 		print service names, one per line
--volumes 		print volume names, one per line
  • create : create containers for the services
docker-compose create [options] [SERVICE...]
# Options:
--force-recreate		recreate containers even if the configuration and image have not changed; incompatible with --no-recreate
--no-recreate		do not recreate containers that already exist; incompatible with --force-recreate
--no-build		do not build images, even if missing
--build			build images before creating containers

Docker-compose template file

Introduction and sample templates

Compose allows users to define a group of associated application containers as a project through a docker-compose.yml template file (YAML format). A Compose template file is a YAML file that defines services, networks, and volumes. The default path of the Compose template file is docker-compose.yml in the current directory, and you can use .yml or .yaml as the file extension.

The Docker-Compose standard template file should contain three parts: version, services, and networks, and the most critical parts are services and networks.

sample template

version: '3'

networks:
  front-tier:
    driver: bridge
  back-tier:
    driver: bridge

services:
  web:
    image: dockercloud/hello-world
    ports:
      - 8080
    networks:
      - front-tier
      - back-tier

  redis:
    image: redis
    links:
      - web
    networks:
      - back-tier

  lb:
    image: dockercloud/haproxy
    ports:
      - 80:80
    links:
      - web
    networks:
      - front-tier
      - back-tier
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock 

Compose file formats come in three major versions: Version 1, Version 2, and Version 3. Version 2 (requiring Compose 1.6.0+ and Docker Engine 1.10.0+) supports more directives than Version 1, and Version 1 is deprecated.


start application

Create a webapp directory, copy the docker-compose.yaml file to the webapp directory, and use docker-compose to start the application.

docker-compose up -d
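As an assumed, minimal docker-compose.yaml for that webapp directory (the image and port are illustrative, not from the original article):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine   # illustrative image
    ports:
      - "8080:80"         # host port 8080 -> container port 80
    restart: unless-stopped
```

After docker-compose up -d, docker-compose ps should show the web container running and the site answering on host port 8080.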

Docker-compose.yml configuration instructions

  • version : Specify the format version of the docker-compose.yml file

  • services : a collection of multiple containers

  • build : the context path used to build the image. Both relative and absolute paths are acceptable.

    Besides running a prebuilt image, a service can be built from a Dockerfile, with the build executed when up starts the service. The build key points at the directory containing the Dockerfile; Compose uses it to build the image automatically and then starts the service container from that image.

    • context : context path.
    • dockerfile : Specify the Dockerfile name used to build the image; when set, the build context must also be specified
    • args : Add build arguments — environment variables accessible only during the build process. Optional
    • labels : Set the labels of the built image.
    • target : For multi-stage builds, specify which stage to build.
build: ./dir
---------------
build:
    context: /path/to/build/dir
    dockerfile: Dockerfile
    args:
      buildno: 1
    labels:
      - "com.example.description=Accounting webapp"
      - "com.example.department=Finance"
      - "com.example.label-with-empty-value"
    target: prod
    
# build always points at a directory; to specify a particular Dockerfile, use the dockerfile key nested under build.
# If both image and build are given, Compose builds the image and names it with the value of image.
  • container_name : Specify a custom container name instead of the generated default (<project name>_<service name>_<index>).
container_name: my-web-container
  • command : override the command executed by default after the container starts

    When a container is run from the command line, some images need extra command-line arguments appended to their command; in a compose file these go into the service's command. There is no fixed pattern for this value: follow the instructions of the container image in question to decide whether anything needs to be added, and what.

command: bundle exec thin -p 3000
----------------------------------
command: ["bundle","exec","thin","-p","3000"]
  • cap_add, cap_drop : Add or remove the host's kernel capabilities owned by the container.
  • depends_on : Set dependencies.
    • docker-compose up : Start services in dependency order. In the following example, db and redis are started before web is started.
    • docker-compose up SERVICE : Automatically include SERVICE's dependencies. In the following example, docker-compose up web will also create and start db and redis.
    • docker-compose stop : Stop services in dependency order. In the following example, web stops before db and redis.
    • Note: The web service does not wait for db and redis to be fully started (only created) before it starts.
cap_add:
  - ALL # grant all capabilities

cap_drop:
  - SYS_PTRACE # drop the ptrace capability

version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
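The note above means depends_on by itself only orders container startup. If web genuinely has to wait until db is ready, one option is combining depends_on with a healthcheck using the long-form condition syntax — available in compose file format 2.1+ and again in the newer Compose Specification, but dropped in the 3.x format. A sketch, assuming a stock postgres image:

```yaml
version: "2.1"
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait until db's healthcheck passes
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```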
  • dns : configure DNS servers; can be a single value or a list
dns: 8.8.8.8
------------
dns:
    - 8.8.8.8
    - 9.9.9.9
  • dns_search : Configure the DNS search domain, which can be a value or a list
dns_search: example.com
------------------------
dns_search:
    - dc1.example.com
    - dc2.example.com
  • env_file : load environment variables from a file; a single path or a list of paths can be given. These variables have lower priority than those specified with environment
env_file: .env
---------------
env_file:
    - ./common.env
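For illustration, a hypothetical ./common.env could contain plain KEY=VALUE lines (lines starting with # are ignored); each variable is injected into the container's environment:

```ini
RACK_ENV=development
# comment lines are skipped
DB_HOST=db
```

A variable defined both here and under environment takes the environment value, matching the priority rule above.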
  • environment : Add environment variables, as an array or a dictionary. Boolean values must be quoted so the YML parser does not convert them to True or False
environment:
    RACK_ENV: development
    SHOW: 'true'
-------------------------
environment:
    - RACK_ENV=development
    - SHOW=true
  • expose : Expose the port, but not mapped to the host, only accessed by connected services.
expose:
    - "3000"
    - "8000"
  • ports : ports exposed to the outside world, the counterpart of expose

    Note: the ports attribute is a mapping between the physical host and the service on the cluster network (not the ingress); when a container accesses an address on the ingress network, traffic cannot be forwarded according to that port.

    If ports is used, the service gets two virtual IPs: one on the overlay network, used for service-to-service communication, and one on the ingress network, used to map service ports to host ports. The IP the service then registers with a registry such as Nacos is likely to be the ingress-network virtual IP, which ultimately makes gateway routing and Feign calls that rely on the registry fail (time out).

    The solutions are:

    1. If the port does not need to be exposed on the host, use only expose to publish it on the cluster network; the service then registers the correct virtual IP with the registry

    2. If the port must be exposed on the host, set the container's preferred network segment in the application configuration file

      spring:
        application:
          name: @artifactId@
        cloud:
          inetutils:
            ignored-interfaces: eth.*
            preferred-networks: 192.168.0
      

      For details, see: https://my.oschina.net/woniuyi/blog/4984748

# Expose ports. Format: - "host port:container port", or just a container port, in which case the host maps a random port to it.
ports:
- "8763:8763"

# When mapping ports in HOST:CONTAINER form, container ports below 60 may give wrong results, because YAML parses numbers of the form xx:yy as base 60. Quoting the mapping as a string is therefore recommended.
  • extra_hosts : Add in-container hostname mappings. Similar to docker client --add-host.
extra_hosts:
 - "somehost:162.242.195.82"
 - "otherhost:50.31.209.229"
 
 
# The above creates IP-to-hostname mappings in /etc/hosts inside this service's containers:
162.242.195.82  somehost
50.31.209.229   otherhost
  • healthcheck : Used to detect whether the docker service is running healthily.
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"] # the check command
  interval: 1m30s # interval between checks
  timeout: 10s # timeout for a single check
  retries: 3 # number of retries
  start_period: 40s # grace period after startup before checks begin
  • image : Specifies the image name or image ID used by the service. If the image does not exist locally, Compose will try to pull the image.
# Any of the following formats is valid:
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd # image ID
  • links : Link to containers in other services; an alias can be set so that connections survive the IP changes caused by container restarts.

    When deploying containers from the command line, one container often needs to reach another; this is what the --link parameter is for, generally used as --link [name of the referenced container]:[alias used inside the container]. A container A referenced as --link A:B shows up in the container's /etc/hosts file as a line mapping the alias B to A's IP address. In other words, the linked container A hands its address to the alias B, and inside the container A can then be reached simply through the name B.

    Note: in v3 compose files, the links attribute is ignored by the docker stack deploy command, because the link feature was removed in v3. The replacement is the networks attribute.

links:
    # service name:alias
    - docker-compose-eureka-server:compose-eureka
    # service name only (also used as the alias)
    - db
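Since links is ignored by docker stack deploy in v3 files, the eureka example above can be expressed with networks and an alias instead. A hedged sketch (network and image names are illustrative):

```yaml
version: "3"
services:
  docker-compose-eureka-server:
    image: springcloud/eureka   # illustrative image name
    networks:
      backend:
        aliases:
          - compose-eureka      # other services reach it as compose-eureka
networks:
  backend:
    driver: bridge
```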
  • logs : log output options
--no-color          monochrome output, without per-service colors
-f, --follow        follow the log output (view logs in real time)
-t, --timestamps    show timestamps
--tail              show from the end of the log, e.g. --tail=200
  • logging : log output control

    On Linux, container logs are generally stored under /var/lib/docker/containers/CONTAINER_ID/, in files ending with json.log

logging:
  driver: "json-file"
  options:
    max-size: "5g"        # cap the log file size at 5GB
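The json-file driver also supports rotation through a max-file option; a sketch combining both (values are illustrative):

```yaml
logging:
  driver: "json-file"
  options:
    max-size: "10m"   # rotate when a file reaches 10MB
    max-file: "5"     # keep at most 5 rotated files
```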
  • network_mode : set the network mode
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
  • networks : Configure the networks the container joins, referencing entries under the top-level networks key.
    • A networks key is created parallel to the services key; under it, one or more networks can be defined to form the project's network list.
      • external: true : reference an already-created external network, enabling network sharing across stacks
    • Each service can carry a networks attribute listing one or more names from that network list, meaning the service can be reached by other services on those networks under its service name. When the service name differs from the name other containers should use to reach it, an aliases level can be added under the network name; an alias works exactly like the service's own name for the other services on that network.
      • aliases : Other containers on the same network can use either the service name or this alias to connect to the corresponding container's service.
networks:
  some-network:
    # use a custom driver
    driver: custom-driver-1
  other-network:
    # reference an external network
    external: true
services:
  some-service:
    networks:
      some-network:
        aliases:
         - alias1
      other-network:
        aliases:
         - alias2
  • restart

    • no : the default policy; the container is never restarted automatically.
    • always : the container is always restarted.
    • on-failure : restart the container only when it exits abnormally (non-zero exit status).
    • unless-stopped : always restart the container when it exits, except for containers that were already stopped when the Docker daemon started

    Note: For swarm cluster mode, please use restart_policy instead.

restart: "no"
------------------------------
restart: always
------------------------------
restart: on-failure
------------------------------
restart: unless-stopped
  • security_opt : Override the container's default labeling scheme.
security_opt:
  - label:user:USER   # set the container's user label
  - label:role:ROLE   # set the container's role label
  - label:type:TYPE   # set the container's security policy label
  - label:level:LEVEL  # set the container's security level label
  • tmpfs : Mount a temporary file system inside the container. Can be a single value or a list of multiple values.
tmpfs: /run
------------------------------
tmpfs:
  - /run
  - /tmp
  • volumes : Mount the host's data volumes or files into the container

    The [HOST:CONTAINER] format can be used directly, or [HOST:CONTAINER:ro], which makes the volume read-only for the container and effectively protects the host's file system. A volume path in Compose can be relative, using . or .. to specify a relative directory.

    volumes corresponds to the -v option on the command line with the same meaning; however, because a volume appears as a string in the yml file, a value like $PWD is not acceptable and must be replaced with a relative or absolute path.

# A volume can be written in any of the following forms
volumes:
  # just a container path; Docker automatically creates an anonymous volume for it.
  - /var/lib/mysql
  # mount a volume using an absolute host path
  - /opt/data:/var/lib/mysql
  # a path relative to the compose file, mounted into the container as a volume.
  - ./cache:/tmp/cache
  # a path relative to the user's home directory (~/ expands to /home/<user>/ or /root/).
  - ~/configs:/etc/configs/:ro
  # an existing named volume.
  - datavolume:/var/lib/mysql

# If host paths are not used, a volume_driver can be specified.
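A named volume such as datavolume above must also be declared under a top-level volumes key, parallel to services; a minimal sketch (service and image are illustrative):

```yaml
version: "3"
services:
  db:
    image: mysql:5.7
    volumes:
      - datavolume:/var/lib/mysql
volumes:
  datavolume:   # named volume managed by Docker; survives container removal
```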
  • volumes_from : Mount its data volumes from another service or container
volumes_from:
  - service_name
  - container_name
  • ulimits : Override the container's default ulimits.
ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000


Origin blog.csdn.net/footless_bird/article/details/123817170