Docker resource limits and Compose

Table of contents

1. Setting up a private registry

2. Cgroup resource allocation

3. CPU usage control

Using the stress tool to test CPU and memory

4. CPU period limit

Query the resource limit parameters of a container

(1) In the container's cgroup directory

(2) Use docker inspect <container ID or name>

5. CPU Core Control

6. Mixed use of CPU quota control parameters

7. Memory limit

8. Block IO limits

9. bps and iops limits

10. Specify resource limits when building images (docker build)

1. Main types of resource constraints

2. Ways to apply resource limits

3. Querying resource limit status

11. Compose deployment

Harbor services

12. Consul deployment

1. Consul server

2. Obtain cluster information through the HTTP API

3. Container services automatically join the consul cluster

(1) Install gliderlabs/registrator

(2) Test whether service discovery works

(3) Verify that the httpd and nginx services are registered to consul

(4) Install consul-template

(5) Prepare the nginx template file

(6) Compile and install nginx

(7) Configure nginx

(8) Configure and start consul-template

4. Add an nginx container node


1. Setting up a private registry

docker pull registry

Configure the Docker engine, adding the insecure-registries entry:

vim /etc/docker/daemon.json
{
  "insecure-registries": ["<IP>:5000"],
  "registry-mirrors": ["https://05vz3np5.mirror.aliyuncs.com"]
}

systemctl restart docker.service

docker create -it registry /bin/bash

docker ps -a

The container will show an abnormal (exited) state after

docker start <container-ID>

Mount the host's /data/registry into /tmp/registry in the container (the directories are created automatically):

docker run -d -p 5000:5000 -v /data/registry:/tmp/registry registry

Tag the image as <IP>:5000/nginx:

docker tag nginx:latest <IP>:5000/nginx

Upload the image:

docker push <IP>:5000/nginx

The push refers to repository [<IP>:5000/nginx]

Get the list of private repositories

Get the image information from the registry's image repository:

curl -XGET http://<IP>:5000/v2/_catalog

Test downloading from the private registry:

docker pull <IP>:5000/nginx
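As an extra check, the registry's v2 API can also list the tags of a pushed image. A minimal sketch, assuming the nginx image pushed above (<IP> is the registry host):

curl -XGET http://<IP>:5000/v2/nginx/tags/list
# expected output if only the latest tag was pushed:
# {"name":"nginx","tags":["latest"]}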

2. Cgroup resource allocation

Docker uses cgroups to control resources.

Docker uses cgroups to control the resource quotas used by containers, including CPU, memory, and disk, which covers the common resource quota and usage controls.

Cgroup is short for Control Groups, a mechanism provided by the Linux kernel to limit, account for, and isolate the physical resources (such as CPU, memory, and disk IO) used by groups of processes.

Cgroups originated at Google in 2007 as a way to control resource allocation: through the operating system kernel, an application's use of memory, CPU, filesystem, and other resources can be controlled.
Cgroups are Docker's resource-control mechanism; the six Linux namespaces are the means of container isolation.

From the host's point of view, each container is just a process.
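To see the namespaces behind that isolation, inspect any process's namespace links. A quick illustration (output varies by kernel; newer kernels also show a cgroup namespace in addition to the classic six):

ls -l /proc/self/ns
# the ipc, mnt, net, pid, user and uts entries are the six namespaces
# container isolation relies on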

3. CPU usage control

CPU period: the CFS scheduler allocates CPU time in fixed periods; the period parameter is typically 100000 (the unit is microseconds, so the default period is 0.1 s).

If you need to allocate 20% of the CPU to a container, set the quota to 20000, which allocates 0.02 s of CPU time to the container in each 0.1 s period.

A CPU core can only be occupied by one process at any given moment.
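A minimal sketch of that 20% example, using the --cpu-period/--cpu-quota flags detailed in section 4 (the centos:stress image is built in the next subsection):

docker run -itd --cpu-period 100000 --cpu-quota 20000 centos:stress
# quota/period = 20000/100000, i.e. at most 20% of one CPU per period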

Using the stress tool to test CPU and memory

Use a Dockerfile to create a CentOS-based image with the stress tool.

mkdir /opt/stress

vim /opt/stress/Dockerfile

FROM centos:7
RUN yum install -y wget
RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
RUN yum install -y stress

cd /opt/stress/

docker build -t centos:stress .

Use the following command to create a container. The --cpu-shares parameter does not guarantee 1 vCPU or any particular amount of CPU; it is only an elastic weight.

docker run -itd --cpu-shares 100 centos:stress

By default, each Docker container's CPU share is 1024. A single container's share value is meaningless on its own; the effect of CPU weighting only shows when multiple containers run at the same time.
For example, if the CPU shares of containers A and B are 1000 and 500, then when the kernel allocates CPU time slices, container A has twice the chance of getting a time slice compared to container B.
However, the outcome depends on the state of the host and the other containers at that moment; there is no guarantee that container A will actually get CPU time slices. If container A's processes stay idle, container B can get more CPU time than A. In the extreme case, for example when only one container runs on the host, even a share of 50 lets it monopolize all CPU resources.

A host running a single container with a single application (a container is itself a virtualization technology) behaves like a host running one application.

Cgroups only take effect when the allocated resources are scarce, that is, when resource usage needs to be limited. Therefore it is impossible to determine how much CPU a container gets from its share value alone; the result depends on the CPU allocations of the other containers running at the same time and on how the processes in the container behave.
The CPU share can be used to set the priority (weight) of a container's CPU usage. For example, start two containers and watch their CPU usage percentages.

docker run -tid --name cpu512 --cpu-shares 512 centos:stress stress -c 10
# the container spawns 10 stress worker processes

docker exec -it f4953c0d7e76 bash
# enter the container and use top to check CPU usage

Start another container for comparison:
docker run -tid --name cpu1024 --cpu-shares 1024 centos:stress stress -c 10

docker exec -it 5590c57d27b0 bash
# inside the container, top shows the two containers' %CPU at roughly a 1:2 ratio

docker stats    # view resource usage
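To confirm the weights landed in the cgroup, they can be read back on the host. A sketch assuming cgroup v1 (the layout used throughout this article); substitute each container's full ID:

cat /sys/fs/cgroup/cpu/docker/<container-ID>/cpu.shares
# prints 512 for cpu512 and 1024 for cpu1024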

4. CPU period limit

Docker provides two parameters, --cpu-period and --cpu-quota, to control the CPU CFS period and quota allocated to a container.
--cpu-period specifies how often the container's CPU usage is re-allocated (the scheduling period).

cat /sys/fs/cgroup/cpu/docker/<container-ID>/cpu.cfs_quota_us

How the host provides resources to, and controls, the applications inside a Docker container:
CPU → vCPU → appears as a process in the Docker host environment → Docker presents it as a container → the vCPU controls the container as a process → the application in the container is backed by service processes → the CPUs in the host kernel are managed by cgroups (which allocate the resources) → so the cgroup mechanism in the Linux kernel can control and manage the applications inside Docker containers.

--cpu-quota specifies how much time the container may run within one period.
Unlike --cpu-shares, this is an absolute value; the container's CPU usage will never exceed it.

cpu-period and cpu-quota are measured in microseconds (µs). The minimum cpu-period is 1000 µs, the maximum is 1 second (10^6 µs), and the default is 0.1 s (100000 µs).
cpu-quota defaults to -1, meaning no control. The two parameters are generally used together. (The same -1 convention appears elsewhere, e.g. in Redis, where ttl teacher returning -1 means the key never expires, and lrange teacher 0 -1 means "to the end".)

If a container's process needs to use a single CPU for 0.2 s out of every 1 s, set cpu-period to 1000000 (i.e. 1 second) and cpu-quota to 200000 (0.2 seconds).
In the multi-core case, if you want to allow the container process to fully occupy two CPUs, set cpu-period to 100000 (i.e. 0.1 seconds) and cpu-quota to 200000 (0.2 seconds).
 

Option    Description
--cpus=    Specify how much of the available CPU resources the container can use. For instance, if the host has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs. This is equivalent to setting --cpu-period="100000" and --cpu-quota="150000". Available in Docker 1.13 and higher.
--cpu-period=    Specify the CPU CFS scheduler period, used together with --cpu-quota. Defaults to 100000, expressed in microseconds. Most users do not change this from the default. If you use Docker 1.13 or higher, use --cpus instead.
--cpu-quota=    Impose a CPU CFS quota on the container: the number of microseconds per --cpu-period that the container is allowed CPU access, in other words cpu-quota/cpu-period. If you use Docker 1.13 or higher, use --cpus instead.
--cpuset-cpus    Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs, if the host has multiple CPUs. The first CPU is numbered 0. Valid values might be 0-3 (use the first, second, third, and fourth CPU) or 1,3 (use the second and fourth CPU).
--cpu-shares    Set this flag to a value greater or less than the default of 1024 to increase or decrease the container's weight, giving it access to a greater or lesser proportion of the host's CPU cycles. This only takes effect when CPU cycles are constrained; when plenty of CPU cycles are available, all containers use as much CPU as they need, so this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles, but does not guarantee or reserve any specific CPU access.
docker run -tid --cpu-period 100000 --cpu-quota 200000 centos:stress

docker exec -it 98d2aaa50019 bash

Query the resource limit parameters of a container

(1) In the container's cgroup directory on the host

cat /sys/fs/cgroup/cpu/docker/<container-ID>/cpu.cfs_period_us

cat /sys/fs/cgroup/cpu/docker/<container-ID>/cpu.cfs_quota_us

(2) Use docker inspect <container ID or name>

"CpuPeriod": 
 "CpuQuota": 

5. CPU Core Control

       For servers with multi-core CPUs, Docker can also control which CPU cores the container runs on by using the --cpuset-cpus parameter.
       This is especially useful for servers with multiple CPUs, allowing for performance-optimized configurations of containers that require high-performance computing.

docker run -tid --name cpu1 --cpuset-cpus 0-1 centos:stress

Executing the above command requires the host to have at least two cores; the created container can then only use cores 0 and 1. The resulting cgroup CPU-core configuration:

cat /sys/fs/cgroup/cpuset/docker/<container-ID>/cpuset.cpus

With the following instruction you can see the binding between the processes in the container and the CPU cores, achieving CPU-core pinning:

docker exec <container-ID> taskset -c -p 1
# queries the CPU affinity of PID 1, the container's first process; output:
pid 1's current affinity list: 0,1

Resource limits can be specified directly with parameters when the container is created, or,
after the container has been created, by modifying the host files under /sys/fs/cgroup/* that correspond to the container's resource controls.
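A minimal sketch of that second approach, re-pinning a running container's CPU set from the host (cgroup v1 path; the full container ID is a placeholder):

echo "0-1" > /sys/fs/cgroup/cpuset/docker/<container-ID>/cpuset.cpus
# takes effect immediately, without restarting the container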

6. Mixed use of CPU quota control parameters

With the cpuset-cpus parameter, container A can be pinned to CPU core 0 and container B to CPU core 1.
If these are the only two containers on the host using those cores, each occupies all of its core's resources, and cpu-shares has no visible effect.

The cpuset-cpus and cpuset-mems parameters are only meaningful on servers with multiple CPU cores or multiple memory nodes, and must match the actual physical configuration, otherwise resource control cannot achieve its purpose.

When the system has multiple CPU cores, the container's cores should be set with cpuset-cpus for convenient testing.
(For this test the host system is configured with a 4-core CPU.)

docker run -tid --name cpu3 --cpuset-cpus 1 --cpu-shares 512 centos:stress stress -c 1

docker exec -it 84598dfadd34 bash

exit

top    # press 1 to see each core's usage

docker run -tid --name cpu4 --cpuset-cpus 3 --cpu-shares 1024 centos:stress stress -c 1

docker exec -it <container-ID> bash

The centos:stress image above has the stress tool installed for testing CPU and memory load. Running stress -c 1 in each of the two containers puts a load on the system, spawning one process per container. Each process repeatedly computes the square root of rand()'s output until resources are exhausted.
Observing the CPU usage on the host, core 3's usage is close to 100%, and the CPU usage of the two sets of stress processes shows the expected 2:1 ratio.

7. Memory limit

Similar to the operating system, the memory available to a container consists of two parts: physical memory and swap. 
Docker controls the amount of memory a container uses with the following two parameters:

-m or --memory: set the memory usage limit, e.g. 100M, 1024M.
--memory-swap: set the limit for memory plus swap.
Execute the following command to allow the container to use at most 200M of physical memory and 100M of swap (memory + swap = 300M).
These are hard limits on physical memory and swap.

docker run -it -m 200M --memory-swap=300M centos:stress

--vm 1: start one memory-allocating worker thread.
--vm-bytes 280M: allocate 280M of memory per thread.
By default, a container can use all the free memory on the host.
As with the CPU cgroup configuration, Docker automatically creates a cgroup configuration directory for the container under /sys/fs/cgroup/memory/docker/<full container ID>.

If a worker thread allocates more memory than the limit (here 310M requested against the 300M memory+swap cap), stress reports an error and the container exits:

docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 310M
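The resulting limits can be read back from the host's cgroup files; a sketch, again assuming cgroup v1 and the full container ID:

cat /sys/fs/cgroup/memory/docker/<container-ID>/memory.limit_in_bytes
# 209715200, i.e. 200M
cat /sys/fs/cgroup/memory/docker/<container-ID>/memory.memsw.limit_in_bytes
# 314572800, i.e. 300M of memory + swap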

8. Block IO limits

By default, all containers read and write the disk with equal priority. A container's block-IO priority can be changed with the --blkio-weight parameter.
--blkio-weight is similar to --cpu-shares: it sets a relative weight, default 500.
In the example below, container A gets twice the disk read/write bandwidth of container B.

docker run -it --name container_A --blkio-weight 600 centos:stress

cat /sys/fs/cgroup/blkio/blkio.weight

docker run -it --name container_B --blkio-weight 300 centos:stress

cat /sys/fs/cgroup/blkio/blkio.weight

9. bps and iops limits

bps (bytes per second) is the amount of data read or written per second.
iops (IO operations per second) is the number of IO operations per second.
A container's bps and iops can be controlled with the following parameters:

--device-read-bps: limit the read bps of a device.
--device-write-bps: limit the write bps of a device.
--device-read-iops: limit the read iops of a device.
--device-write-iops: limit the write iops of a device.

Limit the container's write rate to /dev/sda to 5 MB/s:

docker run -it --device-write-bps /dev/sda:5MB centos:stress

dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
# press Ctrl+C to interrupt and view the result

Test the disk write speed inside the container with dd. Because the container's filesystem is on the host's /dev/sda, writing a file inside the container is equivalent to writing to the host's /dev/sda. The oflag=direct option makes dd use direct IO, so that --device-write-bps can take effect.

The result shows the write rate is limited to about 5 MB/s. As a comparison, without the limit the result looks like this:
 

docker run -it centos:stress

dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
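An analogous sketch limits operations per second instead of bandwidth (the 100-iops figure is arbitrary, chosen only for illustration):

docker run -it --device-write-iops /dev/sda:100 centos:stress
dd if=/dev/zero of=test bs=4k count=10000 oflag=direct
# with 4 KB direct writes, the rate is now capped by the 100 writes-per-second
# limit rather than by bandwidth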

10. Specify resource limits when building images (docker build)

--build-arg=[]    Set build-time variables
--cpu-shares    Set the CPU usage weight
--cpu-period    Limit the CPU CFS period
--cpu-quota    Limit the CPU CFS quota
--cpuset-cpus    Specify the CPU ids to use
--cpuset-mems    Specify the memory node ids to use
--disable-content-trust    Skip image verification; enabled by default
-f    Specify the path of the Dockerfile to use
--force-rm    Always remove intermediate containers during the build
--isolation    Container isolation technology to use
--label=[]    Set metadata for the image
-m    Set the memory limit
--memory-swap    Set the maximum of memory + swap; "-1" means unlimited swap
--no-cache    Do not use the cache when building the image
--pull    Always attempt to pull a newer version of the image
--quiet, -q    Quiet mode; only print the image ID on success
--rm    Remove intermediate containers after a successful build
--shm-size    Set the size of /dev/shm; the default is 64M
--ulimit    Ulimit configuration
--squash    Squash all the operations in the Dockerfile into a single layer
--tag, -t    Name and tag of the image, usually in name:tag format; multiple tags can be set for an image in a single build
--network    Default: default. Set the networking mode for the RUN instructions during build
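As an illustration, the stress image from earlier could be rebuilt with build-time resource limits applied to the intermediate containers; a sketch with arbitrary values:

docker build --cpu-shares 512 --cpu-period 100000 --cpu-quota 50000 -m 512m -t centos:stress /opt/stress/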

1. Main types of resource constraints

1) CPU: weight (shares), quota, cpuset
2) Disk: bps and iops limits; you can specify which disk or disk partition to use
3) Memory: -m and swap (physical memory and swap partition)
Most of these impose upper bounds.


2. Ways to apply resource limits

1) build: when building an image, resource limits for that image can be specified
2) run: when running an image as a container, the container's resource limits can be specified

3) after a container has started: in the host directory corresponding to the container, modify the resource limit files and the changes are applied
/sys/fs/cgroup/* (cpu, blk, mem)/docker/<container-ID>/ → just modify the parameters in the corresponding resource limit file

3. Querying resource limit status

1) docker inspect <image ID or container ID>
2) directly view the host files corresponding to the container's resource limits
3) docker stats

cgroup resource control is one of Docker's underlying mechanisms; the other is the six namespaces.
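A quick sketch of option 3): docker stats reports live usage against the configured limits, and --no-stream prints a single snapshot instead of refreshing:

docker stats --no-stream
# columns include CPU %, MEM USAGE / LIMIT, NET I/O and BLOCK I/O per container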

11. Compose deployment

Common fields in a Docker Compose file:

Field    Description
build (context, dockerfile)    Build-context path and the name of the Dockerfile used to build the image
image    Specify the image
command    Command to execute, overriding the default command
container_name    Specify the container name; container names must be unique, so with a custom name the service cannot be scaled
deploy    Deployment and run configuration for the service; only used in Swarm mode
environment    Add environment variables
networks    Join networks
ports    Expose container ports, same as -p (but the port cannot be lower than 60)
volumes    Mount a host path or named volume
restart    Restart policy: no (default), always, on-failure, unless-stopped
hostname    Container host name

Common Docker Compose commands:

Command    Description
build    Rebuild services
ps    List containers
up    Create and start containers
exec    Run a command inside a container
scale    Set the number of containers to start for a service
top    Display container processes
logs    View container output
down    Remove containers, networks, volumes, and images
stop/start/restart    Stop/start/restart services

Environment setup: install Docker on all hosts (basic Docker prerequisites):

yum install docker-ce -y

Download Compose:

curl -L https://github.com/docker/compose/releases/download/1.21.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

cp -p docker-compose /usr/local/bin/

chmod +x /usr/local/bin/docker-compose

mkdir /root/compose_nginx

tree ./
./
├── docker-compose.yml        the Compose template script
├── nginx
│   ├── Dockerfile             container build script
│   └── nginx-1.15.9.tar.gz    source tarball copied in
└── wwwroot
    └── index.html            the site content

vim /root/compose_nginx/docker-compose.yml

version: '3'
services:
  nginx:
    hostname: nginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 1216:80
      - 1217:443
    networks:
      - cluster
    volumes:
      - ./wwwroot:/usr/local/nginx/html
networks:
  cluster:
docker-compose -f docker-compose.yml up -d
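Once the stack is up, a quick sanity check against the mapped port (service name and port come from the docker-compose.yml above):

docker-compose -f docker-compose.yml ps
# the nginx service should show State "Up" with 0.0.0.0:1216->80/tcp
curl -I http://127.0.0.1:1216
# expect HTTP/1.1 200 OK, served from ./wwwroot/index.html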

Basic/routine Docker operations covered so far:
1) image and container management commands
2) dockerfile
3) docker networking
4) docker private registries
   registry
   harbor

docker-compose → a means of resource orchestration and management (cf. docker swarm)

Harbor services

Harbor is deployed as multiple Docker containers, so it can be deployed on any Linux distribution that supports Docker. (registry is its core component.)

Compared with a plain registry, Harbor's advantages are: it supports many more features, a graphical management interface, multi-user permissions, a role management mechanism, and security mechanisms.

The server host needs Python, Docker, and Docker Compose installed. (The web environment is backed by the Python language, hence the Python requirement.)

1. Download the Harbor installer

wget http://harbor.orientsoft.cn/harbor-1.2.2/harbor-offline-installer-v1.2.2.tgz

tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/

2. Configure the Harbor parameter file

vim /usr/local/harbor/harbor.cfg

Line 5: hostname = <host-IP>

There are two kinds of parameters in the harbor.cfg configuration file: required parameters and optional parameters.

(1) Required parameters

These parameters must be set in the configuration file harbor.cfg.
If a user updates them and runs the install.sh script to reinstall Harbor, the parameters take effect.

The specific parameters:

① hostname: used to access the user interface and the registry service. It should be the target machine's IP address or fully qualified domain name (FQDN).

② ui_url_protocol: (http or https, default http) the protocol used to access the UI and the token/notification services. If notary is enabled, this parameter must be https. (During authentication, credentials are checked against the MySQL database, and then a token is granted.)

③ max_job_workers: the number of image replication job threads.

④ db_password: the password of the MySQL root user used for db_auth.

⑤ customize_crt: can be set to on or off, default on. When on, the prepare script creates a private key and root certificate for generating/verifying registry tokens.
Set this property to off when the key and root certificate are provided by an external source.

⑥ ssl_cert: the path of the SSL certificate; only applied when the protocol is set to https.

⑦ ssl_cert_key: the path of the SSL key; only applied when the protocol is set to https.

⑧ secretkey_path: the path of the key used to encrypt or decrypt the remote registry's password in replication policies.

(2) Optional parameters

These parameters are optional for updates; users can leave them at their defaults and update them on the web UI after Harbor has started.

If they are set in harbor.cfg, they only take effect the first time Harbor starts; subsequent updates to these parameters in harbor.cfg are ignored.

Note: if you choose to set these parameters through the UI, do so immediately after starting Harbor. Specifically, the required auth_mode must be set before any new users are registered or created in Harbor. When there are users in the system (besides the default admin user), auth_mode can no longer be modified. The parameters:

① email: Harbor needs this to send "password reset" emails to users, and it is only required if that feature is needed.
Note that SSL is not enabled for the connection by default. If the SMTP server requires SSL but does not support STARTTLS, enable SSL by setting email_ssl = true.

② harbor_admin_password: the administrator's initial password, which only takes effect the first time Harbor starts. Afterwards this setting is ignored and the administrator's password should be set in the UI.
Note that the default username/password is admin/Harbor12345.

③ auth_mode: the authentication type used. By default it is db_auth, i.e. credentials are stored in the database. For LDAP authentication, set it to ldap_auth.

④ self_registration: enable/disable the user self-registration feature. When disabled, new users can only be created by the admin user; only the administrator can create new users in Harbor.
Note: when auth_mode is set to ldap_auth, self-registration is always disabled and this flag is ignored.

⑤ token_expiration: the expiration time (in minutes) of tokens created by the token service, default 30 minutes.

⑥ project_creation_restriction: a flag controlling which users have permission to create projects. By default everyone can create a project.
If its value is set to "adminonly", only the admin can create projects.

⑦ verify_remote_cert: on or off, default on. This flag determines whether SSL/TLS certificates are verified when Harbor communicates with a remote registry instance.
Setting it to off bypasses SSL/TLS verification, which is often used when the remote instance has a self-signed or untrusted certificate.

Additionally, by default Harbor stores images on the local filesystem. In a production environment, consider a storage backend other than the local filesystem, such as S3, OpenStack Swift, or Ceph; this requires updating the common/templates/registry/config.yml file.


3. Start Harbor

sh /usr/local/harbor/install.sh

Open a browser and enter the host IP to access Harbor.

4. View the images Harbor started

View the images:

docker images

View the containers:

docker ps -a

cd /usr/local/harbor/

docker-compose ps

At this point the Docker CLI can log in and push images locally via 127.0.0.1. By default, the registry server listens on port 80.

Log in:

docker login -u admin -p Harbor12345 http://127.0.0.1

Pull an image for testing:

docker pull cirros

Tag the image:

docker tag cirros 127.0.0.1/myproject-kgc/cirros:v1

Push the image to Harbor:

docker push 127.0.0.1/myproject-kgc/cirros:v1

All of the operations above were performed locally on the Harbor server. If another client pushes images to Harbor, it fails with an error, because the Docker registry interaction defaults to HTTPS while this private registry is served over plain HTTP:

docker login -u admin -p Harbor12345 http://<host-IP>

This reports an error.

Fix:

vim /usr/lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry <host-IP> --containerd=/run/containerd/containerd.sock

systemctl daemon-reload

systemctl restart docker

docker login -u admin -p Harbor12345 http://<host-IP>
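Alternatively, the registry can be trusted the same way the private registry was configured at the beginning of this article, via daemon.json instead of editing the systemd unit; a sketch (<host-IP> is the Harbor host):

vim /etc/docker/daemon.json
{
  "insecure-registries": ["<host-IP>"]
}

systemctl restart docker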


12. Consul deployment

consul: a service registry / registration center.

Server "nginx": Nginx, Consul, consul-template
Server "docker": docker-ce, registrator (the automatic discovery and registration component)

template: templating (for updates)
registrator: automatic discovery
Whenever the backend builds a container, it registers with registrator, which has consul perform the update; consul then triggers the consul-template template to do a hot update.
Core mechanism: consul provides automatic discovery and automatic updates, serving the containers (add, remove, and lifecycle support functions).
 

1. Consul server

mkdir /root/consul

cp consul_0.9.2_linux_amd64.zip /root/consul

cd /root/consul

unzip consul_0.9.2_linux_amd64.zip

mv consul /usr/bin

consul agent \
-server \
-bootstrap \
-ui \
-data-dir=/var/lib/consul-data \
-bind=<IP> \
-client=0.0.0.0 \
-node=consul-server01 &> /var/log/consul.log &

-server: run in server mode
-bootstrap: bootstrap mode; this node can elect itself cluster leader
-ui: enable the built-in web UI

 

 

 

2. Obtain cluster information through the HTTP API

curl 127.0.0.1:8500/v1/status/peers             # view the cluster's server members
curl 127.0.0.1:8500/v1/status/leader            # the cluster's Raft leader
curl 127.0.0.1:8500/v1/catalog/services         # all registered services
curl 127.0.0.1:8500/v1/catalog/service/nginx    # view the nginx service's details
curl 127.0.0.1:8500/v1/catalog/nodes            # detailed information on the cluster's nodes

 

 

3. Container services automatically join the consul cluster

(1) Install gliderlabs/registrator

Registrator checks containers' running state and registers them automatically; it can also deregister docker containers' services from the service configuration center.
It currently supports Consul, etcd, and SkyDNS2.
Run:

docker run -d \
--name=registrator \
--net=host \
-v /var/run/docker.sock:/tmp/docker.sock \
--restart=always \
gliderlabs/registrator:latest \
-ip=<IP> \
consul://<IP>:8500

 

 

(2) Test whether service discovery works

docker run -itd -p 83:80 --name test-01 -h test01 nginx
docker run -itd -p 84:80 --name test-02 -h test02 nginx
docker run -itd -p 88:80 --name test-03 -h test03 httpd
docker run -itd -p 89:80 --name test-04 -h test04 httpd

 

 

(3) Verify that the httpd and nginx services are registered to consul

In a browser, open http://<IP>:8500, click "NODES", then click "consul-server01"; five services appear.

View the services on the consul server:

curl 127.0.0.1:8500/v1/catalog/services

(4) Install consul-template

Consul-Template is a daemon that queries Consul cluster information in real time, updates any number of specified templates on the filesystem, and generates configuration files. After an update completes, it can optionally run a shell command, e.g. to reload Nginx. Consul-Template can query Consul's service catalog, keys, key-values, and so on.
This powerful abstraction and query-language templating makes Consul-Template especially suitable for creating configuration files dynamically:
creating Apache/Nginx proxy balancers, HAProxy backends, and so on.

(5) Prepare the nginx template file

Operate on the consul server:

vim /root/consul/nginx.ctmpl

upstream http_backend {
  {{range service "nginx"}}
   server {{.Address}}:{{.Port}};
  {{end}}
}

server {
  listen 85;
  server_name localhost <IP>;
  access_log /var/log/nginx/kgc.cn-access.log;
  index index.html index.php;
  location / {
    proxy_set_header HOST $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Client-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://http_backend;
  }
}

The variables in the {{range service "nginx"}} block resolve to the backends' addresses and ports, which change dynamically. server_name carries the reverse proxy's IP (the front-facing nginx); X-Real-IP passes the client's real IP and X-Forwarded-For the forwarding chain.
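Before wiring the template into nginx reloads, it can be rendered once to standard output to verify the syntax; a sketch using consul-template's -dry and -once flags (the output path is a throwaway placeholder):

consul-template -consul-addr 127.0.0.1:8500 \
-template "/root/consul/nginx.ctmpl:/tmp/render-test.conf" \
-dry -once
# -dry prints the rendered result instead of writing the file;
# -once exits after a single pass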

 

 

(6) Compile and install nginx

yum install gcc pcre-devel zlib-devel -y

tar zxvf nginx-1.12.0.tar.gz -C /opt

cd /opt/nginx-1.12.0/

./configure --prefix=/usr/local/nginx

make && make install

ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

(7) Configure nginx

vim /usr/local/nginx/conf/nginx.conf

http {
     include       mime.types;          # already present
     include  vhost/*.conf;             # added: virtual-host directory; consul-generated config files land here
     default_type  application/octet-stream;

Create the virtual-host directory:

mkdir /usr/local/nginx/conf/vhost

Create the log directory:

mkdir /var/log/nginx

Start nginx:

/usr/local/nginx/sbin/nginx

  

(8) Configure and start consul-template

cp consul-template_0.19.3_linux_amd64.zip /root/

unzip consul-template_0.19.3_linux_amd64.zip

mv consul-template /usr/bin/

Associate the template with the sub-configuration file in the nginx virtual-host directory:

consul-template -consul-addr 192.168.226.130:8500 \
-template "/root/consul/nginx.ctmpl:/usr/local/nginx/conf/vhost/kgc.conf:/usr/local/nginx/sbin/nginx -s reload" \
-log-level=info

 

In another terminal, view the generated configuration file:

cat /usr/local/nginx/conf/vhost/kgc.conf
upstream http_backend {

   server <IP>:83;

   server <IP>:84;

}

server {
  listen 85;
  server_name localhost <IP>;
  access_log /var/log/nginx/kgc.cn-access.log;
  index index.html index.php;
  location / {
    proxy_set_header HOST $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Client-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://http_backend;
  }
}

 

 

4. Add an nginx container node

Add another nginx container node to test service discovery and configuration updates.
Register it on the registrator server:

docker run -itd -p 85:80 --name test-05 -h test05 nginx

Check the three nginx containers' logs; requests are round-robined normally across the container nodes:

docker logs -f test-01
docker logs -f test-02
docker logs -f test-05
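To drive that round-robin from the client side, the front-facing nginx (listening on port 85 per the template) can be curled in a loop; a sketch, with <nginx-IP> standing for the proxy server's address:

for i in $(seq 1 6); do curl -s -o /dev/null http://<nginx-IP>:85/; done
# each request should appear in turn in the docker logs output
# of test-01, test-02 and test-05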


Origin: blog.csdn.net/Drw_Dcm/article/details/127410327