Docker: advanced container operations


One: Docker resource limit

1. System stress test tool

stress is a stress-testing tool for Linux. It generates configurable high load so that users can observe how their systems and monitoring behave when heavily loaded.
Installation:

[root@xingdian ~]# yum install stress -y

Examples of test scenarios

Test CPU load
[root@xingdian ~]# stress -c 4
Spawn 4 CPU workers that loop on sqrt() to raise the system CPU load
Memory test
[root@xingdian ~]# stress -i 4 --vm 10 --vm-bytes 1G --vm-hang 100 --timeout 100s
Spawn 4 I/O workers and 10 memory-allocation workers, each allocating 1G and holding it without freeing; run for 100s
Disk I/O test
# stress -d 1 --hdd-bytes 3G
Spawn 1 write worker writing 3G file blocks
Disk test (no cleanup)
# stress -i 1 -d 10 --hdd-bytes 3G --hdd-noclean
Spawn 1 I/O worker and 10 write workers, each writing 3G file blocks without deleting them; this will gradually fill the disk.

Description of the main stress parameters (short options take a single dash, long options take a double dash; both forms can be passed to stress):

--help: show help information
--version: show version information
-t secs:
--timeout secs: run for the specified number of seconds
-c forks:
--cpu forks: spawn the given number of CPU workers looping on sqrt()
-m forks:
--vm forks: spawn the given number of memory workers looping on malloc()/free()
-i forks:
--io forks: spawn the given number of disk I/O workers looping on sync()
--vm-bytes bytes: the number of bytes to allocate per vm worker (default 256M)
--vm-hang secs: how long memory allocated with malloc() is held before being released with free()
-d:
--hdd: spawn write workers that write fixed-size files into the current directory via mkstemp()
--hdd-bytes bytes: the number of bytes each hdd worker writes (default 1G)
--hdd-noclean: do not unlink the files of random ASCII data that were written; they remain and consume disk space.
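
As a combined sketch of these options (values are illustrative): 2 CPU workers, 1 I/O worker, and 1 vm worker allocating 128M, all stopped after 30 seconds:

[root@xingdian ~]# stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 30s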

2. Limit CPU share
· CPU resources
Processes on the host share the CPU through time slicing. The CPU is quantified by frequency, the number of operations it can perform per second. Limiting a container's CPU resources does not change the CPU's operating frequency; it changes the share of CPU time each container may use. Ideally, the CPU should always be busy computing (and the processes' demand should not exceed the CPU's capacity).
How docker limits the CPU share:
docker lets you assign each container a number representing that container's CPU share. By default every container's share is 1024. The share is relative and carries no absolute meaning by itself: when several containers run on the host, each container's proportion of CPU time is its share divided by the total. Docker dynamically adjusts each container's proportion of CPU time based on the containers and processes running on the host.
Example:
Suppose two containers on the host use the CPU constantly (ignoring other host processes for simplicity) and both have a share of 1024; then each gets 50% of the CPU. If one container's share is changed to 512, the two get 67% and 33% respectively; if the container with share 1024 is then removed, the remaining container gets 100%.
Advantages: the CPU is kept busy as much as possible, CPU resources are fully used, and all containers are treated with relative fairness.
Disadvantages: you cannot pin a container's CPU usage to a specific value.
The CPU share is set with -c or --cpu-shares, which takes an integer value.
My machine has a 4-core CPU, so I run a stress container and start 4 worker processes to generate computing load:

# docker pull progrium/stress
# yum install htop -y
# docker run --rm -it progrium/stress --cpu 4 
stress: info: [1] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 12000us
stress: dbug: [1] --> hogcpu worker 4 [7] forked
stress: dbug: [1] using backoff sleep of 9000us
stress: dbug: [1] --> hogcpu worker 3 [8] forked
stress: dbug: [1] using backoff sleep of 6000us
stress: dbug: [1] --> hogcpu worker 2 [9] forked
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogcpu worker 1 [10] forked

Use htop in another terminal to view resource usage:
(Screenshot: htop showing all four CPU cores near 100%)

As the figure shows, all four CPU cores reach about 100%. The four stress processes do not each reach exactly 100% because other processes are also running on the system.
For comparison, another container with a share of 512 is started:

# docker run --rm -it -c 512 progrium/stress --cpu 4 
stress: info: [1] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 12000us
stress: dbug: [1] --> hogcpu worker 4 [6] forked
stress: dbug: [1] using backoff sleep of 9000us
stress: dbug: [1] --> hogcpu worker 3 [7] forked
stress: dbug: [1] using backoff sleep of 6000us
stress: dbug: [1] --> hogcpu worker 2 [8] forked
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogcpu worker 1 [9] forked

Because a container's CPU share defaults to 1024, the CPU usage of these two containers should settle at roughly 2:1. Below is the monitoring after the second container starts:
(Screenshot: htop after the second container starts)

The two containers each started four stress processes. The stress processes of the first container use about 54% CPU each, and those of the second about 25% each, a ratio of roughly 2:1, consistent with expectations.
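The share of a running container can also be changed without restarting it, via docker update (a sketch; the container ID is a placeholder, and this assumes a Docker version that includes docker update):

[root@xingdian ~]# docker update --cpu-shares 512 <container-id>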
3. Limit the number of CPU cores that can be used by the container
The -c / --cpu-shares parameter can only limit the proportion (priority) of CPU the container gets; it cannot cap the number of CPU cores the container uses. Since version 1.13, docker provides the --cpus parameter to limit how many CPU cores a container may use. It lets you set container CPU usage more precisely, and is easier to understand and therefore more commonly used.
--cpus takes a floating-point number representing the maximum number of cores the container may use. It is accurate to two decimal places, so a container can be limited to as little as 0.01 of a CPU core.
Limit the container to only use 1.5 core CPUs:

# docker run --rm -it --cpus 1.5 progrium/stress --cpu 3 
stress: info: [1] dispatching hogs: 3 cpu, 0 io, 0 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 9000us
stress: dbug: [1] --> hogcpu worker 3 [7] forked
stress: dbug: [1] using backoff sleep of 6000us
stress: dbug: [1] --> hogcpu worker 2 [8] forked
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogcpu worker 1 [9] forked

The container starts three stress workers to load the CPU. Without a limit, this container would use about 300% CPU (the computing power of three cores). The actual monitoring is as follows:
(Screenshot: htop showing the stress processes totaling about 150% CPU)

Each stress process uses about 50% CPU, 150% in total, which matches the 1.5-core setting.
If the set --cpus value is greater than the number of CPU cores of the host, docker will report an error directly:

# docker run --rm -it --cpus 8 progrium/stress --cpu 3 
docker: Error response from daemon: Range of CPUs is from 0.01 to 4.00, as there are only 4 CPUs available.
See 'docker run --help'.

If multiple containers set --cpus and the sum exceeds the host's CPU core count, the containers do not fail or exit; they compete for CPU. How much CPU each actually gets depends on the host's load and on each container's CPU share. In other words, --cpus only guarantees the maximum amount of CPU the container can use when CPU resources are sufficient; docker cannot guarantee the container gets that much CPU under all circumstances (that would simply be impossible).
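A minimal sketch of this competition on a 4-core host (run in two terminals; values are illustrative): the limits below sum to 6 cores, so both containers run but split the CPU roughly 2:1 according to their shares:

[root@xingdian ~]# docker run --rm -it --cpus 3 progrium/stress --cpu 3
[root@xingdian ~]# docker run --rm -it -c 512 --cpus 3 progrium/stress --cpu 3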
4. Memory resources
By default, docker does not limit a container's memory, so a container can use all the memory the host provides. This is dangerous: if a container runs memory-hungry malicious software, or its code leaks memory, it can exhaust the host's memory and make services unavailable. Docker lowers the OOM (out-of-memory) kill priority of the docker daemon so the daemon is less likely to be killed when memory runs short. In addition, you can set a memory limit for each container; once a container exceeds it, the container is killed instead of exhausting the host's memory.
Limiting memory protects the host, but it can also hurt the service inside the container. A limit set too low gets the service OOM-killed while it is still doing useful work; a limit set too high wastes memory from the scheduler's point of view. Reasonable practices therefore include:
stress-test the application's memory, understand how much memory normal business load needs, and only then move to production; always set a memory limit on the container while keeping the host well provisioned, and scale out or migrate containers when monitoring shows resources running low; if possible (when memory is sufficient), avoid swap, because swapping complicates memory accounting and is very unfriendly to the scheduler.
Docker parameters for limiting container memory usage.
The docker run options related to memory limits are listed below (each value is generally a positive memory size followed by a unit: b, k, m, or g, corresponding to bytes, KB, MB, and GB):

-m, --memory: the maximum memory the container can use; the minimum value is 4m
--memory-swap: the amount of swap the container can use
--memory-swappiness: by default the host may swap out anonymous pages used by the container; this accepts a value between 0 and 100 for the proportion allowed to be swapped out
--memory-reservation: a soft limit on memory use; when docker detects that host memory is running low, this limit is enforced. The value must be smaller than the --memory value
--kernel-memory: the amount of kernel memory the container can use; the minimum value is 4m
--oom-kill-disable: whether the container is killed when OOM occurs. Only disable OOM killing when -m is also set; otherwise the container can exhaust the host's memory and get host applications killed
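
A minimal sketch combining a hard and a soft limit (the nginx image name is an illustrative assumption): the container may use at most 500m, and is squeezed back toward 300m when the host runs low on memory:

[root@xingdian ~]# docker run -d -m 500m --memory-reservation 300m daocloud.io/library/nginx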

Notes on --memory-swap: --memory-swap can only be used when --memory is also set.
If --memory-swap is greater than --memory, the total memory (memory + swap) the container can use is the --memory-swap value, and the usable swap is --memory-swap minus --memory.
If --memory-swap is 0 or equal to --memory, the container can use swap equal to twice the memory limit; if --memory is 200M, the container can use 400M of swap.
If --memory-swap is -1, swap use is unlimited: the container can use as much swap as the host has available.
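
A sketch of memory plus swap (assuming the host has swap and the kernel supports swap limit accounting): 200m of memory plus 100m of swap, so allocating 250M succeeds by spilling into swap:

[root@xingdian ~]# docker run --rm -it -m 200m --memory-swap 300m progrium/stress --vm 1 --vm-bytes 250M --vm-hang 0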
If the container's memory is limited to 64M, allocating 64M runs normally (if host memory is very tight, even this is not guaranteed):

# docker run --rm -it -m 64m progrium/stress --vm 1 --vm-bytes 64M --vm-hang 0
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogvm worker 1 [7] forked
stress: dbug: [7] allocating 67108864 bytes ...
stress: dbug: [7] touching bytes in strides of 4096 bytes ...
stress: dbug: [7] sleeping forever with allocated memory
.....

If the container instead tries to allocate 100M, the process inside it is killed (worker 7 got signal 9; signal 9 is SIGKILL):

# docker run --rm -it -m 64m progrium/stress --vm 1 --vm-bytes 100M --vm-hang 0 
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogvm worker 1 [7] forked
stress: dbug: [7] allocating 104857600 bytes ...
stress: dbug: [7] touching bytes in strides of 4096 bytes ...
stress: FAIL: [1] (415) <-- worker 7 got signal 9
stress: WARN: [1] (417) now reaping child worker processes
stress: FAIL: [1] (421) kill error: No such process
stress: FAIL: [1] (451) failed run completed in 0s
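
For a container started without --rm, you can check afterwards whether it was OOM-killed; a sketch (the container name is a placeholder):

[root@xingdian ~]# docker inspect -f '{{.State.OOMKilled}}' mycontainer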

5. IO resources
For disks, the parameters to be considered are capacity and read and write speed, so the disk restrictions on containers should also be based on these two dimensions. Currently docker supports limiting the read and write speed of the disk, but there is no way to limit the disk capacity that the container can use (once the disk is mounted in the container, the container can use all the disk capacity).
To limit disk read/write rates, docker provides parameters that throttle the device directly:
--device-read-bps: the maximum number of bytes per second that can be read from the device
--device-write-bps: the maximum number of bytes per second that can be written to the device
Both parameters take a device path and the corresponding rate limit; the limit is a positive integer with unit kb, mb, or gb.
For example, you can limit the read rate of the device to 1mb:

# docker run -it --device /dev/sda:/dev/sda --device-read-bps /dev/sda:1mb ubuntu:16.04 bash 

root@6c048edef769:/# cat /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device 
8:0 1048576

root@6c048edef769:/# dd iflag=direct,nonblock if=/dev/sda of=/dev/null bs=5M count=10
10+0 records in
10+0 records out
52428800 bytes (52 MB) copied, 50.0154 s, 1.0 MB/s

It took about 50s to read 50m from the disk, which shows that the disk rate limit has worked.
Two other parameters limit disk read/write frequency (how many I/O operations can be performed per second):
--device-read-iops: the maximum number of read operations per second on the device
--device-write-iops: the maximum number of write operations per second on the device
Both parameters take a device path and the corresponding IOPS upper limit.
For example, you can make the disk read up to 100 times per second:

# docker run -it --device /dev/sda:/dev/sda --device-read-iops /dev/sda:100 ubuntu:16.04 bash
root@2e3026e9ccd2:/# dd iflag=direct,nonblock if=/dev/sda of=/dev/null bs=1k count=1000
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB) copied, 9.9159 s, 103 kB/s

The test shows that with read IOPS limited to 100, reading 1M of data from the block device inside the container (1k per read, 1000 reads) takes about 10s, i.e. 100 IOPS, in line with the expected result.

Two: container port forwarding

Container: 172.16.0.2:5000
client -----> eth0:10.18.45.197 -----> 172.16.0.2:5000

Use port forwarding to solve the problem of accessing container ports from outside.
-p:
When creating an application container, you generally set up port mappings so that the outside can access the applications in these containers. Multiple -p flags can be used to specify multiple port mappings.
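For example, a sketch mapping two ports at once (the nginx image name is an illustrative assumption):

[root@xingdian ~]# docker run -d --name web -p 8080:80 -p 8443:443 daocloud.io/library/nginx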
mysql application port forwarding:

Check the local address:
[root@xingdian ~]# ip a
    ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0a:5b:8b brd ff:ff:ff:ff:ff:ff
    inet 192.168.245.134/24 brd 192.168.245.255 scope global dynamic ens33
       valid_lft 1444sec preferred_lft 1444sec

Run the container, using -p to forward local port 3307 to the container's port 3306 (for other parameters, check the image's published documentation):

[root@xingdian ~]# docker run --name mysql1 -p 3307:3306  -e MYSQL_ROOT_PASSWORD=123 daocloud.io/library/mysql

Access the database in container mysql1 through the local IP 192.168.245.134 on port 3307. If the following prompt appears, it works:

[root@xingdian /]# mysql -u root -p123 -h 192.168.245.134 -P3307
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.18 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> 

Three: Deploy a private registry

Docker Hub officially provides a registry image for building a private registry.
Pull the image:

[root@xingdian ~]# docker pull daocloud.io/library/registry:latest

Run the container:

[root@xingdian ~]# docker run --restart=always -d -p 5000:5000 daocloud.io/library/registry 

Note: if creating the container fails with a firewall error, resolve it as follows:

[root@xingdian ~]# systemctl stop firewalld
[root@xingdian ~]# systemctl restart docker

View the running container:

[root@xingdian ~]# docker ps
CONTAINER ID  IMAGE  COMMAND   CREATED  STATUS    PORTS    NAMES
1f444285bed8        daocloud.io/library/registry   "/entrypoint.sh /etc/"   23 seconds ago      Up 21 seconds       0.0.0.0:5000->5000/tcp   elegant_rosalind

Enter the container to check the port status:

[root@xingdian ~]# docker exec -it  1f444285bed8  /bin/sh      // note: sh here, not bash
/ # netstat -antpl    // check whether port 5000 is listening (inside the container)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 :::5000                 :::*                    LISTEN      1/registry
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path

Check that the private registry is reachable from this machine and that the status code is 200:

[root@xingdian registry]# curl  -I  127.0.0.1:5000
HTTP/1.1 200 OK
Cache-Control: no-cache
Date: Thu, 08 Oct 2020 05:34:32 GMT

For convenience, pull a relatively small image, busybox:

[root@xingdian registry]# docker pull busybox

Before uploading, you must tag the image, indicating the registry IP and port:

[root@xingdian ~]# docker tag busybox <local IP>:<port>/busybox

This image was pulled directly from the official registry, which is very slow:

[root@xingdian ~]# docker tag busybox 192.168.245.136:5000/busybox

The MySQL image below is the second image I tested, pulled from daocloud:

[root@xingdian ~]# docker tag daocloud.io/library/mysql 192.168.245.136:5000/daocloud.io/library/mysql

Note: after docker tag you can use either the image name or the image ID; I used the image name here. Official images need no prefix, but daocloud.io images keep their daocloud.io prefix.
Change the request protocol to http:

The default is https; without this change, the following error is reported:
Get https://master.up.com:5000/v1/_ping: http: server gave HTTP response to HTTPS client
[root@xingdian ~]# vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.245.136:5000"]
}
Restart docker:
[root@xingdian ~]# systemctl restart docker

Upload the image to the private warehouse:

[root@xingdian ~]# docker push 192.168.245.136:5000/busybox
[root@xingdian ~]# docker push 192.168.245.136:5000/daocloud.io/library/mysql

List all images in the private registry:

[root@xingdian ~]# curl 192.168.245.136:5000/v2/_catalog
{"repositories":["busybox"]}

Four: Deploy a CentOS 7 container application

Systemd integration:
systemd requires the CAP_SYS_ADMIN capability, i.e. the ability to read the host's cgroups. In CentOS 7 images, fakesystemd replaced systemd to resolve dependency problems. If you still want to use real systemd, refer to the following Dockerfile:

[root@xingdian ~]# vim Dockerfile
FROM daocloud.io/library/centos:7
MAINTAINER "xingdian"  [email protected]
ENV container docker
RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]

This Dockerfile deletes fakesystemd and installs systemd.
Then build the basic image:

[root@xingdian ~]# docker build --rm -t local/c7-systemd .

To build on the systemd-enabled base image above, create a Dockerfile similar to the following:

[root@xingdian ~]# vim Dockerfile
FROM local/c7-systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]

Build the image:

[root@xingdian ~]# docker build --rm -t local/c7-systemd-httpd .

Run an application container with systemd:
To run a container with systemd, use the --privileged option and mount the host's cgroup directory. An example running the systemd-enabled httpd container:

[root@xingdian ~]# docker run --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd

Note: do not append /bin/bash to the command above. Adding it makes the service unavailable, and some services may hit the insufficient-permission problem mentioned earlier. Without it (and without -d) the container runs in the foreground; use Ctrl+p+q to detach it to the background.
Test availability:

# elinks --dump http://docker       // the Apache default page follows
                Testing 123..
   This page is used to test the proper operation of the [1]Apache HTTP
   server after it has been installed. If you can read this page it means
   that this site is working properly. This server is powered by [2]CentOS.

Five: Fixed container IP

After docker is installed, three network types are created by default: bridge, host, and none.
Show the current networks:

[root@xingdian ~]# docker network list
NETWORK ID          NAME                DRIVER              SCOPE
90b22f633d2f        bridge              bridge              local
e0b365da7fd2        host                host                local
da7b7a090837        none                null                local

bridge: bridged network
Containers are created on this network by default, and IP addresses are assigned in order each time a container starts, so a container's IP changes every time it restarts.
none: no network
Start a container with --net=none and docker will not assign it a LAN IP.
host: host network
The container's network is attached to the host's network; the two are interoperable.
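For instance, a sketch of attaching a container to the host network (the container then shares the host's interfaces directly, so no port mapping is needed):

[root@xingdian ~]# docker run --rm -it --net host centos:6 /bin/bash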
Create a fixed-IP container
1. Create a custom network and specify its subnet:

[root@xingdian ~]# docker network create --subnet=192.168.0.0/16 staticnet

docker network ls now shows the additional staticnet network.
2. Create and start a container using the new network:

[root@xingdian ~]# docker run -it --name userserver --net staticnet --ip 192.168.0.2 centos:6 /bin/bash

docker inspect shows that the container's IP is 192.168.0.2; stop the container and start it again, and the IP stays the same.
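
A sketch of extracting just the IP with an inspect format string (the container name comes from the run command above):

[root@xingdian ~]# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' userserver
192.168.0.2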

Origin blog.csdn.net/kakaops_qing/article/details/109144511