Docker resource constraints

By default, a container has no resource restrictions and can use as much of each resource as the host's kernel scheduler allows. Docker provides parameters at container startup to limit the container's use of memory, CPU, and block IO.

In practice, you can only control memory and CPU.

RAM

Memory is an incompressible resource.

OOME

On a Linux system, if the kernel detects that the host does not have enough memory to perform some important system function, it raises an OOME (Out Of Memory Exception) and kills certain processes to free memory.
Once an OOME occurs, any process can be killed, the docker daemon included. For this reason Docker specifically adjusts the OOM priority of the docker daemon so that it will not be killed, but the priority of containers is not adjusted. When memory runs out, the kernel computes a score for each process according to its own algorithm and then kills the process with the highest score to free memory.
You can pass --oom-score-adj (default 0) to docker run. It adjusts the container's kill priority by shifting its score: the larger the value, the more likely the container is to be killed. The parameter only influences the final score; the kernel still kills the process whose score is highest, so a process with a small adjustment can still end up with the highest score and be killed.
You can pass --oom-kill-disable=true for particularly important containers to forbid them from being OOM-killed.

--oom-kill-disable               Disable OOM Killer
--oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
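The selection rule described above can be sketched as a tiny model. This is only an illustration of "adjustment shifts the score, highest final score gets killed"; the real kernel badness heuristic is considerably more involved, and the process names and scores here are made up:

```python
# Simplified model of OOM victim selection: each process has a base score
# plus an oom_score_adj; the process with the highest final score is killed.
def pick_oom_victim(processes):
    """processes: list of (name, base_score, oom_score_adj) tuples.
    A negative adjustment protects a process but does not guarantee
    survival if its score is still the highest."""
    scored = [(base + adj, name) for name, base, adj in processes]
    return max(scored)[1]

victim = pick_oom_victim([
    ("dockerd", 600, -500),   # daemon protected by a negative adjustment
    ("stress", 500, 0),       # ordinary container process
    ("big-hog", 900, -200),   # 700 after adjustment: still the highest
])
print(victim)  # big-hog
```

Note how "big-hog" is killed despite its negative adjustment, matching the caveat above that the adjustment only influences the final score.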

Memory Limit

Option descriptions:

  • -m, --memory: memory limit, a number with an optional unit suffix of b, k, m, or g. Minimum 4M.
  • --memory-swap: limit on total memory + swap. Same format as above. Must be greater than the -m setting.
  • --memory-swappiness: by default the host may swap out a container's anonymous pages; set a value between 0 and 100 to control what proportion may be swapped out.
  • --memory-reservation: set a soft memory limit, enforced when Docker detects memory contention on the host. Must be smaller than the --memory setting.
  • --kernel-memory: kernel memory the container may use; minimum 4m.
  • --oom-kill-disable: do not OOM-kill the container when an OOM occurs. Only set this together with -m; otherwise the container can exhaust host memory and cause host applications to be killed.
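The size format accepted by -m can be sketched as a small parser. This is a hypothetical helper written for illustration, not Docker's actual parsing code; it only mirrors the suffix rules and the 4M minimum described above:

```python
# Hypothetical parser for -m/--memory style values like "256m".
# Suffixes: b, k, m, g; Docker rejects limits below 4M.
UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_mem(value: str) -> int:
    unit = value[-1].lower()
    if unit in UNITS:
        n = int(value[:-1]) * UNITS[unit]
    else:
        n = int(value)  # bare number of bytes
    if n < 4 * 1024**2:
        raise ValueError("minimum memory limit is 4M")
    return n

print(parse_mem("256m"))  # 268435456
```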

The --memory-swap parameter
This parameter only takes effect in conjunction with -m; the description in the table above is brief, so here are the common usages.
Typical usage: set it larger than -m to limit the total memory + swap size.
Disable swap: set it equal to -m; since --memory and --memory-swap then limit the same amount, the available swap is 0, which effectively disables swap.
Default: set it to 0 or leave it unset; if the host (Docker Host) has swap enabled, the container's swap limit is twice the memory limit.
Unlimited: set it to -1; if the host (Docker Host) has swap enabled, the container may use all of the host's swap.
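The four cases above can be summarized as a small function. This is a sketch of the rules as described in the text (values in bytes, returning the swap available to the container), not Docker's actual implementation:

```python
# Swap available to a container given -m (memory), --memory-swap, and
# host swap, per the four --memory-swap cases described above.
def swap_limit(memory, memory_swap, host_swap):
    if memory_swap == -1:        # unlimited: all of the host's swap
        return host_swap
    if memory_swap == 0:         # default/unset: total limit is 2x memory,
        return memory            # so available swap equals the memory limit
    return memory_swap - memory  # explicit total minus the memory limit

M = 256 * 1024**2
print(swap_limit(M, 0, 2 * 1024**3))  # 268435456 (same as -m)
print(swap_limit(M, M, 2 * 1024**3))  # 0 (swap disabled)
```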

Running the free command inside a container shows swap values that do not reflect the restrictions above, so its output has no reference value.

CPU

CPU is a compressible resource.

By default, each container may use all of the CPU resources on the host. Most systems schedule with CFS (Completely Fair Scheduler), which schedules every running process fairly. Processes fall into 2 categories: CPU-intensive (low priority) and IO-intensive (high priority). The kernel monitors processes in real time; when a process occupies the CPU for a long time, the kernel adjusts its priority.
Since 1.13, docker also supports realtime scheduling.

There are three CPU resource allocation strategies:

  1. Allocate proportionally, compressing shares under contention
  2. Limit to at most N cores
  3. Restrict to specific cores only
Option descriptions:
  • -c, --cpu-shares int: when containers compete for CPU, each container uses CPU in proportion to its share of the group total; when a container is idle, its CPU is taken by loaded containers (proportional compression), and when the idle container becomes busy again, CPU is reallocated to it
  • --cpus decimal: the number of CPU cores, which directly defines the CPU resources available to the container
  • --cpuset-cpus string: restrict the container to run only on specific CPU cores (CPU binding); cores are numbered 0,1,2,3

CPU Share

Docker sets a container's CPU share with the -c, --cpu-shares parameter, whose value is an integer.

Docker allows each container to be given a number representing its CPU share; the default share per container is 1024. When multiple containers run on one host, each container's proportion of CPU time is its share divided by the total. For example, if two containers on the host are both constantly using the CPU (to simplify, ignore other processes on the host) and both have CPU share 1024, then both containers use 50% of the CPU; if one container's share is set to 512, the two containers' CPU utilization becomes 67% and 33%, respectively; and if the 1024-share container is then deleted, the remaining container's CPU usage rises to 100%.

To summarize: docker dynamically adjusts each container's proportion of CPU time according to the containers and processes running on the host. The advantage is that the CPU is kept as busy as possible, making full use of CPU resources while remaining relatively fair to all containers; the drawback is that you cannot specify a definite amount of CPU for a container.
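The proportional arithmetic above can be checked with a few lines. This is just the share-to-percentage math for fully loaded containers, as described in the example:

```python
# CPU time split under --cpu-shares when every container is busy.
# Shares are relative weights; idle containers cede their portion.
def cpu_split(shares):
    total = sum(shares)
    return [round(100 * s / total, 1) for s in shares]

print(cpu_split([1024, 1024]))  # [50.0, 50.0]
print(cpu_split([1024, 512]))   # [66.7, 33.3]
print(cpu_split([1024]))        # [100.0]  (the other container deleted)
```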

The number of CPU cores

Since version 1.13, docker provides the --cpus parameter to define the number of CPU cores a container may use. This feature lets us set a container's CPU usage more precisely, and it is easier to understand and therefore more commonly used.

--cpus is followed by a float representing the maximum number of cores the container may use. It can be precise to two decimal places, i.e. the container can use as little as 0.01 of a CPU core. For example, we can limit a container to use only 1.5 CPU cores.

If the --cpus value is larger than the number of host CPU cores, docker reports an error directly.
If multiple containers set --cpus and their sum exceeds the host's number of CPU cores, the containers will not fail or exit; instead they compete for CPU, and the actual allocation depends on the host's core count and each container's CPU share. In other words, --cpus only guarantees how much CPU the container can use when CPU resources are sufficient; docker cannot guarantee the container gets that much CPU in every case (because that is simply impossible).

Specifying CPU cores

Docker allows you to define at run time which CPUs a container is scheduled on. The --cpuset-cpus parameter makes a container run only on one or a few specified cores.

--cpuset-cpus can be combined with -c, --cpu-shares, restricting the container to certain CPU cores while also configuring its proportional share on them.

Restricting which cores a container runs on is not good practice, because it requires knowing in advance how many CPU cores the host has, and it is very inflexible. Unless there is a special need, it is generally not recommended in production.

Other CPU parameters

Option descriptions:
  • --cpu-period int: the CFS scheduling period, usually used together with --cpu-quota. The default period is 100 ms (100000, expressed in microseconds) and is normally left at its default. On 1.13 or later the --cpus flag is recommended instead.
  • --cpu-quota int: the container's CPU time quota within one CFS scheduling period, i.e. the CPU time (in microseconds) available per --cpu-period; the effective core count is cpu-quota / cpu-period. On 1.13 or later the --cpus flag is recommended instead.
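The relationship between --cpus and the two CFS flags is simple arithmetic, sketched here with Docker's default 100000 µs period:

```python
# --cpus is shorthand for a CFS quota: cpus = cpu-quota / cpu-period.
PERIOD = 100_000  # Docker's default --cpu-period in microseconds (100 ms)

def quota_for(cpus: float) -> int:
    """CFS quota (microseconds per period) equivalent to a --cpus value."""
    return int(cpus * PERIOD)

print(quota_for(0.5))  # 50000  -> --cpu-period 100000 --cpu-quota 50000
print(quota_for(1.5))  # 150000
```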

Stress testing

Resource limit demonstration

Querying host resources

Using the lscpu and free commands:

[root@Docker ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Stepping:              3
CPU MHz:               3999.996
BogoMIPS:              7999.99
Hypervisor vendor:     Microsoft
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm ssbd ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt spec_ctrl intel_stibp flush_l1d
[root@Docker ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           936M        260M        340M        6.7M        334M        592M
Swap:          1.6G          0B        1.6G
[root@Docker ~]# 

Downloading the image

A stress-testing image can be found by searching for "stress" on Docker Hub.
Download the image and run it to view the help:

[root@Docker ~]# docker pull lorel/docker-stress-ng
[root@Docker ~]# docker run -it --rm lorel/docker-stress-ng
stress-ng, version 0.03.11

Usage: stress-ng [OPTION [ARG]]
 --h,  --help             show help
...... (output omitted) ......
Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s

Note: Sizes can be suffixed with B,K,M,G and times with s,m,h,d,y
[root@Docker ~]# 

The main command parameters:

  • --h, --help: show help (this is the container's default startup command)
  • -c N, --cpu N: start N child processes to stress the CPU
  • -m N, --vm N: start N child processes to stress memory
  • --vm-bytes N: how much memory each child process uses (default 256MB)

Testing the memory limit

View the memory-related parameters of lorel/docker-stress-ng:

 -m N, --vm N             start N workers spinning on anonymous mmap
       --vm-bytes N       allocate N bytes per vm worker (default 256MB)

Each worker uses 256MB of memory by default, which we keep. Then specify --vm to start two workers, limit the container's memory to 256MB, and start the container:

[root@Docker ~]# docker run --name stress1 -it --rm -m 256m lorel/docker-stress-ng --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

This terminal is now occupied, so from another terminal use the docker top command to view the processes running inside the container:

[root@Docker ~]# docker top stress1
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                5922                5907                0                   21:06               pts/0               00:00:00            /usr/bin/stress-ng --vm 2
root                6044                5922                0                   21:06               pts/0               00:00:00            /usr/bin/stress-ng --vm 2
root                6045                5922                0                   21:06               pts/0               00:00:00            /usr/bin/stress-ng --vm 2
root                6086                6044                13                  21:06               pts/0               00:00:00            /usr/bin/stress-ng --vm 2
root                6097                6045                47                  21:06               pts/0               00:00:00            /usr/bin/stress-ng --vm 2
[root@Docker ~]# 

Look at the PID and PPID columns: there are five processes in total; the parent process created two child processes, and each of those created one more process.

You can also use the docker stats command to view the container's resource usage in real time:

$ docker stats
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
626f38c4a4ad        stress1             18.23%              256MiB / 256MiB     100.00%             656B / 0B           17.7MB / 9.42GB     5

The display refreshes in real time.

Testing the CPU limit

To limit the container to at most 2 CPU cores while starting 8 stress workers at once, use the following command:

docker run -it --rm --cpus 2 lorel/docker-stress-ng --cpu 8

Limit the container to 0.5 cores and start 4 CPU stress workers:

[root@Docker ~]# docker run --name stress2 -it --rm --cpus 0.5 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu

From another terminal, use the docker top command to view the processes running inside the container:

[root@Docker ~]# docker top stress2
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                7198                7184                0                   22:35               pts/0               00:00:00            /usr/bin/stress-ng --cpu 4
root                7230                7198                12                  22:35               pts/0               00:00:02            /usr/bin/stress-ng --cpu 4
root                7231                7198                12                  22:35               pts/0               00:00:02            /usr/bin/stress-ng --cpu 4
root                7232                7198                12                  22:35               pts/0               00:00:02            /usr/bin/stress-ng --cpu 4
root                7233                7198                12                  22:35               pts/0               00:00:02            /usr/bin/stress-ng --cpu 4
[root@Docker ~]# 

One parent process created four child processes.
Then use the docker stats command to view resource usage:

$ docker stats
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
14a341dd23d1        stress2             50.02%              13.75MiB / 908.2MiB   1.51%               656B / 0B           0B / 0B             5

Because of the 0.5-core limit, CPU usage basically does not exceed 50%.

Testing CPU shares

Start three containers, each with a different --cpu-shares value (when not specified, the default is 1024):

[root@Docker ~]# docker run --name stress3.1 -itd --rm --cpu-shares 512 lorel/docker-stress-ng --cpu 4
800d756f76ca4cf20af9fa726349f25e29bc57028e3a1cb738906a68a87dcec4
[root@Docker ~]# docker run --name stress3.2 -itd --rm lorel/docker-stress-ng --cpu 4
4b88007191812b239592373f7de837c25f795877d314ae57943b5410074c6049
[root@Docker ~]# docker run --name stress3.3 -itd --rm --cpu-shares 2048 lorel/docker-stress-ng --cpu 4
8f103395b6ac93d337594fdd1db289b6462e01c3a208dcd3788332458ec03b98
[root@Docker ~]#

View the CPU utilization of the three containers:

$ docker stats
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
800d756f76ca        stress3.1           14.18%              14.53MiB / 908.2MiB   1.60%               656B / 0B           0B / 0B             5
4b8800719181        stress3.2           28.60%              15.78MiB / 908.2MiB   1.74%               656B / 0B           0B / 0B             5
8f103395b6ac        stress3.3           56.84%              15.38MiB / 908.2MiB   1.69%               656B / 0B           0B / 0B             5

The usage ratio is basically 1:2:4, in line with expectations.
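The expected split for shares of 512, 1024, and 2048 on one fully loaded core works out as follows, which lines up with the measured 14.18% / 28.60% / 56.84% above:

```python
# Expected CPU split for --cpu-shares 512 : 1024 : 2048 on one busy core.
shares = [512, 1024, 2048]
total = sum(shares)
print([round(100 * s / total, 2) for s in shares])  # [14.29, 28.57, 57.14]
```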


Origin blog.51cto.com/steed/2426545