Docker - Cgroup resource constraints

1. Cgroup

(1) Docker uses cgroups to control the resource quotas of containers, covering the three main areas of CPU, memory, and disk, which include the most common resource quotas and usage controls.

(2) Cgroup (control group) is a Linux kernel mechanism that can limit, account for, and isolate the physical resources used by groups of processes.

Cgroup subsystems:

1. blkio: sets limits on input/output for block devices;
2. cpu: uses the CPU scheduler to provide cgroup tasks with access to the CPU;
3. cpuacct: generates reports on CPU resource usage by cgroup tasks;
4. cpuset: on multi-core CPUs, assigns dedicated CPUs and memory nodes to cgroup tasks;
5. devices: allows or denies cgroup tasks access to devices;
6. freezer: suspends and resumes cgroup tasks;
7. memory: sets memory limits for each cgroup and generates memory resource reports;
8. net_cls: tags network packets so that cgroups can identify and classify them;
9. ns: the namespace subsystem;
10. perf_event: adds monitoring and tracing capability, so that all threads belonging to a cgroup, or the threads of a cgroup running on a specific CPU, can be monitored.
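On a host using the common cgroup v1 layout, each of these subsystems is mounted as its own hierarchy; a quick way to check which controllers the kernel actually provides (paths assume the v1 layout and may differ on cgroup v2 hosts):

```shell
# List the mounted cgroup controller hierarchies (cgroup v1 layout assumed)
ls /sys/fs/cgroup
# The kernel's own list of available controllers and their enabled status
cat /proc/cgroups
```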

2. Stress-testing CPU and memory with the stress tool

First, use a Dockerfile to build a CentOS-based image with the stress tool installed:

[root@localhost ~]# mkdir /opt/stress
[root@localhost ~]# cd /opt/stress/

[root@localhost stress]# vim Dockerfile
FROM centos:7
RUN yum install -y wget
RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
RUN yum install -y stress

[root@localhost stress]# docker build -t centos:stress .    // build the image

(1) Create a container with the following command. The --cpu-shares value does not guarantee that the container will get 1 vCPU or any fixed amount of CPU resources; it is only an elastic weight.

[root@localhost stress]# docker run -itd --cpu-shares 100 centos:stress
08a203033c051098fd6294cd8ba4e2fa8baa18cefb793c6c4cd655c0f28cabc0

Note: By default, each container's CPU share is 1024. The share of a single container is meaningless on its own; the effect of CPU weighting only shows when multiple containers run at the same time.
For example, if two containers A and B have CPU shares of 1000 and 500 respectively, then when CPU time slices are allocated, container A has twice the chance of obtaining a time slice compared with container B. However, the outcome depends on the runtime state of the host and of the other containers, so there is no guarantee that container A will actually get more CPU time. If container A's processes stay idle, for instance, container B can get more CPU time than container A. In the extreme case where only one container runs on the host, even with a CPU share of only 50 it can monopolize the entire host's CPU resources.

In other words, cgroups only take effect when the allocated resource is scarce, that is, when usage actually needs to be restricted. Therefore, you cannot determine how much CPU a container will be allocated from its share alone; the result depends on the overall CPU allocation among the other containers running concurrently and on the processes running inside them. In effect, --cpu-shares sets a container's priority for CPU time.
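As a rough sketch of the weighting, when two containers are both fully busy the CPU time splits in proportion to their share values; using the 512 and 1024 shares from the example below:

```shell
# Under full contention, each container's CPU fraction is its own share
# divided by the sum of all shares (integer percentages shown).
a=512; b=1024
echo "cpu512 gets $(( a * 100 / (a + b) ))%"    # prints: cpu512 gets 33%
echo "cpu1024 gets $(( b * 100 / (a + b) ))%"   # prints: cpu1024 gets 66%
```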

For example, start two containers and observe the CPU usage percentage of each:
1.

// the container spawns 10 worker processes:
[root@localhost stress]# docker run -itd --name cpu512 --cpu-shares 512 centos:stress stress -c 10
99086cce962308fdb5417df189571e39f375ab2c067887cbac48e773225f25c7

// enter the container and check CPU usage with top:
[root@localhost stress]# docker exec -it 99086cce9623 bash
[root@99086cce9623 /]# top
.. ..    // top output omitted; press q to quit
[root@99086cce9623 /]# exit        // exit the container

2. Now open another container for comparison:

[root@localhost stress]# docker run -itd --name cpu1024 --cpu-shares 1024 centos:stress stress -c 10
81e29988fce779c6b3e10fb8570ae2358db4090e1987202bb7919260287eca66

[root@localhost stress]# docker exec -it 81e29988fce7 bash
[root@81e29988fce7 /]# top
..
..
..

Entering both containers and observing the %CPU column in top, you should find the ratio is roughly 1:2.

3. CPU period limits:

Docker provides two parameters, --cpu-period and --cpu-quota, to control the CPU clock cycles that can be allocated to a container.

--cpu-period: specifies how often the container's CPU allocation is redistributed (the scheduling period).
--cpu-quota: specifies the maximum time within that period that the container is allowed to run.
Unlike --cpu-shares, this configuration sets an absolute value: the container's CPU usage will never exceed the configured quota.
Note:
the units of cpu-period and cpu-quota are microseconds;
the minimum value of cpu-period is 1000 microseconds, the maximum is 1 second, and the default is 0.1 seconds;
cpu-quota defaults to -1, meaning no limit;
cpu-period and cpu-quota are generally used together.

For example:
If a container process needs to use 0.2 seconds of a single CPU every second, set cpu-period to 1000000 (1 second) and cpu-quota to 200000 (0.2 seconds). On a multi-core machine, to allow the container process to fully occupy two CPUs, set cpu-period to 100000 (0.1 seconds) and cpu-quota to 200000 (0.2 seconds).
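The ratio quota/period gives the number of full CPUs the container may use; a quick check of the arithmetic for the multi-core case above:

```shell
# Effective CPU count = cpu-quota / cpu-period (both in microseconds)
period=100000   # 0.1 s
quota=200000    # 0.2 s
echo "$(( quota / period )) full CPUs"   # prints: 2 full CPUs
```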

[root@localhost stress]# docker run -itd --cpu-period 100000 --cpu-quota 200000 centos:stress
3f2a577cf6a281347338cbf9734440b3b8a29e771dc4890a9f243eb0773f6c09

[root@localhost stress]# docker exec -it 3f2a577cf6a2 bash

[root@3f2a577cf6a2 /]# cat /sys/fs/cgroup/cpu/cpu.cfs_period_us 
100000
[root@3f2a577cf6a2 /]# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us  
200000


4. CPU core control:

On servers with multi-core CPUs, Docker can also control which CPU cores a container runs on, via the --cpuset-cpus parameter. This is especially useful on multi-CPU servers, where containers that need high-performance computing can be pinned for optimal performance.

[root@localhost ~]# docker run -itd --name cpu02 --cpuset-cpus=0-2 centos:stress
76994f5d310de48ee635f69270f7c9b4cba1e65aad935ff1e0d6e098441104eb
// this command (the host needs at least four cores) creates a container that can only use cores 0, 1 and 2.

[root@localhost ~]# docker exec -it 76994f5d310d bash    // enter the container
[root@76994f5d310d /]# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-2

The following command shows the binding between the container's processes and the CPU cores, confirming that the CPU pinning is in effect:

[root@localhost ~]# docker exec 76994f5d310d taskset -c -p 1
pid 1's current affinity list: 0-2
// PID 1, the first process inside the container, is bound to the specified CPUs.


5. Combining the CPU quota control parameters:

If --cpuset-cpus pins container A to CPU core 0 and container B to core 1, and these are the only containers using those cores, each fully occupies its own core and --cpu-shares has no visible effect. The cpuset-cpus and cpuset-mems parameters only work on servers with multiple cores or multiple memory nodes, and must match the actual physical configuration, otherwise resource control will not achieve its purpose. On a system with multiple CPU cores, pinning containers to specific cores with --cpuset-cpus makes the effect easy to test.

[root@localhost ~]# docker run -itd --name cpu3 --cpuset-cpus 1 --cpu-shares 512 centos:stress stress -c 1
d6e122af832297a05b6993ea3146a2a62969557989933ac9f1bf59f2a1de5c50

[root@localhost ~]# docker exec -it d6e122af8322 bash
[root@d6e122af8322 /]# top  // once top is running, press 1 to see per-core usage

Then create another container:

[root@localhost ~]# docker run -itd --name cpu4 --cpuset-cpus 3 --cpu-shares 1024 centos:stress stress -c 1
d375a1ba761a711d55a01d95c7a5d494e62f86d447d36422be666cacf6483ca1

[root@localhost ~]# docker exec -it d375a1ba761a bash
[root@d375a1ba761a /]# top


6. Memory limits:

As with the operating system itself, the memory a container can use consists of two parts: physical memory and swap.

Docker controls a container's memory usage with the following two parameters:

-m or --memory: sets the memory usage limit, e.g. 100M, 1024M;
--memory-swap: sets the limit for memory + swap combined.
For example, the following command allows the container at most 200M of memory and 300M of memory + swap in total (i.e., 100M of actual swap):

[root@localhost ~]# docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 280M
// --vm 1: start one memory-allocating worker thread;
   --vm-bytes 280M: each worker allocates 280M of memory

If a worker is told to allocate more than the 300M total limit, the allocation exceeds the quota, the stress worker reports an error, and the container exits:
[root@localhost ~]# docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 310M
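The arithmetic behind these two runs: since --memory-swap is the total of memory plus swap, the container actually has 100M of swap, so 280M fits within the 300M total while 310M does not. A small sketch of the calculation:

```shell
# --memory-swap is memory + swap combined, so the actual swap is the difference
mem=200; memswap=300                  # MB, as in the runs above
swap=$(( memswap - mem ))
echo "swap available: ${swap}M"       # prints: swap available: 100M
# 280M <= 300M total -> allocation succeeds; 310M > 300M total -> worker fails
```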


7. Block IO limits:

By default, all containers read and write the disk with equal priority; the --blkio-weight parameter changes a container's block IO priority.

--blkio-weight is similar to --cpu-shares: it sets a relative weight, with a default of 500.
In the following example, container_A gets twice the disk read/write bandwidth of container_B:

[root@localhost ~]# docker run -it --name container_A --blkio-weight 600 centos:stress
[root@0f9b8d716206 /]# cat /sys/fs/cgroup/blkio/blkio.weight

[root@localhost ~]# docker run -it --name container_B --blkio-weight 300 centos:stress
[root@55bdce1cab5d /]# cat /sys/fs/cgroup/blkio/blkio.weight


8. bps and iops limits:

(1) bps: bytes per second, the amount of data read or written per second;

(2) iops: IO operations per second, the number of IO operations per second;

(3) bps and iops can be controlled with the following parameters:

--device-read-bps: limit the read bps of a device;
--device-write-bps: limit the write bps of a device;
--device-read-iops: limit the read iops of a device;
--device-write-iops: limit the write iops of a device.

For example, to limit the container's write rate to the /dev/sda disk to 5 MB/s:

[root@localhost ~]# docker run -it --device-write-bps /dev/sda:5MB centos:stress 
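To confirm the cap takes effect, one can run a direct-I/O write test inside the rate-limited container (the file path below is illustrative). With oflag=direct the page cache is bypassed, so the throughput dd reports should stay close to the 5 MB/s limit:

```shell
# Inside the container: write 50 MB with O_DIRECT and check the reported rate
dd if=/dev/zero of=/tmp/ddtest bs=1M count=50 oflag=direct
# dd's summary line should show a transfer rate of roughly 5.0 MB/s
```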



Origin blog.51cto.com/14475593/2468078