Managing Docker resources with cgroups --- memory limits, CPU share limits, write-speed limits, adding an IP address to a container, and granting a container permission to view partitions

Part 1: Docker and cgroups

Docker uses cgroups for resource management and limiting, covering devices, memory, CPU, block I/O, and more.

Resource quotas (cgroups)
cgroups implement resource quotas and accounting. They are easy to use and expose a file-like interface: creating a directory under /sys/fs/cgroup/<subsystem> creates a new group, and writing a PID into that group's tasks file places the process under the group's control. The individual settings live in files named {subsystem prefix}.{resource}; for example, memory.limit_in_bytes defines the group's memory limit within the memory subsystem. Subsystems can also be combined freely: one subsystem hierarchy can hold many groups, and a single group can be governed by several subsystems.
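As a minimal sketch of the raw interface (the group name x2 and PID 1234 are hypothetical, used only for illustration):

[root@server1 ~]# mkdir /sys/fs/cgroup/memory/x2                                     # creating a directory creates a new group
[root@server1 ~]# echo 104857600 > /sys/fs/cgroup/memory/x2/memory.limit_in_bytes    # 100 MB memory limit for the group
[root@server1 ~]# echo 1234 > /sys/fs/cgroup/memory/x2/tasks                         # place PID 1234 under the group's control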

(1): Memory-related limits

1: The cgroup directory layout
[root@server1 ~]# cd /sys/fs/cgroup/
[root@server1 cgroup]# ls
blkio    cpu,cpuacct  freezer  net_cls           perf_event
cpu      cpuset       hugetlb  net_cls,net_prio  pids
cpuacct  devices      memory   net_prio          systemd
[root@server1 cgroup]# cd memory/
[root@server1 memory]# ls
cgroup.clone_children               memory.memsw.failcnt
cgroup.event_control                memory.memsw.limit_in_bytes
cgroup.procs                        memory.memsw.max_usage_in_bytes
cgroup.sane_behavior                memory.memsw.usage_in_bytes
memory.failcnt                      memory.move_charge_at_immigrate
memory.force_empty                  memory.numa_stat
memory.kmem.failcnt                 memory.oom_control
memory.kmem.limit_in_bytes          memory.pressure_level
memory.kmem.max_usage_in_bytes      memory.soft_limit_in_bytes
memory.kmem.slabinfo                memory.stat
memory.kmem.tcp.failcnt             memory.swappiness
memory.kmem.tcp.limit_in_bytes      memory.usage_in_bytes
memory.kmem.tcp.max_usage_in_bytes  memory.use_hierarchy
memory.kmem.tcp.usage_in_bytes      notify_on_release
memory.kmem.usage_in_bytes          release_agent
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes

2: Create a directory x1; x1 inherits the files of the memory directory
[root@server1 memory]# mkdir x1
[root@server1 memory]# cd x1/
[root@server1 x1]# ls
cgroup.clone_children               memory.memsw.failcnt
cgroup.event_control                memory.memsw.limit_in_bytes
cgroup.procs                        memory.memsw.max_usage_in_bytes
memory.failcnt                      memory.memsw.usage_in_bytes
memory.force_empty                  memory.move_charge_at_immigrate
memory.kmem.failcnt                 memory.numa_stat
memory.kmem.limit_in_bytes          memory.oom_control
memory.kmem.max_usage_in_bytes      memory.pressure_level
memory.kmem.slabinfo                memory.soft_limit_in_bytes
memory.kmem.tcp.failcnt             memory.stat
memory.kmem.tcp.limit_in_bytes      memory.swappiness
memory.kmem.tcp.max_usage_in_bytes  memory.usage_in_bytes
memory.kmem.tcp.usage_in_bytes      memory.use_hierarchy
memory.kmem.usage_in_bytes          notify_on_release
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes

3: Limit memory to 200 MB by writing the value into the file
[root@server1 x1]# echo 209715200 > memory.limit_in_bytes 
[root@server1 x1]# cat memory.limit_in_bytes 
209715200
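The value 209715200 is simply 200 MiB expressed in bytes, which shell arithmetic makes explicit:

[root@server1 x1]# echo $(( 200 * 1024 * 1024 ))
209715200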

4: Check the mounts with df
[root@server1 ~]# df
Filesystem            1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root  17811456 5042236  12769220  29% /
devtmpfs                 497292       0    497292   0% /dev
tmpfs                    508264       0    508264   0% /dev/shm
tmpfs                    508264    6752    501512   2% /run
tmpfs                    508264       0    508264   0% /sys/fs/cgroup
/dev/sda1               1038336  142824    895512  14% /boot
tmpfs                    101656       0    101656   0% /run/user/0

5: Change to the /dev/shm directory
[root@server1 shm]# pwd
/dev/shm

6: Check free memory with free -m; here 546 MB is free
[root@server1 shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            992         116         546           6         329         713


7: Write 100 MB of data; free memory drops to about 444 MB, roughly 100 MB less
[root@server1 shm]# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0356392 s, 2.9 GB/s
[root@server1 shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            992         116         444         106         431         613

Earlier memory.limit_in_bytes was set to 200 MB. If more than 200 MB is written, the kernel starts using swap. To keep swap out of the picture, both limit files in x1 must be set.

8: To cap total memory usage, the two limits must be set to the same value
[root@server1 x1]# cat memory.memsw.limit_in_bytes 
9223372036854771712
[root@server1 x1]# cat memory.limit_in_bytes 
209715200

### Apply the limit
[root@server1 x1]# echo 209715200 >  memory.memsw.limit_in_bytes 
[root@server1 x1]# cat  memory.memsw.limit_in_bytes 
209715200

9: Now generating a 300 MB file fails with Killed
[root@server1 shm]# cat /sys/fs/cgroup/memory/x1/memory.limit_in_bytes 
209715200
[root@server1 shm]# cat /sys/fs/cgroup/memory/x1/memory.memsw.limit_in_bytes 
209715200

[root@server1 shm]# cgexec -g memory:x1 dd if=/dev/zero of=bigfile bs=1M count=300
Killed
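After the dd is killed, the group's counters record what happened. A hedged check, using files already listed in the x1 directory:

[root@server1 shm]# cat /sys/fs/cgroup/memory/x1/memory.failcnt              # how many times the limit was hit
[root@server1 shm]# cat /sys/fs/cgroup/memory/x1/memory.max_usage_in_bytes   # peak usage recorded for the group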

10: The same memory limit can be set when creating a container
[root@server1 ~]# docker run --memory 209715200 --memory-swap 209715200 -it --name vm1 ubuntu
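A quick way to verify the limit from the host, as a hedged check, is docker stats:

[root@server1 ~]# docker stats --no-stream vm1      # the MEM USAGE / LIMIT column should show roughly 200MiB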

11: Inspect the corresponding cgroup files
[root@foundation60 Desktop]# ssh [email protected]
[email protected]'s password: 
Last login: Mon Mar 25 17:08:34 2019 from foundation60.ilt.exmaple.com
[root@server1 ~]# 
[root@server1 ~]# cd /sys/fs/cgroup/memory/
[root@server1 memory]# ls
cgroup.clone_children               memory.memsw.failcnt
cgroup.event_control                memory.memsw.limit_in_bytes
cgroup.procs                        memory.memsw.max_usage_in_bytes
cgroup.sane_behavior                memory.memsw.usage_in_bytes
docker                              memory.move_charge_at_immigrate
kubepods                            memory.numa_stat
memory.failcnt                      memory.oom_control
memory.force_empty                  memory.pressure_level
memory.kmem.failcnt                 memory.soft_limit_in_bytes
memory.kmem.limit_in_bytes          memory.stat
memory.kmem.max_usage_in_bytes      memory.swappiness
memory.kmem.slabinfo                memory.usage_in_bytes
memory.kmem.tcp.failcnt             memory.use_hierarchy
memory.kmem.tcp.limit_in_bytes      notify_on_release
memory.kmem.tcp.max_usage_in_bytes  release_agent
memory.kmem.tcp.usage_in_bytes      system.slice
memory.kmem.usage_in_bytes          tasks
memory.limit_in_bytes               user.slice
memory.max_usage_in_bytes           x1
[root@server1 memory]# cd docker/
[root@server1 docker]# ls
cgroup.clone_children
cgroup.event_control
cgroup.procs
d45b9583d9a2ac4228a96ad0671167382726b5e9256d5e0f8fa3ffc40ba7d031
memory.failcnt
memory.force_empty
memory.kmem.failcnt
memory.kmem.limit_in_bytes
memory.kmem.max_usage_in_bytes
memory.kmem.slabinfo
memory.kmem.tcp.failcnt
memory.kmem.tcp.limit_in_bytes
memory.kmem.tcp.max_usage_in_bytes
memory.kmem.tcp.usage_in_bytes
memory.kmem.usage_in_bytes
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.move_charge_at_immigrate
memory.numa_stat
memory.oom_control
memory.pressure_level
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
notify_on_release
tasks

[root@server1 docker]# cd d45b9583d9a2ac4228a96ad0671167382726b5e9256d5e0f8fa3ffc40ba7d031/

[root@server1 d45b9583d9a2ac4228a96ad0671167382726b5e9256d5e0f8fa3ffc40ba7d031]# cat memory.limit_in_bytes 
209715200
[root@server1 d45b9583d9a2ac4228a96ad0671167382726b5e9256d5e0f8fa3ffc40ba7d031]# cat memory.memsw.limit_in_bytes 
209715200

[root@server1 d45b9583d9a2ac4228a96ad0671167382726b5e9256d5e0f8fa3ffc40ba7d031]# 


(2): Limiting CPU share

Within cgroups, the CPU cannot be capped the way hardware virtualization defines CPU capacity, but the scheduling priority can be: a cgroup with a higher CPU priority is more likely to get CPU time. Writing a value into cpu.shares sets that cgroup's CPU priority; it is a relative weight, not an absolute value.
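As a sketch of the relative weight (the group x2 and the value are illustrative): two busy groups with shares of 1024 and 512 receive CPU time in roughly a 2:1 ratio.

[root@foundation60 cpu]# mkdir x2
[root@foundation60 cpu]# echo 512 > x2/cpu.shares      # half of the default weight of 1024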


 

1: The CPU-related cgroup files
[root@foundation60 kiosk]# cd /sys/fs/cgroup/
[root@foundation60 cgroup]# ll
total 0
drwxr-xr-x 6 root root  0 3月  26 2019 blkio
lrwxrwxrwx 1 root root 11 3月  26 2019 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 3月  26 2019 cpuacct -> cpu,cpuacct
drwxr-xr-x 6 root root  0 3月  26 2019 cpu,cpuacct
drwxr-xr-x 4 root root  0 3月  26 2019 cpuset
drwxr-xr-x 6 root root  0 3月  26 2019 devices
drwxr-xr-x 4 root root  0 3月  26 2019 freezer
drwxr-xr-x 3 root root  0 3月  26 2019 hugetlb
drwxr-xr-x 6 root root  0 3月  26 2019 memory
lrwxrwxrwx 1 root root 16 3月  26 2019 net_cls -> net_cls,net_prio
drwxr-xr-x 4 root root  0 3月  26 2019 net_cls,net_prio
lrwxrwxrwx 1 root root 16 3月  26 2019 net_prio -> net_cls,net_prio
drwxr-xr-x 4 root root  0 3月  26 2019 perf_event
drwxr-xr-x 3 root root  0 3月  26 2019 pids
drwxr-xr-x 6 root root  0 3月  26 2019 systemd
[root@foundation60 cgroup]# cd cpu
[root@foundation60 cpu]# ls
cgroup.clone_children  cpu.cfs_period_us  machine.slice
cgroup.event_control   cpu.cfs_quota_us   notify_on_release
cgroup.procs           cpu.rt_period_us   release_agent
cgroup.sane_behavior   cpu.rt_runtime_us  system.slice
cpuacct.stat           cpu.shares         tasks
cpuacct.usage          cpu.stat           user.slice
cpuacct.usage_percpu   docker


2: Create an x1 directory; again it inherits the files of its parent
[root@foundation60 cpu]# mkdir x1
[root@foundation60 cpu]# ls
cgroup.clone_children  cpuacct.usage_percpu  cpu.stat           tasks
cgroup.event_control   cpu.cfs_period_us     docker             user.slice
cgroup.procs           cpu.cfs_quota_us      machine.slice      x1
cgroup.sane_behavior   cpu.rt_period_us      notify_on_release
cpuacct.stat           cpu.rt_runtime_us     release_agent
cpuacct.usage          cpu.shares            system.slice
[root@foundation60 cpu]# cd x1/
[root@foundation60 x1]# ls
cgroup.clone_children  cpuacct.usage_percpu  cpu.shares
cgroup.event_control   cpu.cfs_period_us     cpu.stat
cgroup.procs           cpu.cfs_quota_us      notify_on_release
cpuacct.stat           cpu.rt_period_us      tasks
cpuacct.usage          cpu.rt_runtime_us



3: Check what the relevant parameters mean
[root@foundation60 x1]# cat cpu.cfs_period_us 
100000
[root@foundation60 x1]# cat cpu.cfs_quota_us 
-1

First enter the cpu subsystem's hierarchy: cd /sys/fs/cgroup/cpu.
Creating a directory there creates a CPU control group, which is exactly what the mkdir x1 above did. The new directory is automatically populated with pseudo-files; this test mainly uses cpu.cfs_period_us and cpu.cfs_quota_us.
cpu.cfs_period_us: the CPU allocation period in microseconds, 100000 by default.
cpu.cfs_quota_us: the CPU time in microseconds this control group may use per period; the default -1 means unlimited. Setting it to 50000 would allow 50000/100000 = 50% of one CPU.
Below, the group is limited to 20% of the CPU by writing 20000 into cpu.cfs_quota_us.
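A minimal sketch that derives the quota value from the period and a target percentage (assuming the x1 group created above):

[root@foundation60 x1]# PERIOD=$(cat cpu.cfs_period_us)      # 100000 by default
[root@foundation60 x1]# echo $(( PERIOD * 20 / 100 ))        # quota value for a 20% cap
20000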

4: Set the group to 20% of the CPU
[root@foundation60 x1]# echo 20000 > cpu.cfs_quota_us 
[root@foundation60 x1]# cat cpu.cfs_quota_us 
20000
[root@foundation60 x1]# dd if=/dev/zero of=/dev/null &
[1] 6997

Checking dd's CPU usage with top, it reaches 100%
 PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND  
 6997 root      20   0  107940    608    516 R 100.0  0.0   1:52.52 dd  

That clearly will not do; the process needs to be limited.
5: Put the dd process into tasks
[root@foundation60 x1]# cat tasks 
[root@foundation60 x1]# echo 6997 > tasks 
Checking again, the CPU usage is now around the 20% configured earlier
PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND            
 6997 root      20   0  107940    608    516 R  19.6  0.0   5:05.99 dd 

6: The limit can also be applied when creating a container; run dd inside it and check the CPU usage
[root@foundation60 x1]# docker run -it --name vm1 --cpu-quota=20000 ubuntu
root@a7fd927c80bb:/# dd if=/dev/zero of=/dev/null   

7: Checking with top, usage is around 20%
 PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                       
 7464 root      20   0    4368    356    280 R  20.0  0.0   0:01.83 dd  
8: Exit with Ctrl+C

9: Create a container without any CPU limit
[root@foundation60 ~]# docker run -it --name vm1 ubuntu
root@37b63e80d4da:/# dd if=/dev/zero of=/dev/null
This time the dd process's CPU usage reaches 100%
PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7879 root      20   0    4368    360    280 R 100.0  0.0   0:13.29 dd 

10: Remove vm1 and create a container with the CPU limit
[root@foundation60 ~]# docker run -it --name vm1 --cpu-quota=20000 ubuntu
root@16ed85e5c5c1:/# 
Then locate the container's directory under /sys/fs/cgroup/cpu/docker/
[root@foundation60 docker]# cd 16ed85e5c5c1ef0a1ecd9ef4d21f29516048ba66a0f53d5fe8041537f0310ed9/
[root@foundation60 16ed85e5c5c1ef0a1ecd9ef4d21f29516048ba66a0f53d5fe8041537f0310ed9]# cat cpu.cfs_quota_us 
20000

This is exactly the 20000 set earlier. The pattern is clear: a limit set by hand in a directory such as x1 and a limit set when creating a container work the same way; both end up written into the corresponding cgroup files.
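The same value can also be read back through Docker itself rather than the cgroup filesystem; a hedged check:

[root@foundation60 ~]# docker inspect --format '{{.HostConfig.CpuQuota}}' vm1    # should print 20000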

(3): Limiting write speed

1: Check the partitions
[root@foundation60 ~]# cat /proc/partitions 

2: Limit writes to roughly 30 MB per second
[root@foundation60 ~]# docker run -it --device-write-bps /dev/sda:30M ubuntu

3: Writing 300 MB of data now takes about 10 seconds
root@47c49cae0c2e:/# dd if=/dev/zero of=file bs=1M count=300 oflag=direct  ## oflag=direct means direct I/O, bypassing the page cache
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 10.1634 s, 31.0 MB/s
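Docker has matching flags for reads and for IOPS; a hedged sketch with illustrative values:

[root@foundation60 ~]# docker run -it --rm --device-read-bps /dev/sda:30M ubuntu      # cap read bandwidth
[root@foundation60 ~]# docker run -it --rm --device-write-iops /dev/sda:100 ubuntu    # cap write operations per second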

 

Part 2: Adding an IP address to a container

Without any extra privileges, adding an IP address fails
[root@foundation60 ~]# docker run -it --rm ubuntu
root@c193870bc637:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@c193870bc637:/#  ip addr add 172.17.0.4/24 dev eth0
RTNETLINK answers: Operation not permitted

Grant the required capability
[root@foundation60 ~]# docker run -it --rm --cap-add=NET_ADMIN ubuntu
root@df652c4c73f2:/# ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
28: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@df652c4c73f2:/# ip addr add 172.17.0.4/24 dev eth0   ### adding the IP now succeeds
root@df652c4c73f2:/# ip addr   ## the added IP is visible
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
28: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.17.0.4/24 scope global eth0
       valid_lft forever preferred_lft forever
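With NET_ADMIN granted, the address can just as easily be removed again; a follow-up example, not part of the captured output above:

root@df652c4c73f2:/# ip addr del 172.17.0.4/24 dev eth0      # NET_ADMIN covers deleting addresses too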

 

Part 3: Granting a container permission to view partitions

[root@foundation60 kiosk]# docker run -it --rm --privileged=true ubuntu  ## the container sees the host's disks
root@7a5bb4c39d62:/# fdisk -l

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x8974ef3e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1   976773167   488386583+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x205ceaff

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  4294967295  2147483647+  ee  GPT

Disk /dev/sdc: 30.9 GB, 30943995904 bytes
32 heads, 63 sectors/track, 29978 cylinders, total 60437492 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x365f170e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *          63    60435647    30217792+   b  W95 FAT32
root@7a5bb4c39d62:/# 
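--privileged grants far more than disk visibility. If access to only one device is needed, the narrower --device flag is usually enough; a hedged sketch using a device path from the host listing above:

[root@foundation60 kiosk]# docker run -it --rm --device /dev/sdc ubuntu fdisk -l /dev/sdc   # only /dev/sdc is exposed to the container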

Reposted from blog.csdn.net/yinzhen_boke_0321/article/details/88874010