Docker Resource Management

A single host can run multiple containers. By default, Docker places no limit on a container's hardware resources, so a container under heavy load will consume as much of the host's resources as it can. Sometimes we therefore need to set a ceiling on the resources a container may use, so today we'll look at how to manage resource usage in Docker. In practice, only memory and CPU can be controlled this way.

View host resource usage

Docker uses cgroups to group the processes running in a container, which lets us manage the resources used by that group of processes. Run the systemd-cgls command to view the cgroups tree:

├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─docker
│ ├─93d5ecbdf58ff840f737c41adff9d0f9506aac036a1fd2f0c3f31edf696ce7f1
│ │ ├─24291 mysqld
│ │ ├─29404 bash
│ │ └─29480 mysql -uroot -px xxxx
│ ├─b1f53779101af8778ba8e8dfd0946a84e8c69b3d8b0b35cd8753c46d966ffba5
│ │ ├─23893 mysqld
│ │ ├─25112 bash
│ │ └─28740 mysql -uroot -px xxxx
│ └─57aff029f6c65c6bbe0edcb3f9b61b4c3a6253bd1093d2dfc8ec318dd2cc4b9b
│   ├─23667 mysqld
│   ├─24709 bash
│   └─29649 mysql -uroot -px xxxx
......

Use the systemd-cgtop command to see which processes are using the most resources.
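
For example, to order the view by memory usage (a small sketch; flag availability may vary with your systemd version, see systemd-cgtop --help):

systemd-cgtop -m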

CPU

By default, each container can use all of the host's CPU resources. Most systems schedule processes with CFS (the Completely Fair Scheduler), which gives every runnable process a fair share of CPU time. Processes fall into two categories: CPU-intensive and IO-intensive. The kernel monitors processes in real time, and when a process occupies the CPU for too long, the kernel adjusts its priority.

Parameters

--cpu-shares: Relative CPU weight of the container. When several containers compete for CPU, each receives CPU time in proportion to its shares; when the others are idle, a busy container may use the spare CPU. The weight only matters under contention.
--cpus=<value>: Number of CPU cores the container may use; the value may be fractional (for example 0.3 means 30% of one core).
--cpuset-cpus: Pins the container to specific CPU cores (CPU binding); cores are numbered 0,1,2,3,... If this is not set, the container may be scheduled on any available core.

Example:

docker run -di --name=os --cpus=2 centos:latest bash
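
The other CPU flags combine in the same way. A minimal sketch (the container names c1, c2 and c3 are placeholders): under contention c1 gets twice the CPU weight of c2, while c3 is pinned to cores 0 and 1:

docker run -di --name=c1 --cpu-shares=1024 centos:latest bash
docker run -di --name=c2 --cpu-shares=512 centos:latest bash
docker run -di --name=c3 --cpuset-cpus=0,1 centos:latest bash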

Setting Memory

By default, Docker places no restriction on a container's memory, which means a container can use all of the memory the host provides. This is of course dangerous: if a container runs malicious, memory-hungry software, or its code has a memory leak, it can exhaust the host's memory and make services unavailable. To protect itself, the Docker daemon adjusts its own OOM (out of memory) priority so that it is among the last processes to be killed when memory runs out. In addition, you can set a memory limit for each container; once the limit is exceeded, the container is killed instead of exhausting the host's memory.
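
You can check the daemon's adjustment yourself; a small sketch (assumes the daemon process is named dockerd and /proc is mounted as usual):

# a strongly negative value means the kernel avoids killing the daemon first
cat /proc/$(pidof dockerd)/oom_score_adj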

Although a memory limit protects the host, it can also hurt the service running in the container. If the limit is set too small, the service may be OOM-killed even though it is working normally; if it is set too large, memory is simply wasted. A reasonable approach therefore includes:

  • Stress-test the application's memory usage and understand how much memory it needs under normal business load before putting it into production
  • Always set a memory limit on the container
  • Make sure the host has enough resources; once monitoring shows a shortage, scale out or migrate containers
  • If memory is sufficient, try not to use swap; swapping makes the memory scheduler's job much harder and hurts performance

Limiting container memory usage

The memory-related options of docker run include the following (the size parameter is a positive number followed by a unit: b, k, m, or g for bytes, KB, MB, and GB respectively):

-m or --memory: Maximum amount of memory the container may use; the minimum is 4m.
--memory-swap: Amount of swap the container may use.
--memory-swappiness: By default the host may swap out a container's anonymous pages; this sets a value between 0 and 100 for the proportion of pages allowed to be swapped out.
--memory-reservation: Soft limit on memory usage; when Docker detects memory pressure on the host, the container's memory is reclaimed down toward this value. It must be smaller than the --memory value.
--kernel-memory: Maximum kernel memory the container may use; the minimum is 4m.
--oom-kill-disable: Do not OOM-kill the container when it exceeds its memory. Only enable this when -m is also set; otherwise the container can exhaust the host's memory and cause host processes to be killed.
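
A hedged sketch combining a hard limit with a soft reservation (the container name db is a placeholder; --oom-kill-disable is only reasonable here because -m is set):

docker run -di --name=db -m 1g --memory-reservation 512m --oom-kill-disable centos:latest bash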

The --memory-swap setting deserves some explanation: it only takes effect when --memory is also configured (see the sketch after the list below).

  • If the --memory-swap value is greater than --memory, the total memory (RAM + swap) the container may use equals the --memory-swap value, and the swap available equals --memory-swap minus --memory
  • If --memory-swap is 0 or left unset, the container may use swap equal to the --memory value; if --memory is 200M, the container may use 200M of swap, for a 400M total (setting --memory-swap equal to --memory, by contrast, gives the container no swap at all)
  • If --memory-swap is -1, swap is unrestricted: the container may use as much swap as the host has available
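
A quick sketch of these combinations (the container names m1, m2 and m3 are placeholders):

docker run -di --name=m1 -m 200m --memory-swap 300m centos:latest bash    # 200m RAM + 100m swap
docker run -di --name=m2 -m 200m centos:latest bash                       # swap defaults to another 200m
docker run -di --name=m3 -m 200m --memory-swap=-1 centos:latest bash      # unrestricted swap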

Example:

docker run -di --name=os -m=1g centos:latest bash
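
To confirm the limit took effect, the value can be read back (in bytes) from the container's HostConfig; a minimal sketch:

docker inspect --format '{{.HostConfig.Memory}}' os    # prints 1073741824 for 1g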

Test

1) Pull the CentOS image

[root@VM_0_15_centos ~]# docker pull centos:latest

2) Run the container with CPU and memory limits

[root@VM_0_15_centos ~]# docker run -di --name=os --cpus=0.3 -m=512MB  centos:latest bash

3) Enter the container and install the stress-testing tool

[root@VM_0_15_centos ~]# docker exec -it os bash

[root@7a1de35f8a52 /]# yum install wget gcc gcc-c++ make -y
[root@7a1de35f8a52 /]# yum install -y epel-release
[root@7a1de35f8a52 /]# yum install stress -y     

4) Observe Docker resource usage before the stress test

docker stats
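
docker stats streams updates continuously; for a one-shot snapshot (handy in scripts), the --no-stream flag can be used:

docker stats --no-stream os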

5) Stress-test the CPU

stress --cpu 2 --timeout 600

This starts 2 CPU worker processes that spin on the sqrt() function to drive up the system's CPU load, and runs the test for 600 seconds.

6) Observe Docker resource usage

docker stats

You can see that the container's CPU usage hovers around 30% and cannot substantially exceed it, because the container was started with --cpus=0.3.

7) Stress-test the memory

[root@ef431fea0e9e /]# stress --vm 1 --vm-bytes 1g --timeout 600

This starts one memory (vm) worker that allocates 1 GB; since the container's limit is 512 MB, the worker is killed almost immediately:

stress: info: [162] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [162] (415) <-- worker 163 got signal 9
stress: WARN: [162] (417) now reaping child worker processes
stress: FAIL: [162] (451) failed run completed in 1s

Origin www.cnblogs.com/markLogZhu/p/11446003.html