Cgroup resource configuration

Table of contents

1. Definition of Cgroup

2. Use the stress tool to test CPU and memory usage

1. Create a dockerfile

2. Build the image

3. Create a container

① Create a container

② Create a container that spawns 10 child processes

3. CPU period

1. Implementation plan

2. Experiment

4. CPU Core Control

5. docker build 


1. Definition of Cgroup

Cgroup (control group) is a mechanism provided by the Linux kernel to limit, account for, and isolate the physical resources used by groups of processes, enabling processes to be managed as groups.

Docker uses cgroups to control the resource quotas of a container, covering CPU, memory, and disk I/O.

2. Use the stress tool to test CPU and memory usage

1. Create a dockerfile

[root@zwb_docker stress_docker]# vim dockerfile 

FROM centos:7
RUN yum install -y wget
RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# stress is provided by the EPEL repository configured above
RUN yum -y install stress

2. Build the image

docker build -t centos:stress .

 

3. Create a container

① Create a container

Note: the value of --cpu-shares does not guarantee the container one vCPU or any specific amount of CPU in GHz; it is only an elastic, relative weight.

[root@zwb_docker data]# docker run -itd --cpu-shares 100 centos:stress
cde58dbbde820db7533a2f095f03b39d5b89255c6e789bfa23344c22d32b865a

The weight only takes effect when CPU is scarce, that is, when containers are competing for CPU time. Therefore you cannot determine how much CPU a container gets from its --cpu-shares value alone.

② Create a container that spawns 10 child processes

Container 1: create a container named cpu512 with a CPU weight of 512

[root@zwb_docker data]# docker run -itd --name cpu512 --cpu-shares 512 centos:stress stress -c 10

--cpu-shares 512: sets the weight to 512; stress -c 10 spawns 10 CPU-bound worker processes

Container 2: create a container named cpu1024 with a CPU weight of 1024

[root@zwb_docker data]# docker run -itd --name cpu1024 --cpu-shares 1024 centos:stress stress -c 10
b850fe8bbb77b37320f0233660c20e81128802d26ff239d72d8d64a83030bdab

--cpu-shares 1024: sets the weight to 1024

Compare the two containers side by side:

docker stats

The container ID is b850fe8bbb77, the name is cpu1024, and the cpu weight is 1024

The container ID is 9758ca391cb0, the name is cpu512, and the cpu weight is 512

Result: cpu1024 gets roughly twice as many chances to obtain CPU time as cpu512 (the weight assigns the chance of obtaining CPU time, not a fixed amount of CPU).
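The weights are purely relative: under full contention, each container's share of CPU time is its weight divided by the sum of all weights. A quick arithmetic sketch using the two weights above:

```shell
# Relative CPU shares under full contention: weight / sum(weights)
w1024=1024
w512=512
total=$((w1024 + w512))
echo "cpu1024: $((100 * w1024 / total))% of CPU time"   # cpu1024: 66% of CPU time
echo "cpu512:  $((100 * w512 / total))% of CPU time"    # cpu512:  33% of CPU time
```

This matches the docker stats observation: the 1024-weight container receives about two thirds of the CPU, double the share of the 512-weight container.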

3. CPU period

1. Implementation plan

Docker provides two parameters, --cpu-period and --cpu-quota, to control the CPU scheduling period allocated to the container. --cpu-period specifies the length of each scheduling period, and --cpu-quota specifies how much CPU time the container may use within each period.

2. Experiment

For example, to let the container process use 0.02 s of a single CPU in every 0.1 s period (i.e. 20% of one CPU), set cpu-period to 100000 (0.1 s) and cpu-quota to 20000 (0.02 s). Both values are in microseconds.
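Since both CFS values are in microseconds, the effective cap is simply quota divided by period. A quick check of the numbers used in this experiment:

```shell
# CFS period/quota are in microseconds; cap = quota / period
period_us=100000   # 100 ms, the CFS default period
quota_us=20000     # 20 ms per period
echo "CPU cap: $((100 * quota_us / period_us))% of one CPU"   # CPU cap: 20% of one CPU
```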

First check the default configuration, file location

[root@zwb_docker 9758ca391cb02d198b42da4a21c44d8d66ee1658e5a4a9561faf251ec317d32a]# pwd
/sys/fs/cgroup/cpu/docker/9758ca391cb02d198b42da4a21c44d8d66ee1658e5a4a9561faf251ec317d32a

[root@zwb_docker 9758ca391cb02d198b42da4a21c44d8d66ee1658e5a4a9561faf251ec317d32a]# cat cpu.cfs_quota_us
-1 ### -1 means no quota has been set (CPU time is unlimited)

Create a container and set it up:

[root@zwb_docker ~]# docker run -itd --cpu-period 100000 --cpu-quota 20000 centos:stress 
765c8c86912f752395d35d8a640432f6c9d69f32b702821bdec5b1464f26d968
[root@zwb_docker ~]# cd /sys/fs/cgroup/cpu/docker/765c8c86912f752395d35d8a640432f6c9d69f32b702821bdec5b1464f26d968/
[root@zwb_docker 765c8c86912f752395d35d8a640432f6c9d69f32b702821bdec5b1464f26d968]# ls
cgroup.clone_children  cgroup.procs  cpuacct.usage         cpu.cfs_period_us  cpu.rt_period_us   cpu.shares  notify_on_release
cgroup.event_control   cpuacct.stat  cpuacct.usage_percpu  cpu.cfs_quota_us   cpu.rt_runtime_us  cpu.stat    tasks
[root@zwb_docker 765c8c86912f752395d35d8a640432f6c9d69f32b702821bdec5b1464f26d968]# cat cpu.cfs_quota_us
20000
 

4. CPU Core Control

On multi-core servers, Docker can also control which CPU cores a container runs on, via the --cpuset-cpus parameter. This is especially useful on servers with multiple CPUs, allowing performance-optimized placement of compute-intensive containers.

[root@zwb_docker ~]# docker run -itd --name cpu1 --cpuset-cpus 0-1 centos:stress 
524ca820088440a12d0bd2d7844fa2bb5c348282d7cb055b4d24761851c9bd21

## --cpuset-cpus 0-1 means the container may only run on CPUs 0 and 1
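--cpuset-cpus accepts single ids, comma-separated lists, and ranges (e.g. 0-1 or 0-2,5), the same format the kernel stores in the container's cpuset.cpus file. A pure-bash sketch (the function name expand_cpuset is illustrative) that expands such a string into individual CPU ids:

```shell
# Expand a cpuset string like "0-2,5" into the individual CPU ids it covers
expand_cpuset() {
  local part lo hi i out=""
  IFS=',' read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    if [[ "$part" == *-* ]]; then
      lo=${part%-*}; hi=${part#*-}
      for ((i=lo; i<=hi; i++)); do out+="$i "; done
    else
      out+="$part "
    fi
  done
  echo "${out% }"
}

expand_cpuset "0-1"      # → 0 1
expand_cpuset "0-2,5"    # → 0 1 2 5
```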

5. docker build 

For convenience, resource limits can also be specified when building the image.

Syntax:

docker build [OPTIONS] PATH | URL | -

options:

  • --build-arg=[] : set build-time variables;

  • --cpu-shares : set the CPU weight (relative share);

  • --cpu-period : limit the CPU CFS period;

  • --cpu-quota : limit the CPU CFS quota;

  • --cpuset-cpus : CPU ids the build may use (e.g. 0-3);

  • --cpuset-mems : memory nodes (NUMA) the build may use;

  • --disable-content-trust : skip image verification (true by default);

  • -f : specify the path of the Dockerfile to use;

  • --force-rm : always remove intermediate containers, even after a failed build;

  • --isolation : container isolation technology to use;

  • --label=[] : set metadata for the image;

  • -m : set the memory limit;

  • --memory-swap : set the total of memory plus swap; "-1" means unlimited swap;

  • --no-cache : do not use the cache when building the image;

  • --pull : always attempt to pull a newer version of the base image;

  • --quiet, -q : quiet mode, print only the image ID on success;

  • --rm : remove intermediate containers after a successful build (true by default);

  • --shm-size : set the size of /dev/shm, default value 64M;

  • --ulimit : ulimit options;

  • --squash : squash all Dockerfile operations into a single layer;

  • --tag, -t : name and tag of the image, usually in name:tag or name format; multiple tags can be set for an image in one build;

  • --network : set the networking mode for RUN instructions during build (default "default").
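Putting several of these options together, an illustrative build invocation might look like the one below (the limit values and image name are examples, not recommendations); it is assembled as a string here only so it can be inspected without a Docker daemon:

```shell
# Illustrative docker build combining resource-limit options (example values)
cmd="docker build --cpu-shares 512 --cpuset-cpus 0-1 -m 512m --no-cache -t centos:stress ."
echo "$cmd"
```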

Origin blog.csdn.net/m0_62948770/article/details/127437960