Docker network and resource control

Table of contents

1. Docker network implementation principle

1. Docker network mode

1.1 View docker network list

1.2 Options to specify the network mode of the container

2. Detailed explanation of network mode

2.1 host mode

① host mode

② container mode

③ none mode

④ bridge mode

⑤ Custom network

6. Resource Control

6.1 The four major functions of cgroups

6.2 Set the upper limit of CPU usage

6.3 Perform CPU stress test

6.4 Set the CPU resource usage ratio (only valid when multiple containers are set)

6.5 Set the container to bind to specified CPUs


1. Docker network implementation principle

Docker uses a Linux bridge to virtualize a container bridge (docker0) on the host. When Docker starts a container, it assigns the container an IP address, called the Container-IP, from the bridge's network segment, and the Docker bridge serves as the default gateway for each container. Because containers on the same host are connected to the same bridge, containers can communicate with each other directly through their Container-IPs.

The Docker bridge is virtualized by the host and is not a real network device. It cannot be addressed from the external network, which also means the external network cannot reach a container directly through its Container-IP. If a container needs to be reachable from outside, you can map the container's port to the host (port mapping), that is, enable it with the -p or -P option when creating the container with docker run, and then access the container via [host IP]:[mapped port].

docker run -d --name test1 -P nginx					#randomly map a port (starting from 32768)

docker run -d --name test2 -p 43000:80 nginx		#map a specified port

 

docker ps -a

Browser access: http://192.168.237.21:32769

1. Docker network mode

●Host: the container does not virtualize its own network card or configure its own IP; it uses the host's IP and ports directly.
●Container: the created container does not create its own network card or configure its own IP; it shares the IP and port range of another, specified container.
●None: this mode turns off the container's networking capability.
●Bridge: the default mode. Docker allocates and sets an IP for each container, connects the container to the docker0 virtual bridge, and communicates with the host through the docker0 bridge and iptables NAT rules.
●Custom network: lets you create your own network and assign a container a fixed IP address.

1.1 View docker network list
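For example, a quick look at the default networks (a minimal sketch; the IDs and exact output vary by host):

docker network ls		#lists the networks Docker knows about; bridge, host and none exist by default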

1.2 Options to specify the network mode of the container

 ●host mode: Use --net=host to specify.

●none mode: Use --net=none to specify.

●container mode: Use --net=container:NAME_or_ID to specify.

●bridge mode: Use --net=bridge to specify, default setting, can be omitted.

2. Detailed explanation of network mode

2.1 host mode

①host mode

Host mode is comparable to bridged mode in VMware: the container is on the same network as the host, but has no independent IP address. Docker uses Linux namespaces to isolate resources, such as a PID namespace to isolate processes, a Mount namespace to isolate file systems, and a Network namespace to isolate the network. A Network namespace provides an independent network environment, including network interfaces, routes, iptables rules, and so on, isolated from other Network namespaces. A Docker container is normally assigned an independent Network namespace. However, if host mode is used when starting the container, the container does not get an independent Network namespace but shares one with the host. The container does not virtualize its own network card or configure its own IP; it uses the host's IP and ports.
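A minimal sketch of host mode (the container name test_host is an example, not from the original text):

docker run -itd --name test_host --net=host centos:7 /bin/bash
docker inspect -f '{{.HostConfig.NetworkMode}}' test_host		#prints "host"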

 

②container mode

Once host mode is understood, this mode is easy to follow. It specifies that a newly created container shares a Network namespace with an existing container instead of with the host. The new container does not create its own network card or configure its own IP; it shares the IP, port range, and so on of the specified container. Apart from the network, everything else, such as the file system and the process list, remains isolated between the two containers. The processes of the two containers can communicate through the lo loopback device.

docker run -itd --name test1 centos:7 /bin/bash		#the --name option gives the container a custom name

docker ps -a
CONTAINER ID   IMAGE      COMMAND       CREATED      STATUS       PORTS   NAMES
3ed82355f811   centos:7   "/bin/bash"   5 days ago   Up 6 hours           test1

docker inspect -f '{{.State.Pid}}' 3ed82355f811		#look up the container's process ID
25945

ls -l /proc/25945/ns		#view the namespace IDs of the container's process, network, file system, etc.
lrwxrwxrwx 1 root root 0 Jan  7 11:29 ipc -> ipc:[4026532572]
lrwxrwxrwx 1 root root 0 Jan  7 11:29 mnt -> mnt:[4026532569]
lrwxrwxrwx 1 root root 0 Jan  7 11:27 net -> net:[4026532575]
lrwxrwxrwx 1 root root 0 Jan  7 11:29 pid -> pid:[4026532573]
lrwxrwxrwx 1 root root 0 Jan  7 12:22 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jan  7 11:29 uts -> uts:[4026532570]

docker run -itd --name test2 --net=container:3ed82355f811 centos:7 /bin/bash

docker ps -a
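To confirm that test2 really shares test1's network namespace, you can compare the net namespace links of the two container processes (a sketch; the PID and inode numbers will differ on your host):

docker inspect -f '{{.State.Pid}}' test2		#look up test2's process ID
ls -l /proc/<test2 PID>/ns/net		#should point to the same net:[...] inode as test1's, e.g. net:[4026532575]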

③none mode

In none mode, the Docker container has its own Network namespace, but no network configuration is performed for it. In other words, the container has no network card, IP address, routing information, and so on. In this mode, the container has only the lo loopback interface and no other network cards. Such a container cannot connect to the Internet; a closed network is a good way to guarantee the container's security.
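A minimal sketch of none mode (the container name test_none is an example):

docker run -itd --name test_none --net=none centos:7 /bin/bash
docker inspect -f '{{.NetworkSettings.IPAddress}}' test_none		#prints nothing: no IP address is assigned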

④ bridge mode

Bridge mode is Docker's default network mode; when no --net parameter is given, bridge mode is used.

It is comparable to NAT mode in VMware: the container uses an independent Network namespace and is connected to the docker0 virtual bridge, and communication with the host is configured through the docker0 bridge and the iptables NAT table. This mode allocates a Network namespace, sets an IP, and so on for each container, and connects the Docker containers on a host to a virtual bridge.

(1) When the Docker process starts, a virtual network bridge named docker0 will be created on the host, and the Docker container started on this host will be connected to this virtual bridge. The working mode of the virtual bridge is similar to that of a physical switch, so that all containers on the host are connected to a layer 2 network through the switch.

(2) Assign an IP from the docker0 subnet to the container, and set the IP address of docker0 as the default gateway of the container. Create a pair of virtual NIC veth pair devices on the host. Veth devices always appear in pairs. They form a data channel. Data entering from one device will come out from another device. Therefore, veth devices are often used to connect two network devices.

(3) Docker puts one end of the veth pair device in the newly created container and names it eth0 (the container's network card), puts the other end on the host with a name like veth*, and adds this network device to the docker0 bridge. This can be viewed with the brctl show command.

(4) When using docker run -p, docker actually makes DNAT rules in iptables to realize the port forwarding function. You can use iptables -t nat -vnL to view.
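Both the bridge and the NAT rules described above can be inspected on the host (the output depends on the containers currently running):

brctl show		#shows docker0 and the veth* interfaces attached to it
iptables -t nat -vnL		#shows the DNAT rules created by docker run -p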

⑤Custom network

#Bridge mode by itself does not support running docker with a specified IP. For example, the following command will report an error:
docker run -itd --name test3 --network bridge --ip 172.17.0.10 centos:7 /bin/bash

//Create a custom network		#you can define a custom network first, then run docker with a specified IP

docker network create --subnet=172.18.0.0/16 --opt "com.docker.network.bridge.name"="docker1" mynetwork
#docker1 is the NIC name shown when you run ifconfig -a. If you do not specify this name with --opt, what you see when viewing the network information with ifconfig -a is a name like br-110eb56a0b22, which is obviously hard to remember.

#mynetwork is the name of the bridge-mode network shown when you run docker network list.

docker run -itd --name test4 --net mynetwork --ip 172.18.0.10 centos:7 /bin/bash
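To verify that the container received the requested address (a sketch using the test4 container created above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test4		#should print 172.18.0.10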

6. Resource Control

CPU resource control: cgroups (Control Groups) is a very powerful Linux kernel mechanism. It can not only limit the resources isolated by namespaces, but also set weights on resources, account for their usage, control process start and stop, and so on. In short, cgroups implement resource quotas and accounting.

6.1 The four major functions of cgroups

●Resource limiting: limit the total amount of resources a task may use

●Priority allocation: by allocating CPU time slices and disk I/O bandwidth, this effectively controls a task's running priority

●Resource accounting: count the system's resource usage, such as CPU time and memory usage

●Task control: cgroups can suspend and resume tasks, among other operations

For example, with the default CFS period of 100000 (microseconds), a CPU quota of 100000 corresponds to 1 full CPU and a quota of 50000 corresponds to 0.5 CPU.

6.2 Set the upper limit of CPU usage

Linux uses CFS (Completely Fair Scheduler) to schedule the CPU usage of each process. The default scheduling period of CFS is 100ms. We can set the scheduling cycle of each container process and how much CPU time each container can use at most during this cycle.

Use --cpu-period to set the scheduling period and --cpu-quota to set how much CPU time the container may use in each period; the two can be used together. The valid range of the CFS period is 1 ms to 1 s, so the corresponding value range of --cpu-period is 1000 to 1000000 (in microseconds). The default period is 100 ms, and the container's CPU quota must not be less than 1 ms, that is, the value of --cpu-quota must be >= 1000.
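For example, the two options can be combined to cap a container at half of one CPU (the container name test_cpu is an example; section 6.3 below uses --cpu-quota alone):

docker run -itd --name test_cpu --cpu-period 100000 --cpu-quota 50000 centos:7 /bin/bash		#50000/100000 = at most 50% of one CPU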

docker run -itd --name test5 centos:7 /bin/bash

docker ps -a
CONTAINER ID   IMAGE      COMMAND       CREATED      STATUS       PORTS   NAMES
3ed82355f811   centos:7   "/bin/bash"   5 days ago   Up 6 hours           test5

cd /sys/fs/cgroup/cpu/docker/3ed82355f81151c4568aaa6e7bc60ba6984201c119125360924bf7dfd6eaa42b/
cat cpu.cfs_quota_us
-1

cat cpu.cfs_period_us

100000

#cpu.cfs_period_us: the CPU allocation period (in microseconds, hence the "us" in the file name); the default is 100000.

#cpu.cfs_quota_us: the amount of CPU time (in microseconds) the cgroup may take per period; the default is -1, which means no limit. If it is set to 50000, the cgroup may occupy 50000/100000 = 50% of one CPU.

6.3 Perform CPU stress test

docker exec -it 3ed82355f811 /bin/bash

vim /cpu.sh
#!/bin/bash
i=0
while true
do
let i++
done

chmod +x /cpu.sh
./cpu.sh

top

#You can see that this script takes up a lot of cpu resources

#Set a 50% ratio to allocate the upper limit of CPU usage time

docker run -itd --name test6 --cpu-quota 50000 centos:7 /bin/bash

#You can either recreate a container with the quota set (as above), or modify the existing container's cgroup directly:
cd /sys/fs/cgroup/cpu/docker/3ed82355f81151c4568aaa6e7bc60ba6984201c119125360924bf7dfd6eaa42b/
echo 50000 > cpu.cfs_quota_us

docker exec -it 3ed82355f811 /bin/bash
./cpu.sh

top

#You can see that the CPU usage is now close to 50%; the cgroup limit on CPU is taking effect

6.4 Set the CPU resource usage ratio (only valid when multiple containers are set)

Docker specifies the CPU share with --cpu-shares; the default value is 1024, and the values are relative weights (for example, 512 carries half the weight of 1024). #Create two containers, c1 and c2. If only these two containers are running, setting these weights makes c1 and c2 receive 1/3 and 2/3 of the CPU resources respectively.

docker run -itd --name c1 --cpu-shares 512 centos:7
docker run -itd --name c2 --cpu-shares 1024 centos:7

Enter each container in turn and run a stress test

yum install -y epel-release
yum install -y stress
stress -c 4		#spawn four processes, each repeatedly computing the square root of a random number

#View the running status of the container (dynamic update)

docker stats

6.5 Set the container to bind to specified CPUs

#First allocate 4 CPU cores to the virtual machine
docker run -itd --name test7 --cpuset-cpus 1,3 centos:7 /bin/bash

#Enter the container and run a stress test
yum install -y epel-release
yum install -y stress
stress -c 4

#Exit the container, run the top command, then press 1 to view per-CPU usage.

Limitation on memory usage

//The -m (--memory=) option is used to limit the maximum memory the container can use

docker run -itd --name test8 -m 512m centos:7 /bin/bash

docker stats

//Limit the available swap size with --memory-swap. Note that --memory-swap must be used together with --memory.

Normally, the value of --memory-swap includes both the container's available memory and its available swap. So -m 300m --memory-swap=1g means the container can use 300M of physical memory and 700M (1G - 300M) of swap.

If --memory-swap is set to 0 or not set, the container's total memory plus swap is limited to twice the -m value. If the value of --memory-swap equals the -m value, the container cannot use swap at all. If --memory-swap is -1, the container's memory usage is limited (by -m) but its swap usage is not (the container can use as much swap as the host has available).
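A sketch of the -m 300m --memory-swap=1g case described above (the container name test_mem is an example):

docker run -itd --name test_mem -m 300m --memory-swap=1g centos:7 /bin/bash		#300M of RAM plus 700M of swap
docker stats --no-stream test_mem		#the memory limit column shows 300MiB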

3. Restrictions on disk I/O quota control (blkio)

--device-read-bps: limit the read speed (bps, amount of data) on a given device; the unit can be kb, mb (M), or gb.
Example:
docker run -itd --name test9 --device-read-bps /dev/sda:1M centos:7 /bin/bash

--device-write-bps: limit the write speed (bps, amount of data) on a given device; the unit can be kb, mb (M), or gb.
Example:

docker run -itd --name test10 --device-write-bps /dev/sda:1mb centos:7 /bin/bash

--device-read-iops: limit the read IOPS (number of read operations) on a given device

--device-write-iops: limit the write IOPS (number of write operations) on a given device
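For example, a sketch limiting read IOPS on /dev/sda (the container name test_iops and the value 100 are arbitrary examples):

docker run -itd --name test_iops --device-read-iops /dev/sda:100 centos:7 /bin/bash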

#Create a container and limit the write speed

docker run -it --name test10 --device-write-bps /dev/sda:1MB centos:7 /bin/bash

#Verify write speed by dd

dd if=/dev/zero of=test.out bs=1M count=10 oflag=direct

#Add the oflag=direct parameter to bypass the filesystem cache

10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 10.0025 s, 1.0 MB/s

#Clean up the disk space occupied by docker
docker system prune -a		#removes stopped containers, unused networks, unused images, and build cache (add --volumes to also remove unused volumes)

Production case: a failing docker container generates a huge amount of log output, which fills up the disk. Solution 1: clear the logs

#!/bin/bash
logs=$(find /var/lib/docker/containers/ -name "*-json.log")
for log in $logs
do
	cat /dev/null > $log
done

2. What to do before the logs fill the disk: set the number of docker log files and the size of each log file

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f613ce8f.m.daocloud.io"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "500m", "max-file": "3" }
}
#Each container keeps at most 3 json log files, and each log file is at most 500M in size.

After the modification, reload the configuration and restart docker:
systemctl daemon-reload
systemctl restart docker
