Stuck Again: Containerized Big Data Platform Operations and Maintenance


Hello everyone, I am Mr. Foot (o^^o)

To build the base layer of my big data platform, I deployed the Apache big data components in fully containerized form.

Everything ran smoothly at first.

One day, while deploying a new service, I was suddenly greeted with **"Cannot install a new container: the disk is full!"**

1. Background

CentOS 7 cloud server with 16 GB of memory and a 140 GB hard disk.

The goal: a containerized deployment of the big data platform's base environment.

Because the log output of the containerized big data components was never cleaned up, it quickly filled the disk.

Cloud server resources are expensive; without the budget for a bigger disk, you can only stare at the error and savor a proper sense of powerlessness.

Refusing to give up, I tried a different approach, which I'm sharing with you here!

My cloud server was stuck; I really wanted to give it a punch... Running df -h confirmed the problem. Its output columns are:

  • The first column, Filesystem, is the name of the file system.

  • The second column, Size, is the total size of the file system.

  • The third column, Used, is how much disk space is already occupied.

  • The fourth column, Avail, is how much disk space is still available.

  • The fifth column, Use%, is the disk usage percentage; 100% means the disk is full.

  • The sixth column, Mounted on, is the directory the file system is mounted on.
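The columns can also be pulled out programmatically with awk; a minimal sketch (the sample line below is fabricated for illustration, not real output from my server):

```shell
# Parse a df -h style line with awk; fields are whitespace-separated,
# so $1 is Filesystem, $5 is Use%, and $6 is Mounted on.
line="/dev/vda1       140G  140G     0 100% /"
echo "$line" | awk '{print "filesystem=" $1, "use%=" $5, "mounted_on=" $6}'
# prints: filesystem=/dev/vda1 use%=100% mounted_on=/

# On a live system, list any filesystem that is completely full:
# df -h | awk 'NR > 1 && $5 == "100%" {print $1, $6}'
```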

2. Summary of Solutions

  • Solution 0: migrate the data and change Docker's default storage location. A stopgap rather than a cure: it buys time but cannot provide unlimited disk.

  • Solution 1: clean up the log files manually. Relieves the immediate pressure, but treats the symptom, not the root cause.

  • Solution 2: clean up the log files periodically with a script. Downside: all log content is lost and cannot be traced back.

  • Solution 3: limit the log file size of every container, fixing the root cause. Downside: the containers must be recreated from their Docker images.

3. Implementing the Solutions

3.0 Migrate data and change Docker's default storage location

Although there are many ways to change Docker's default storage location, it is best to point it at a larger directory or disk as soon as Docker is installed, which avoids the risks that come with migrating data later.

(1) Migration

  • Stop the Docker service
systemctl stop docker
  • Create a new Docker data directory

Run df -h, find a disk with plenty of space, and create a directory on it to hold the Docker data.

For example, I created the /home/modules/docker/lib directory under /home:

mkdir -p /home/modules/docker/lib
  • Migrate the directory

Copy the /var/lib/docker directory into /home/modules/docker/lib, so that the full Docker path after migration is /home/modules/docker/lib/docker.

Note that the source path has no trailing slash: with rsync, /var/lib/docker copies the docker directory itself into the target, while /var/lib/docker/ would copy only its contents.

rsync -avz /var/lib/docker /home/modules/docker/lib/
  • Modify the Docker configuration file

Many configuration variants are floating around online; for reasons of space I won't cover them all here.

vim /etc/systemd/system/docker.service

Add the following:

ExecStart=/usr/bin/dockerd  --graph=/home/modules/docker/lib/docker

Note: the --graph flag is deprecated in newer Docker releases (and removed in Docker 23.0); on current versions, set "data-root": "/home/modules/docker/lib/docker" in /etc/docker/daemon.json instead.
  • Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

3.1 Manual cleaning

cat /dev/null > /var/lib/docker/containers/<container_id>/<container_id>-json.log

Note: do not delete the log file with rm here.

If you delete the log with rm -rf, df -h will show that the disk space has not been released. The reason is that on Linux and Unix systems, deleting a file with rm -rf (or through a file manager) merely unlinks it from the file system's directory structure.

If the file is still open (in use by a process), that process can continue to read it, and the disk space stays in use.

The correct approach is cat /dev/null > *-json.log; alternatively, you can delete the file with rm -rf and then restart Docker.
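The difference is easy to demonstrate on an ordinary file; a sketch using a throwaway temp file (not a real container log):

```shell
#!/bin/sh
# Truncating leaves the file (and any open file handles) in place,
# but releases its disk space immediately.
tmp=$(mktemp)
head -c 1024 /dev/zero > "$tmp"   # write 1 KB into the file
wc -c < "$tmp"                    # prints 1024
cat /dev/null > "$tmp"            # truncate in place, same as for *-json.log
wc -c < "$tmp"                    # prints 0: the space is released right away
rm -f "$tmp"
```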

3.2 Scheduled container log cleanup

  • Write a script that cleans up the Docker log files

In a directory of your choice (the crontab entry below assumes /root), create the cleanup script clean_docker_log.sh:

#!/bin/sh
echo "======== start clean docker containers logs ========"
# Quote the pattern so the shell does not expand it before find runs
logs=$(find /var/lib/docker/containers/ -name "*-json.log")
for log in $logs
        do
                echo "clean logs : $log"
                cat /dev/null > "$log"
        done
echo "======== end clean docker containers logs ========"
  • Add a scheduled task that runs the script every day at 02:00; the crontab entry takes effect as soon as it is saved
crontab -e
0 2 */1 * * sh /root/clean_docker_log.sh
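Before pointing the script at the real /var/lib/docker, the find-and-truncate loop can be rehearsed against a scratch directory; everything below uses temporary paths, not real container logs:

```shell
#!/bin/sh
# Rehearse the cleanup logic in a throwaway directory.
dir=$(mktemp -d)
echo "some log data" > "$dir/abc123-json.log"
echo "more log data" > "$dir/def456-json.log"

# Same pattern as the cron script; the glob is quoted so the
# shell passes it to find unexpanded.
find "$dir" -name "*-json.log" | while IFS= read -r log; do
    echo "clean logs : $log"
    cat /dev/null > "$log"
done

wc -c "$dir"/*-json.log   # both files are now 0 bytes
rm -rf "$dir"
```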

3.3 Limit Docker container log size

Create /etc/docker/daemon.json if it does not already exist.

vim /etc/docker/daemon.json

The configuration content is as follows:

{
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}

max-size=500m caps each container log file at 500 MB. max-file=3 means each container keeps at most three log files: when the first file reaches 500 MB, Docker writes to the second; when the second fills, the third; and when the third fills, the oldest file is removed so writing can continue.

In short: up to 3 log files per container, each no larger than 500 MB.
You need to restart the Docker daemon after modifying the configuration:


systemctl daemon-reload
systemctl restart docker
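If dockerd fails to come back up after the restart, a common culprit is malformed JSON in daemon.json. The syntax can be checked with python3's built-in JSON tool (python3 is assumed to be installed); a sketch validating a temporary copy for illustration:

```shell
#!/bin/sh
# Validate JSON syntax; json.tool exits non-zero on malformed input.
# On a real host, run instead:
#   python3 -m json.tool /etc/docker/daemon.json
cat <<'EOF' > /tmp/daemon.json.check
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}
EOF
python3 -m json.tool /tmp/daemon.json.check > /dev/null && echo "valid JSON"
```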

Note:
"This method only takes effect for newly created containers; it does not apply to existing containers."

Verify that the container has the logging settings applied (as root):

/var/lib/docker/containers/<container_id>/hostconfig.json
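hostconfig.json is a single-line JSON document containing a LogConfig section, so a quick grep confirms whether the limits were applied. A sketch against a fabricated fragment (not a real file):

```shell
#!/bin/sh
# Look for the log-opts inside a hostconfig.json-style document.
# Fabricated sample; on a real host, point grep at
# /var/lib/docker/containers/<container_id>/hostconfig.json instead.
sample='{"LogConfig":{"Type":"json-file","Config":{"max-file":"3","max-size":"500m"}}}'
echo "$sample" | grep -o '"max-size":"[^"]*"'
# prints: "max-size":"500m"
```

Alternatively, a running container can be checked with docker inspect --format '{{.HostConfig.LogConfig}}' <container_id>.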

Origin blog.csdn.net/shujuelin/article/details/131770061