Configuring a scheduled task to clean up redundant Docker image and container files

A k8s cluster that runs for a long time inevitably accumulates a lot of unused image and container files, which need to be cleaned up regularly.

By default, Docker stores its local data under /var/lib/docker. You can check its disk usage with the df -h /var/lib/docker command; if usage exceeds 80%, it is time to clean up.

1. The cleanup commands

There are two commands for cleaning up redundant container-related data:

docker image prune -af
docker system prune -f

The first command removes only image files that are not used by any container. The second removes other redundant data as well: stopped containers, dangling images, unused networks, and build cache, so it covers more than the first. (Note that docker system prune does not remove unused volumes unless the --volumes flag is added.)

The two commands can be combined into a single script, clean.sh:

#!/bin/bash

# Return 0 (true) if /var/lib/docker usage is at or above 80%.
need_clean() {
    used=$(df -h /var/lib/docker | awk -F"[ %]+" '/dev/{print $5}')
    if [[ $used -ge 80 ]]; then
        return 0
    fi

    return 1
}

if need_clean; then
    docker image prune -af
    # Run the heavier cleanup only if usage is still high afterwards.
    if need_clean; then
        docker system prune -f
    fi
fi

Here awk extracts the usage percentage from the df -h output. The option -F"[ %]+" sets the field separator to any run of spaces or % characters, which strips the trailing % sign so the value can be compared as a number.
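The extraction can be checked on its own with a sample df line (the device name and sizes below are made up for illustration):

```shell
# Feed a fabricated "df -h" output line through the same awk expression
# used in clean.sh. Fields are split on runs of spaces or '%', so the
# fifth field is the usage percentage with the '%' already stripped.
line="/dev/sda1        50G   42G  8.0G  84% /var/lib/docker"
used=$(echo "$line" | awk -F"[ %]+" '/dev/{print $5}')
echo "$used"   # prints 84
```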

The cleanup commands execute only when usage reaches 80% or more. If the first command brings usage back below 80%, the second is skipped.

2. Creating the scheduled task

The script is done, but we want the machine to run it automatically at the right time. That calls for a scheduled task.

Creating a scheduled task on Linux is actually very simple. As long as the crond daemon is running, you only need to create a file named after your user in the /var/spool/cron directory, and crond will run its entries on schedule.

For example, if we manage the machine as the root user, we create a file named root in /var/spool/cron containing:

@daily   bash /usr/local/bin/clean.sh

Copy the script to the /usr/local/bin directory (any other directory works too), and crond will execute it every day at midnight.
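Instead of writing to /var/spool/cron by hand, the same entry can also be installed with the crontab command; a sketch (run as the target user, and assuming the script path used above):

```shell
# Append the cleanup entry to the current user's crontab.
# "crontab -l" may fail if no crontab exists yet, hence the 2>/dev/null.
( crontab -l 2>/dev/null; echo '@daily bash /usr/local/bin/clean.sh' ) | crontab -

# List the installed entries to confirm.
crontab -l
```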

3. Adding a file lock

One likely scenario: if the task were scheduled to run every minute, a new run could start before the previous one had finished, and the two would interfere with each other. To handle this, we use the flock command to take a lock on a file. The root crontab entry above can then be rewritten as:

@daily  flock -n /tmp/.cleanlock -c "bash /usr/local/bin/clean.sh"

Here /tmp/.cleanlock is an arbitrary lock-file path, and -n means do nothing if the lock cannot be acquired, rather than waiting. Of course, with a daily schedule the interval is long enough that the file lock is not strictly necessary.
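A quick sketch of how flock -n behaves (the lock path and timings are arbitrary): while one process holds the lock, a second attempt exits immediately with a non-zero status instead of waiting.

```shell
# First process takes the lock and holds it for 2 seconds.
lock=/tmp/.demo.lock
flock -n "$lock" -c "sleep 2" &

sleep 0.5   # give the first process time to acquire the lock

# Second attempt: -n makes flock fail immediately instead of blocking.
if flock -n "$lock" -c "true"; then
    echo "acquired"
else
    echo "busy"   # the first holder still owns the lock
fi

wait
```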

With the above in place, the system automatically checks at midnight every day whether redundant data needs cleaning up, ensuring the cluster has enough resources to run.


Origin www.cnblogs.com/00986014w/p/12482591.html