Docker log management and time zone localization (two ways to handle Docker logs that grow too large and fill the disk, plus setting the time zone correctly)

It's a long story. During a server migration, the machine in question ran a Python project. Since the source and target servers had different operating systems, the migration did not go smoothly, so I decided to containerize the Python project with Docker.

Docker really is an efficiency artifact: the migration itself was done in half a day. But two pits lay in wait, and unfortunately I fell into both. The first pit was Docker's logs; the second was the time zone inside the containers.

First, Docker logs. There are two kinds: the Docker engine (daemon) log, and the logs of the services running inside containers. The default container logging driver is json-file, which writes everything a container sends to STDOUT/STDERR to a file in JSON format; each entry contains not only the output itself but also a timestamp and the stream it came from.

The available log drivers are as follows:

none       The container has no logs; docker logs returns no output.
local      Logs are stored in a custom format designed to minimize overhead.
json-file  Logs are stored in JSON format. Docker's default logging driver.
syslog     Writes log messages to syslog. The syslog daemon must be running on the host.
journald   Writes log messages to journald. The journald daemon must be running on the host.
gelf       Writes log messages to a Graylog Extended Log Format (GELF) endpoint, such as Graylog or Logstash.
fluentd    Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host.
awslogs    Writes log messages to Amazon CloudWatch Logs.
splunk     Writes log messages to Splunk using the HTTP Event Collector.
etwlogs    Writes log messages as Event Tracing for Windows (ETW) events. Windows platforms only.
gcplogs    Writes log messages to Google Cloud Platform (GCP) Logging.
logentries Writes log messages to Rapid7 Logentries.

The commonly used drivers are json-file, syslog, and journald; json-file is the default.

By default, a container's logs are saved under /var/lib/docker/containers/<container-id>/. With the default json-file driver, the log file is named <container-id>-json.log. Docker logs also have levels; the daemon usually runs at info, and if set to debug the log grows very quickly.

Back to the migration project above: I put no restrictions on the logs and changed nothing, i.e. everything was left at its defaults. The migration completed smoothly and I waited for acceptance. About four or five days later, a small change was needed on the migrated machine. After logging in to the server, even a cd to switch directories reported an error: bash: cannot create temp file for here-document: No space left on device. Very strange. Out of disk space??

df -ah showed the root filesystem at 100% usage. (This is my test machine, not the actual server.)

What to do? find / -type f -size +100M -print0 | xargs -0 du -h | sort -nr lists files larger than 100 MB, and it turned up /var/lib/docker/containers/<container-id>/<container-id>-json.log at more than 500 GB. I then made the mistake of removing the file with rm -rf. Unexpectedly, the disk still showed 100% used: the running container still held the deleted file open, so the space was never released. Restarting the Docker service and the container didn't help either; only after rebooting the server did disk usage return to normal.

After consulting some references, I learned that log files are generally emptied not with rm but by truncating them in place, e.g. with > filename.
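To see why truncation behaves differently from deletion, here is a quick sketch on a scratch file (the file name is made up; any POSIX shell on Linux):

```shell
# Truncating in place keeps the same inode, so a process that still
# has the log open keeps writing to the (now empty) file -- unlike rm,
# which leaves the process writing to a deleted-but-still-open file
# whose space cannot be reclaimed.
cd "$(mktemp -d)"                           # scratch directory for the demo
head -c 1048576 /dev/zero > fake-json.log   # a 1 MiB dummy "log"
wc -c < fake-json.log                       # 1048576

: > fake-json.log                           # same effect as:  > fake-json.log
wc -c < fake-json.log                       # 0 -- file still exists, now empty
```

The `:` builtin is a no-op whose redirection does the truncation; plain `> fake-json.log` works the same in bash.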

So, how do we deal with the runaway growth of Docker logs?

There are three approaches. First, clear the logs with a scheduled task; this treats the symptom, not the root cause. Second, limit the log size: specify a maximum, and once the set value is reached the log stops growing. This is implemented by Docker's own engine and eradicates unbounded log growth. Third, move Docker's working directory to a partition with more free space, combined with the scheduled cleanup of the first approach.
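For the record, the scheduled-cleanup approach can be a single find command that truncates oversized container logs. A sketch against a scratch directory (on a real host the path would be /var/lib/docker/containers; abc123-json.log is a made-up name; truncate -s needs GNU coreutils):

```shell
# Method 1 sketch: truncate every *-json.log over 100 MB in place.
LOG_DIR=$(mktemp -d)                          # stand-in for /var/lib/docker/containers
truncate -s 200M "$LOG_DIR/abc123-json.log"   # sparse 200 MB dummy log

find "$LOG_DIR" -name '*-json.log' -size +100M \
    -exec sh -c ': > "$1"' _ {} \;

wc -c < "$LOG_DIR/abc123-json.log"            # 0
# A matching crontab entry (nightly at 03:00) might look like:
#   0 3 * * * find /var/lib/docker/containers -name '*-json.log' -size +100M -exec sh -c ': > "$1"' _ {} \;
```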

The first method on its own can be abandoned. Below, the third method is shown first (since it reuses the first), followed by the most effective second method.

Usually, on CentOS-family operating systems, Docker's service is managed with systemctl, so there is a corresponding service unit file; find it and change Docker's working path to point at another partition. Finding the file is simple: the output of systemctl status docker includes the unit file's path. Edit it:

vim /etc/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/dockerd --graph=/opt/docker
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

For example, here I change Docker's working directory to /opt/docker via the --graph=/opt/docker option (on newer Docker releases this flag has been replaced by --data-root), then restart the service: systemctl daemon-reload && systemctl restart docker

[root@centos11 ~]# cd /opt/docker/
[root@centos11 docker]# ls
builder  buildkit  containerd  containers  image  network  overlay2  plugins  runtimes  swarm  tmp  trust  volumes

Then move the contents of the original directory (usually /var/lib/docker/) to the new one; stop the Docker service before moving, and start it again afterwards. Then write crontab scheduled tasks for cleanup (not demonstrated here). That is the third method combined with the first: relocate the log directory and delete logs periodically. It is still not great; the second method is the one to use.

The second method, limiting log growth, takes three forms: 1. a global limit set in the daemon configuration, used as the default for all containers (it takes effect for containers created after the change); 2. a per-container limit specified at docker run time; 3. per-container limits in a docker-compose file when starting a group of containers.

Global limit: create /etc/docker/daemon.json if it does not already exist, and add the log-driver and log-opts parameters:

{
  "log-driver":"json-file",
  "log-opts": {"max-size":"500m", "max-file":"3"}
}

max-size=500m means a container's log file can grow to at most 500 MB, and max-file=3 means up to three rotated files are kept per container, named <container-id>-json.log, <container-id>-json.log.1, and <container-id>-json.log.2, all under the json-file driver. After editing daemon.json, restart the daemon; the limits apply to containers created afterwards. To see the effect quickly, set max-size to a small value such as 2m with max-file at 1. In my test with a MySQL image, after repeatedly restarting the container the log soon reached the cap and stopped growing, which proves the method works.
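Conceptually, max-size/max-file is ordinary size-based rotation. The following is a rough shell sketch of the behavior, purely illustrative (it is not Docker's implementation, and the file names are made up):

```shell
# Emulate json-file rotation: cap each file at ~100 bytes, keep 3 files.
cd "$(mktemp -d)"
LOG=app-json.log
MAX_SIZE=100
MAX_FILE=3

log_line() {
  printf '%s\n' "$1" >> "$LOG"
  if [ "$(wc -c < "$LOG")" -ge "$MAX_SIZE" ]; then
    n=$((MAX_FILE - 1))
    while [ "$n" -ge 1 ]; do
      # shift the rotated files up: .1 -> .2, and so on
      if [ -f "$LOG.$n" ]; then mv "$LOG.$n" "$LOG.$((n + 1))"; fi
      n=$((n - 1))
    done
    mv "$LOG" "$LOG.1"          # the active file becomes .1
    rm -f "$LOG.$MAX_FILE"      # never keep more than MAX_FILE files
  fi
}

# Write 100 lines; rotation kicks in every ~100 bytes.
i=1
while [ "$i" -le 100 ]; do
  log_line "line $i"
  i=$((i + 1))
done
ls "$LOG"*                      # at most app-json.log, .1 and .2 remain
```

Old data beyond the retention count is simply discarded, which is exactly why bounded log growth is guaranteed.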

Individual limit when the container is run:

For example, append --log-driver json-file --log-opt max-size=10m to docker run; a container started with this command will have its log capped at 10 MB, using the json-file log driver.

[root@centos11 ~]# docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
hub.c.163.com/library/mysql   latest              9e64176cd8a2        3 years ago         407MB
[root@centos11 ~]# docker run -itd --name mysql1 --log-driver json-file --log-opt max-size=10m -e MYSQL_ROOT_PASSWORD=123456 hub.c.163.com/library/mysql
bf402735afc0d18b0a72c68850a0b1f3579e810594a1bf9267e7183a6c47e379

This MySQL container's log will be capped at 10 MB, using the json-file log driver.

Limit via the docker-compose file: for example, an nginx container whose log is capped at 2 GB.

nginx:
  image: nginx:1.12.1
  restart: always
  logging:
    driver: "json-file"
    options:
      max-size: "2g"

Regarding the time zone issue :

The time zone issue is a big one: if times are not consistent, business data gets messy. The cause is that most Docker images are built from base images such as Alpine, Ubuntu, Debian, and CentOS, which essentially all use UTC, i.e. the default time zone is the zero time zone.
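The zero-zone default is easy to observe on any Linux host with tzdata installed; the TZ environment variable, which the methods below rely on, selects the zone at the C-library level (GNU date shown):

```shell
# Render the same instant (the Unix epoch) in two time zones:
TZ=UTC date -d @0              # 1970-01-01 00:00 UTC
TZ=Asia/Shanghai date -d @0    # 08:00 the same day: UTC+8, no DST
```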

1. Change the container time zone by passing environment variables

  • Suitable for Docker images made based on Debian base images and CentOS base images
  • Not applicable to Docker images based on Alpine base images, Ubuntu base images

For example, the MySQL image from earlier can be started like this:

[root@centos11 ~]# docker run -it --name mysql2 -e TZ=Asia/Shanghai -e MYSQL_ROOT_PASSWORD=12345 -p 3306:3306 hub.c.163.com/library/mysql /bin/bash
root@d0d8ba4c6135:/# date
Sat Jan 23 16:31:39 CST 2021
root@d0d8ba4c6135:/# exit
[root@centos11 ~]# docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
hub.c.163.com/library/mysql   latest              9e64176cd8a2        3 years ago         407M

2. Specify the environment variable in the docker-compose file. Again, this applies only to Debian- and CentOS-family images. Taking the MySQL image used above as an example:

version: '3'

services:
  mysql:
    container_name: mysql_57
    image: hub.c.163.com/library/mysql:5.7.18
    restart: always
    environment:
      MYSQL_USER: admin
      MYSQL_PASSWORD: admins
      MYSQL_DATABASE: database
      MYSQL_ROOT_PASSWORD: admin
      TZ: Asia/Shanghai
    volumes:
      - /usr/local/mysql/data:/var/lib/mysql
    ports:
      - 3306:3306
    network_mode: host

3. Thoroughly solve the time zone problem: rebuild the image with a Dockerfile

(1). Alpine

Add the following code to the Dockerfile:

ENV TZ Asia/Shanghai

RUN apk add tzdata && cp /usr/share/zoneinfo/${TZ} /etc/localtime \
    && echo ${TZ} > /etc/timezone \
    && apk del tzdata

(2). Debian

The tzdata package is already installed in the Debian base image, so we can add the following code to the Dockerfile:

ENV TZ=Asia/Shanghai \
    DEBIAN_FRONTEND=noninteractive

RUN ln -fs /usr/share/zoneinfo/${TZ} /etc/localtime \
    && echo ${TZ} > /etc/timezone \
    && dpkg-reconfigure --frontend noninteractive tzdata \
    && rm -rf /var/lib/apt/lists/*

(3). Ubuntu

The tzdata package is not installed in the Ubuntu base image, so we need to install the tzdata package first.

We can add the following code to the Dockerfile.

ENV TZ=Asia/Shanghai \
    DEBIAN_FRONTEND=noninteractive

RUN apt update \
    && apt install -y tzdata \
    && ln -fs /usr/share/zoneinfo/${TZ} /etc/localtime \
    && echo ${TZ} > /etc/timezone \
    && dpkg-reconfigure --frontend noninteractive tzdata \
    && rm -rf /var/lib/apt/lists/*

(4). CentOS

The tzdata package is already installed in the CentOS base image, so we can add the following code to the Dockerfile.

ENV TZ Asia/Shanghai

RUN ln -fs /usr/share/zoneinfo/${TZ} /etc/localtime \
    && echo ${TZ} > /etc/timezone

Origin blog.csdn.net/alwaysbefine/article/details/113005296