Logging mechanisms for Docker and Kubernetes

This blog address: https://security.blog.csdn.net/article/details/129586593

1. Log

Logs record the events generated by a running application and describe its runtime state in detail. They capture discrete, discontinuous events, which makes them a good source of application visibility and an accurate data source for application analysis. However, log data has limitations: it only contains what developers choose to expose, and storing and querying it consumes significant resources.

2. Docker logs

Docker supports a variety of logging mechanisms to help users obtain information from running containers and services. These mechanisms are called logging drivers. Docker has supported logging drivers since version 1.6, allowing containers to send their logs directly to log systems such as syslogd.

Each Docker daemon has a default logging driver; unless configured otherwise, this is json-file, which saves log entries as JSON files. Docker also supports other logging drivers, such as none, syslog, gelf, and fluentd.

The log drivers supported by Docker are as follows:

driver       description
none         Logging is disabled; no logs are available for the container and docker logs returns no output
local        Logs are stored in a custom format designed to minimize overhead
json-file    Logs are stored in JSON format; this is Docker's default logging driver
syslog       Writes logs to syslog; the syslog daemon must be running on the Docker host
journald     Writes logs to journald; the journald daemon must be running on the Docker host
gelf         Writes logs to a GELF endpoint such as Graylog or Logstash
fluentd      Writes logs to fluentd; fluentd must be running on the host
awslogs      Writes logs to Amazon CloudWatch Logs
splunk       Writes logs to Splunk using the HTTP Event Collector
etwlogs      Writes logs as Event Tracing for Windows (ETW) events; Windows only
gcplogs      Writes logs to Google Cloud Platform (Cloud Logging)
logentries   Writes logs to Rapid7 Logentries
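As a sketch of what the default json-file driver produces: each log line on disk is one JSON object with log, stream, and time fields. The sample line below is illustrative, not taken from a real host; the real files live at /var/lib/docker/containers/&lt;id&gt;/&lt;id&gt;-json.log.

```shell
# A hypothetical sample line in the json-file driver's on-disk format:
line='{"log":"hello from nginx\n","stream":"stdout","time":"2023-03-17T08:00:00Z"}'
# Extract the "stream" field with sed (jq would be more robust if available):
printf '%s\n' "$line" | sed -n 's/.*"stream":"\([^"]*\)".*/\1/p'
# → stdout
```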

View the log driver currently used by Docker:

docker info --format '{{.LoggingDriver}}'

Modify the Docker log driver:

vim /etc/docker/daemon.json
--------------------------------------------------
{
	"log-driver": "local"
}

Set "log-driver" to whichever driver you need. Note that daemon.json is plain JSON and does not allow comments.

Configure the Docker log driver:

vim /etc/docker/daemon.json
--------------------------------------------------
{
	"log-driver": "json-file",
	"log-opts": {
		"max-size": "10m",
		"max-file": "3",
		"labels": "production_status",
		"env": "os,customer"
	}
}

Here max-size caps the size of a single log file, max-file limits the number of rotated log files, and labels/env attach the listed labels and environment variables to each log entry.

Then restart the Docker service for the changes to take effect. Containers created before the restart keep their old logging configuration until they are recreated.
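A minimal restart-and-verify sequence, assuming a systemd host where Docker runs as the usual docker service:

```shell
# Restart the daemon so the daemon.json changes take effect:
sudo systemctl restart docker
# Verify the daemon-wide default logging driver:
docker info --format '{{.LoggingDriver}}'
```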

To configure a logging driver for a specific container:

# Set the nginx container's logging driver to local
docker run -d --log-driver local nginx
# Configure options for the nginx container's logging driver
docker run -d --log-driver local --log-opt max-size=10m --log-opt max-file=3 nginx
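To check what a specific container actually ended up with, docker inspect exposes both the effective driver and, for json-file, the log path on disk. The container name nginx below is an assumption carried over from the examples above:

```shell
# Effective logging driver of a running container:
docker inspect --format '{{.HostConfig.LogConfig.Type}}' nginx
# On-disk log file (json-file driver only):
docker inspect --format '{{.LogPath}}' nginx
# Read logs through the engine regardless of where they are stored:
docker logs --tail 100 -f nginx
```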

3. Kubernetes logs

Application logs give you a better understanding of what is happening inside the application; they are also very useful for debugging problems, monitoring cluster activity, and analyzing the security of a running application.

In a cluster, logs should have storage and a life cycle independent of nodes, pods, and containers. This is usually referred to as cluster-level logging. A cluster-level logging architecture requires an independent backend to store, analyze, and query logs.

For example, the pod created by the following YAML prints a log record to standard output every second:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Create the pod (assuming the YAML above is saved as counter-pod.yaml):

kubectl apply -f counter-pod.yaml

The log can then be retrieved with:

kubectl logs counter
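A few common variations of kubectl logs, using the counter pod defined above (all flags shown are standard kubectl options):

```shell
kubectl logs counter --tail=20      # only the last 20 lines
kubectl logs -f counter             # stream new log lines as they arrive
kubectl logs counter --previous     # logs of the previous, terminated instance
kubectl logs counter -c count       # select a container in a multi-container pod
```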

For node-level logging, anything a containerized application writes to stdout and stderr is captured by the container engine and redirected to some destination. The Docker engine, for example, redirects these two output streams to a logging driver, which Kubernetes configures to write to files in JSON format. By default, if a container restarts, the kubelet keeps the terminated container's logs. When a Pod is deleted from the worker node, all of its containers are deleted as well, along with their log records.

In Kubernetes, besides the logs of applications running in Pods, the logs of Kubernetes system components also need a plan for recording and storage. System component logs mainly record events that occur in the cluster and play an important role in debugging and security auditing.

In Kubernetes, system component logs can be configured with the granularity you need, flexibly adjusting the level of detail: coarse-grained to show only errors within a component, or finer-grained for more detailed tracing. System components fall into two types depending on how they are deployed and run: those that run in containers, such as kube-scheduler and kube-proxy, and those that do not, such as the kubelet and the container runtime. On hosts using systemd, the kubelet and container runtime write logs to journald; without systemd, they write logs to .log files in the /var/log directory.
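On a systemd host, those component logs can be read with journalctl; a sketch, assuming the kubelet runs as the systemd unit named kubelet. The kube-scheduler pod name in the last command is hypothetical — real names carry a node suffix that varies per cluster:

```shell
# kubelet logs from journald:
journalctl -u kubelet --since "1 hour ago"
# Containerized control-plane components are read through kubectl instead
# (namespace kube-system is standard; the pod name below is an assumption):
kubectl logs -n kube-system kube-scheduler-master
```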
