Integrate Spring Boot + Docker + ELK to implement log collection and display

1. Introduction to ELK

ELK consists of three Elastic components that work together to implement log collection.

  • ElasticSearch : a distributed log storage/search engine with native clustering support. Elasticsearch provides near-real-time search and analytics for all types of data. Whether the data is structured or unstructured text, numeric, or geospatial, Elasticsearch stores and indexes it in a way that supports fast searches.
  • Logstash : collects, processes, and forwards log information. It can ingest data from many sources, such as local disks, network services (listening on ports to accept user logs), and message queues; it then transforms, filters, and analyzes the data before sending it to a "stash" such as Elasticsearch. Logstash dynamically collects, transforms, and transmits data regardless of format or complexity: Grok derives structure from unstructured data, the geoip filter decodes geographic coordinates from IP addresses, and sensitive fields can be anonymized or excluded to streamline processing.
  • Kibana : an open-source analysis and visualization platform for Elasticsearch, used to interactively search and view data stored in Elasticsearch indices. With Kibana you can perform advanced data analysis and present it through a variety of charts. It provides a log-analysis-friendly web interface for Logstash and Elasticsearch that summarizes, analyzes, and searches important log data, and it makes large amounts of data easier to understand. It is simple to operate, and its browser-based UI lets you quickly create dashboards that display Elasticsearch query results in real time.

To put it simply: Logstash collects, processes, and forwards log data; Elasticsearch stores and searches it; Kibana displays it.

2. Logstash (a brief introduction, since Logstash requires the most configuration)

The Logstash event processing pipeline has three stages: inputs → filters → outputs. Inputs generate events (collection), filters modify them (processing), and outputs send them elsewhere (forwarding).
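As a minimal, runnable sketch of the three stages (the stdin/stdout plugins are chosen purely for illustration):

input  { stdin { } }                                           # collection: read events from the console
filter { mutate { add_field => { "stage" => "processed" } } }  # processing: tag each event with an extra field
output { stdout { codec => rubydebug } }                       # forwarding: print events with all metadata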

2.1. inputs

The input plugin extracts data, which can come from log files, TCP or UDP listeners, one of several protocol-specific plugins (such as syslog or IRC), or even a queuing system (such as Redis, AMQP, or Kafka). This stage tags incoming events with metadata about their source.
Some commonly used inputs (a minimal example follows the list):

  • file: reads from a file on the file system
  • redis: reads from a redis server
  • beats: handles events sent by Beats
  • tcp: reads events from a TCP socket
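
For instance, a minimal file input might look like this (the path is a hypothetical example):

input {
  file {
    path => "/var/log/app/*.log"      # hypothetical application log path
    start_position => "beginning"     # read from the start of the file on first run
  }
}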

2.2. filters

Filters are the intermediate processing devices in the Logstash pipeline. They can be combined with conditionals to act only on events that meet certain criteria; in other words, they let you transform events in flight.
Some useful filters (see the sketch after this list):

  • grok: parses and structures arbitrary text. Grok is currently the best way in Logstash to turn unstructured log data into something structured and queryable.
  • mutate: performs general transformations on event fields. You can rename, delete, replace, and modify fields in events.
  • drop: discards an event entirely, for example a debug event.
  • geoip: adds information about the geographical location of an IP address
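
A sketch combining these filters (the grok pattern and field names are illustrative assumptions, not taken from the project below):

filter {
  grok {
    # parse messages like "1.2.3.4 GET /index1" into structured fields
    match => { "message" => "%{IP:client_ip} %{WORD:method} %{URIPATH:uri}" }
  }
  geoip {
    source => "client_ip"             # add geographic fields derived from the IP
  }
  mutate {
    remove_field => ["method"]        # drop a field that is no longer needed
  }
}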

2.3. outputs

Outputs are the final stage of the Logstash pipeline. They can load processed events into something else, such as Elasticsearch or another document database, or into a queuing system such as Redis, AMQP, or Kafka; they can also be configured to talk to APIs. An event can have multiple outputs, and once all outputs have processed it, the event has completed its execution.
Some commonly used outputs (a small example follows the list):

  • elasticsearch: sends event data to Elasticsearch
  • file: writes event data to a file on disk
  • kafka: writes events to a Kafka topic
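
For local debugging, a stdout output is handy before wiring up Elasticsearch (the file path is a hypothetical example):

output {
  stdout { codec => rubydebug }                      # print events to the console
  file   { path => "/tmp/app-%{+YYYY.MM.dd}.log" }   # also write them to a file on disk
}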

3. SpringBoot + ELK environment construction

The local environment:
Linux:

pikaqiu@pikaqiu-virtual-machine:~$ uname -a
Linux pikaqiu-virtual-machine 5.11.0-27-generic #29~20.04.1-Ubuntu SMP Wed Aug 11 15:58:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Docker environment:

pikaqiu@pikaqiu-virtual-machine:~$ docker version
Client: Docker Engine - Community
 Version:           20.10.0
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        7287ab3
 Built:             Tue Dec  8 18:59:53 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.0
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       eeddea2
  Built:            Tue Dec  8 18:57:44 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Docker-compose environment:

pikaqiu@pikaqiu-virtual-machine:~$ docker-compose version
docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

3.1. ELK environment preparation

The ELK environment is built with docker-compose.

3.1.1. Create directories and configuration files

1) Create the elasticsearch data and plugins directories

mkdir -p /home/pikaqiu/elk/elasticsearch/data
mkdir -p /home/pikaqiu/elk/elasticsearch/plugins
# grant permissions on the elasticsearch data directory so the docker container can read and write it
chmod 777 /home/pikaqiu/elk/elasticsearch/data

2) Create kibana directory and configure kibana.yml

mkdir -p /home/pikaqiu/elk/kibana/config
touch /home/pikaqiu/elk/kibana/config/kibana.yml

Edit kibana.yml; the content is as follows:
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
# Note: this must be your own local host IP
elasticsearch.hosts: [ "http://192.168.88.158:9200" ]
monitoring.ui.container.elasticsearch.enabled: true

# localize the Kibana UI to Chinese
i18n.locale: "zh-CN"

3) Create the logstash directory and configure the logstash.conf file

mkdir -p /home/pikaqiu/elk/logstash/conf.d
touch /home/pikaqiu/elk/logstash/conf.d/logstash.conf

Edit logstash.conf; the content is as follows.
The input plugin used here is tcp; it can be changed to file or beats according to your needs.
Because two microservices will be started, the configuration defines two TCP inputs, one per service, and the output section is set up correspondingly to route each service's logs to its own index.

input {
  # two microservices are created, so define two separate inputs and write each service's logs to a different index
  tcp {
    mode => "server"   # "server" listens for client connections; "client" connects to a server
    host => "0.0.0.0"  # address to listen on when mode is "server" (address to connect to when mode is "client")
    type => "elk1"     # set a type to distinguish each input source
    port => 4560       # port to listen on when mode is "server" (port to connect to when mode is "client")
    codec => json      # codec used to decode the incoming data
  }

  tcp {
    mode => "server"
    host => "0.0.0.0"
    type => "elk2"
    port => 4660
    codec => json
  }
}

filter {
  # configure as needed
}

output {
  if [type] == "elk1" {
    elasticsearch {
      action => "index"  # index the event (creates the mapping on output)
      hosts => "es:9200" # host and port of the remote Elasticsearch instance; the hostname "es" resolves to the elasticsearch service (see the links section in docker-compose.yml below)
      index => "elk1-%{+YYYY.MM.dd}" # name of the index to write events to, matching the index pattern created in Kibana
    }
  }

  if [type] == "elk2" {
    elasticsearch {
      action => "index"
      hosts => "es:9200"
      index => "elk2-%{+YYYY.MM.dd}"
    }
  }
}
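
Optionally, the pipeline syntax can be sanity-checked with the same logstash image before bringing up the whole stack (a sketch, assuming the image tag matches the docker-compose.yml below):

docker run --rm \
  -v /home/pikaqiu/elk/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  logstash:7.17.1 \
  logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit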

4) Create docker-compose.yml file and configure

touch /home/pikaqiu/elk/docker-compose.yml

Edit docker-compose.yml; the content is as follows:

version: '3.7'
services:
  elasticsearch:
    # start the container from the given image (repository, tag, or image ID); Compose pulls the image automatically if it does not exist locally
    image: elasticsearch:7.17.1
    container_name: elasticsearch
    privileged: true
    user: root
    environment:
      # set the cluster name to elasticsearch
      - cluster.name=elasticsearch 
      # start in single-node mode
      - discovery.type=single-node 
      # JVM heap size
      - ES_JAVA_OPTS=-Xms512m -Xmx512m 
    volumes:
      # mount the plugins directory
      - /home/pikaqiu/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins 
      # mount the data directory
      - /home/pikaqiu/elk/elasticsearch/data:/usr/share/elasticsearch/data 
    ports:
      - 9200:9200
      - 9300:9300

  logstash:
    image: logstash:7.17.1
    container_name: logstash
    ports:
       - 4560:4560
       - 4660:4660
    privileged: true
    environment:
      - TZ=Asia/Shanghai
    volumes:
      # mount the logstash pipeline configuration
      - /home/pikaqiu/elk/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch 
    links:
      # make the elasticsearch service reachable under the hostname "es"
      - elasticsearch:es 

  kibana:
    image: kibana:7.17.1
    container_name: kibana
    ports:
        - 5601:5601
    privileged: true
    links:
      # make the elasticsearch service reachable under the hostname "es"
      - elasticsearch:es 
    depends_on:
      # start kibana only after elasticsearch has started
      - elasticsearch 
    environment:
      # address for reaching elasticsearch (ELASTICSEARCH_HOSTS is the Docker env form of elasticsearch.hosts)
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    volumes:
      - /home/pikaqiu/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml

3.1.2. Start ELK with docker-compose

The complete directory structure is as follows:

pikaqiu@pikaqiu-virtual-machine:~/elk$ ll
-rw-rw-r--  1 pikaqiu pikaqiu 1525 Apr 27 15:16 docker-compose.yml
drwxrwxr-x  4 pikaqiu pikaqiu 4096 Apr 22 14:33 elasticsearch/
drwxrwxr-x  2 pikaqiu pikaqiu 4096 Apr 24 20:21 images/
drwxrwxr-x  3 pikaqiu pikaqiu 4096 Apr 22 14:33 kibana/
drwxrwxr-x  3 pikaqiu pikaqiu 4096 Apr 22 14:39 logstash/

Build and start the ELK container:

cd /home/pikaqiu/elk
docker-compose up -d
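
Once the containers are up, a few quick checks can verify the stack (a sketch; 192.168.88.158 is the host IP used throughout this article):

docker-compose ps                            # all three containers should show State "Up"
curl http://192.168.88.158:9200              # should return Elasticsearch cluster info as JSON
curl http://192.168.88.158:5601/api/status   # Kibana's status endpoint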

If an error occurs during startup, stop and remove the containers before restarting. The stop-and-remove command:

docker-compose down

3.2. SpringBoot project construction

Two microservices are prepared here to simulate a scenario with multiple microservices.

3.2.1. Microservice 1 (elk_test)

1. Project structure
(project structure screenshot)

  2. pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.6.7</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.pikaqiu</groupId>
	<artifactId>elk_test</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>demo</name>
	<description>Demo project for Spring Boot</description>
	<properties>
		<java.version>1.8</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>net.logstash.logback</groupId>
			<artifactId>logstash-logback-encoder</artifactId>
			<version>5.3</version>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>

	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>
  3. logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- application name -->
    <property name="APP_NAME" value="springboot-logback-elk1-test"/>
    <contextName>${APP_NAME}</contextName>
    <!-- appender that ships logs to logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- address and port of the reachable logstash log-collection endpoint -->
        <destination>192.168.88.158:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>


    <root level="info">
        <appender-ref ref="LOGSTASH"/>
    </root>

</configuration>
  4. application.properties
server.port=8080
  5. TestController.java
package com.pikaqiu.controller;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {

    // LogManager.getLogger already returns a Logger, so no cast is needed
    private final Logger logger = LogManager.getLogger(this.getClass());

    @RequestMapping("/index1")
    public void testElk() {
        // the debug message below is filtered out, because the root level in logback-spring.xml is info
        logger.debug("======================= elk1 test ================");
        logger.info("======================= elk1 test ================");
        logger.warn("======================= elk1 test ================");
        logger.error("======================= elk1 test ================");
    }

}

3.2.2. Microservice 2 (elk_test2)

1. Project structure
(project structure screenshot)

  2. pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.6.7</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.pikaqiu</groupId>
	<artifactId>elk_test2</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>demo</name>
	<description>Demo project for Spring Boot</description>
	<properties>
		<java.version>1.8</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>net.logstash.logback</groupId>
			<artifactId>logstash-logback-encoder</artifactId>
			<version>5.3</version>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>

	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>
  3. logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- application name -->
    <property name="APP_NAME" value="springboot-logback-elk2-test"/>
    <!-- appender that ships logs to logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- address and port of the reachable logstash log-collection endpoint -->
        <destination>192.168.88.158:4660</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="info">
        <appender-ref ref="LOGSTASH"/>
    </root>

</configuration>
  4. application.properties
server.port=8081
  5. TestController.java
package com.pikaqiu.controller;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {

    private final Logger logger = LogManager.getLogger(this.getClass());

    @RequestMapping("/index2")
    public void testElk() {
        // as above, the debug message is filtered out by the info root level
        logger.debug("======================= elk2 test ================");
        logger.info("======================= elk2 test ================");
        logger.warn("======================= elk2 test ================");
        logger.error("======================= elk2 test ================");
    }

}

Note: microservice 1 and microservice 2 point at different logstash collection ports in their logback-spring.xml files and use different server.port values in application.properties. After both services are started, you can trigger some log output as shown below.
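A quick way to generate log traffic on both services (ports taken from the application.properties files above; paths from the two controllers):

curl http://localhost:8080/index1   # microservice 1 -> logstash port 4560 -> index elk1-*
curl http://localhost:8081/index2   # microservice 2 -> logstash port 4660 -> index elk2-*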

3.3. Kibana configuration

  1. Open http://192.168.88.158:5601/app/home to access the Kibana web interface, then click the management entry on the left to open the Management screen.

  2. Select Index Patterns and click Create index pattern.

  3. Fill in the index pattern name and the timestamp field, then create the index pattern.

  4. Open Discover; the two index patterns just created are both available there.

  5. Select the elk1-* index pattern to see the logs generated by microservice 1.

  6. Select the elk2-* index pattern to see the logs generated by microservice 2.

References:

  1. https://www.laobaiblog.top/2022/03/30/docker-compose%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2elk%E5%B9%B6%E9%9B%86%E6%88%90springboot/
  2. https://blog.csdn.net/weixin_43184769/article/details/84971532

