Backend-"Super detailed analysis of springboot integrated elk log framework (elasticsearch+logstash+kibana)

Introduction

ELK is the abbreviation of three technologies: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search engine, Logstash is a log collection and processing pipeline, and Kibana is the GUI for Elasticsearch. The point of the ELK log framework is to display logs on a web page according to our requirements and to run aggregate queries and filter analysis on them. There is no need to log in to the server and grep through files, and no need to slowly pick errors out of a log file. First, a look at the end result:

The location of the log is shown as follows:
Log display
The analysis of the log is shown as follows:
Log analysis

Installation

All three programs can be downloaded from the official website (a VPN may be required). For convenience, I have already packaged them here, and they can be downloaded directly:
link: https://download.csdn.net/download/nienianzhi1744/13192854

The folders after decompression are as follows:

Unzipped directories

Installation of Elasticsearch

1. [Configuration file modification]: Modify the elasticsearch.yml file in the elasticsearch-6.8.1/config directory and add the following configuration:

# Allow cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,Content-Type
# Enable security verification
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.authc.accept_default_password: true

As shown in the figure below:
elasticsearch.yml
2. [Set login password]: Open a cmd window in the elasticsearch-6.8.1/bin directory and enter the following command to set the login passwords:

elasticsearch-setup-passwords auto

Press Enter and then y to execute, and the passwords will be set automatically, as shown in the figure below:
Set elasticsearch password
Copy the user names and passwords and keep a record of them; they will be used later.
(There is a pit here: under Windows, entering the command to set the passwords automatically does not report an error, but entering elasticsearch-setup-passwords interactive to set the passwords manually reports an error: Connection failure to: http://127.0.0.1:9200/_xpack/security/_authenticate?pretty failed: Connection refused: connect. I will look into why when I have time.)

3. [Install the Elasticsearch tokenizer]: Do not close the window after the previous command finishes. In the bin directory, continue with the command below to install the IK tokenizer. This step takes a little longer, and part-way through the command will ask whether to continue; press y to continue. The window becomes available for input again once the installation is complete:

elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.8.1/elasticsearch-analysis-ik-6.8.1.zip

(It must be installed here, otherwise an error will be reported during the later integration: analyzer [xxx] not found for field [xxx])

4. [Start Elasticsearch]: Double-click elasticsearch.bat to start Elasticsearch, as shown in the figure below:
Start elasticsearch
After it starts, the Elasticsearch command window will pop up; do not close it, as shown in the figure below:
elasticsearch command window
5. [Access Elasticsearch in the browser]: Now type http://localhost:9200/ into the browser and, in the pop-up window, enter the user name and password that were set automatically in step 2:
User name: elastic
Password: xxx
The interface shown below indicates that Elasticsearch has started successfully. This localhost:9200 page can be closed; it is not needed for now.
elasticsearch started successfully
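(If you prefer the command line, the same check can be done with curl and basic authentication, for example curl -u elastic:xxx http://localhost:9200/, which should return the cluster information as JSON.)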

Kibana installation

1. [Configuration file modification]: Modify the kibana.yml file in the kibana-6.8.1-windows-x86_64\config directory and add the following configuration:

# Chinese localization
i18n.locale: "zh-CN"
server.host: "0.0.0.0"
# Elasticsearch access address and port
elasticsearch.url: "http://localhost:9200"
# Elasticsearch user name and password
elasticsearch.username: "elastic"
elasticsearch.password: "gJRr45HLoRVzoqyRaWxO"

As shown in the figure below:
Modify the configuration file
2. [Access Kibana in the browser]: Click the kibana.bat file in the bin directory; after a successful startup the cmd window looks like the figure below. Do not close the window.
Starting Kibana (animated)
kibana command window
Then visit http://localhost:5601/ and enter the account name and password that were set automatically above:
Username: "elastic"
Password: "xxx"
as shown in the figure below:
kibana's page

Installation of Logstash

1. [Configuration file modification]: Create a new configuration file logstash.conf in the logstash-6.3.0\bin directory and add the following configuration (there are two configurations below; choose either one):

The first configuration reads the console log (this is the one I use):

# Read the console log
input {
    stdin {
    }
}

input {
    tcp {
        host => "127.0.0.1"
        port => 9250
        mode => "server"
#       tags => ["tags"]
        codec => json_lines
    }
}

# output {
#     stdout {
#         codec => rubydebug
#     }
# }
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "logback-%{+YYYY.MM.dd}"
        user => "elastic"
        password => "xxx"
    }
}
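(Note that the port 9250 in the tcp input must match the <destination> configured later in logback.xml; otherwise the Spring Boot application cannot deliver its logs to Logstash.)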

The second configuration reads logs from a file:

input {
    file {
        path => "c:/opt/logs/java-contract-info.log" # log file
        type => "elasticsearch"
        discover_interval => 3        # interval (heartbeat) to check whether the log file has changed
        start_position => "beginning" # read from the beginning of the file
    }
}

output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        hosts => "localhost:9200"
        index => "logstash-%{+YYYY.MM.dd}"
        user => "elastic"
        password => "xxx"
    }
}

(One thing worth noting in the configuration above is output.elasticsearch.index: "logstash-%{+YYYY.MM.dd}".
By default this creates an index named "logstash-<date>"; it will be used later in Kibana.)
Add configuration file
2. [Start Logstash]: Open cmd in the logstash-6.3.0\bin directory and execute the command:

logstash -f logstash.conf

Start logstash
After executing the command, the output "Successfully started Logstash API endpoint {:port=>9600}" indicates that Logstash has started. Do not close this window, because Logstash will keep monitoring our IDEA console; this cmd window is also uneditable. When the IDEA console produces matching logs (note that not all console logs will be printed in this window, only those in a matching format), they will be output here in the configured format.
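(Because the configuration also contains an stdin input, anything you type directly into this Logstash window is run through the same pipeline and indexed, which is a quick way to check that the Elasticsearch output works before wiring up Spring Boot.)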

Integration

After the cumbersome configuration above, the integration in Spring Boot itself is relatively simple. To output the console log to Kibana for visual query and analysis, there is no code to write (no ElasticsearchConfig, EKLController, ESRepository, ESData, and so on); you only need to add dependencies and one XML configuration file. The steps are as follows:

1. Add pom dependencies:

Let's take a look at my project structure first. Add the following dependencies to the module's pom file. (Before adding: if the pom file already contains a spring-boot-starter-web dependency, add an exclusion to exclude the duplicate logging package reference; if it does not, don't worry about it.)

The exclusion for the duplicate logging package (it goes inside the existing spring-boot-starter-web <dependency> element):

<exclusions>
     <exclusion>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-logging</artifactId>
     </exclusion>
</exclusions>

Project structure
Add the following pom dependencies in the module that needs the logging system:

        <!--elasticsearch-->
        <!--<dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-elasticsearch</artifactId>
            <version>3.2.1.RELEASE</version>
        </dependency>-->
        <!-- ElasticSearch -->
        <!--<dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>6.5.0</version>
        </dependency>-->
        <!-- Java High Level REST Client -->
        <!--<dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-high-level-client</artifactId>
            <version>6.5.0</version>
            <exclusions>
                <exclusion>
                    <groupId>org.elasticsearch</groupId>
                    <artifactId>elasticsearch</artifactId>
                </exclusion>
            </exclusions>
        </dependency>-->
        <!--logstash-->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>5.2</version>
        </dependency>
        <dependency>
            <groupId>net.logstash.log4j</groupId>
            <artifactId>jsonevent-layout</artifactId>
            <version>1.6</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>ch.qos.logback</groupId>
                    <artifactId>logback-core</artifactId>
                </exclusion>
            </exclusions>
            <version>1.1.8</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.1.8</version>
        </dependency>

PS: Why did I comment out the dependencies above? Because log positioning and analysis alone do not seem to need those commented-out jar packages.
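Of the remaining dependencies, logstash-logback-encoder is the key one: it provides the LogstashTcpSocketAppender class referenced in the logback.xml below.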

2. Add the logback.xml configuration file:

As you can see from the project directory structure diagram above, I put the log configuration file logback.xml in the same directory as the Spring Boot configuration file.
The specific content of logback.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!--这个是logback包中自带的base.xml文件-->
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <!--对应的映射的配置名-->
    <appender name="deliver_log_appender"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!--配置logStash 服务地址,9250。9250是logstash-6.3.0\bin\logstash.conf文件中配置的输出端口号 -->
        <destination>127.0.0.1:9250</destination>
        <!-- 日志输出编码 -->
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <charset>utf8</charset>
            <!--Pattern为日志匹配的规则,也是索引的字段名,此处我们不指定,是因为在日志的打印的时候是直接打印的实体,
            实体与的字段与值是规则的json格式,所以此处不做额外处理-->
            <Pattern>%msg%n</Pattern>
        </encoder>
    </appender>

    <!--deliver_log是日志输出的名字-->
    <logger name="deliver_log" additivity="false" level="INFO">
        <!--appender-ref为映射的配置名-->
        <appender-ref ref="deliver_log_appender"/>
    </logger>
</configuration>

Output relationship
For Spring Boot, the configuration above is all that is needed; there is no other code to write. Start Spring Boot after the configuration is complete.
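To produce logs that this setup can index, it is enough to write JSON lines to the logger named deliver_log. Below is a minimal sketch (not from the original article): the entity and its fields message and userId are hypothetical, and Jackson, which ships with Spring Boot, is used for the JSON serialization.

import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.HashMap;
import java.util.Map;

public class DeliverLogDemo {

    // The logger name must match <logger name="deliver_log"> in logback.xml
    private static final Logger DELIVER_LOG = LoggerFactory.getLogger("deliver_log");
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // Hypothetical log entity; its fields become the index fields in Kibana
        Map<String, Object> entity = new HashMap<>();
        entity.put("message", "contract created");
        entity.put("userId", 42);

        // The encoder pattern is %msg%n, so we log the entity as one JSON line;
        // Logstash's json_lines codec then parses it into separate fields
        DELIVER_LOG.info(MAPPER.writeValueAsString(entity));
    }
}

In a real service the same pattern applies: obtain the "deliver_log" logger wherever the business logic runs and log the serialized entity.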

Usage

We visit localhost:5601 and land on the Kibana main page.
Click Discover in the menu; the output of the logger named "deliver_log" in the system will be displayed here, and entries appear below only when there are logs. As follows:
kibana log display page

1. Log positioning

In the search bar at the top of Discover, enter conditions to query the related logs; multi-condition queries can use connectors such as and and or, very similar to SQL statements.
Log location
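For example, a query such as message:"timeout" and level:"ERROR" (the field names here are hypothetical and depend on your own entity) returns only the log entries that satisfy both conditions.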

2. Log analysis

1: View index management

Before log analysis, an index pattern needs to be created. Only after the index pattern is created can the fields in the index be used for aggregate statistics and other operations.

Click Management > Index Management to see our index, followed by its document count. If the log corresponding to the index has produced no output, the document count is 0, and an index with 0 documents cannot be used to create an index pattern. In other words, before creating a pattern, be sure that some logs have already been output (it does not matter whether the output goes to the console or to a file).

PS: Some readers may wonder why the name shown in Index Management is logback-2020.11.24. This is because of what we configured in logstash-6.3.0\bin\logstash.conf in the earlier steps; the name is customizable and can be changed.
Index management

2: Create an index pattern

Enter the index name prefix plus a wildcard, for example logback-*. (If nothing is matched but the index name is visible in Index Management, the index has no data yet. In that case, produce some log output first, from the program's business logic, so that the pattern has something to match.) Then click Next step.
The first step in creating an index pattern
Select the time filter field name as shown in the figure below, and then click Create index pattern.
Select the time filter field

After the creation is complete, scroll down and you can see that the index pattern contains the field names defined by our entity, as well as several field names that come with the ELK system. The custom field names from our entity can be used to create visual charts.

Index field name
PS: If, after the index pattern is created, a field does not have the green "searchable" and "aggregatable" dots behind it, click refresh and then confirm.
Refresh index field

3: Create a view

The operation is as follows: Visualize > click the + sign > choose whichever view you need (it is up to you; I choose a data table here).
Create view
Since I chose a table in the previous step, the view will be displayed as a table on the right.
On the left is the aggregation configuration. The aggregation functions do not need to be written by hand; you can pick them directly from the drop-down boxes on the left.
For example, suppose ELK has currently collected 6 printed entity logs, and each log has the message field from my entity. I want to count how many times each message appears and display it, similar to: message, count(message). You can make the following choices:
1: For the metric, select Count
2: Click Buckets, click Split Rows, select message as the field, and order by the metric Count
3: Click the green triangle run button above to get the query result
Aggregation query result
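(Under the hood this is just a terms aggregation on the message field combined with a Count metric, which is why the result reads like the SQL statement select message, count(*) ... group by message.)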

Summary

Alibaba Cloud's Log Service, which I used for log statistics before, offers similar functionality, but it is a paid service, so studying ELK on my own is quite practical. While studying ELK this time, classmate Zhou's blog gave me a lot of inspiration; you can check it out.
Time is limited, so omissions and errors in this blog are unavoidable; please correct me. ELK is a huge system, and merely learning how to use it is not enough. I will keep updating my learning experience, and you are welcome to exchange ideas with me.

Origin blog.csdn.net/nienianzhi1744/article/details/110196556