Hands-on: deploying an ELK environment and collecting Spring Boot project logs

Foreword

Any veteran developer knows that troubleshooting inevitably means digging through application logs, and good logs let you pinpoint and resolve real problems quickly. Normally, though, each system writes its logs under its own package's run directory, which makes them inconvenient to view and classify. So today we introduce the ELK log-processing architecture to solve this.

Technology accumulation

ELK composition and function

ELK is an acronym for elasticsearch, logstash, and kibana. As the name suggests, the ELK architecture integrates these three middleware components to build a logging system.

First, the application integrates the logstash client, which collects logs and uploads them to the logstash server for filtering and transformation. The transformed logs are written to elasticsearch, whose powerful data storage, word segmentation, and inverted index improve query efficiency. Finally, kibana serves as the analysis and visualization platform that renders the log data.

Basics of building the framework

To make the architecture easier to build, we use docker-compose for container orchestration. As long as the three ELK components sit on the same network, they can reach one another by service name.

As for externally exposed interfaces, we only need to expose logstash for data upload and elasticsearch for external data queries. Each application service gets its own logstash pipeline configuration, which declares the input and output paths plus filter parameters, and we also expose the corresponding ports so applications can upload their data.

ELK environment construction

File tree under the elk directory:
./
├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   ├── data
│   └── logs
├── kibana
│   └── config
│       └── kibana.yml
└── logstash
    ├── config
    │   ├── logstash.yml
    │   └── small-tools
    │       └── demo.config
    └── data

Elasticsearch configuration related

mkdir elk
#Add es directory
cd elk
mkdir -p ./elasticsearch/logs ./elasticsearch/data ./elasticsearch/config
chmod 777 ./elasticsearch/data
#./elasticsearch/config Add es configuration file
cd elasticsearch/config
vim elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.port: 9200
# enable cross-origin requests (CORS) for ES
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
# enable security controls
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
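
With xpack.security.enabled set to true, ES rejects anonymous requests once it is running. A minimal sanity check, assuming the stack has already been orchestrated as described below and port 9200 is mapped to the host:

# without credentials this should print 401
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9200
# with the elastic account it should return cluster info
curl -u elastic:123456 http://localhost:9200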

Kibana configuration related

cd elk
mkdir -p ./kibana/config
#./kibana/config Add kibana configuration file
cd kibana/config
vim kibana.yml

server.name: kibana
server.host: "0.0.0.0"
server.publicBaseUrl: "http://kibana:5601"
elasticsearch.hosts: [ "http://elasticsearch:9200" ] 
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
i18n.locale: zh-CN
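
Note that elasticsearch.hosts points at the service name elasticsearch, which only resolves inside the docker-compose network. Once the stack is up, a quick reachability check from the host might look like this (assuming the 5601 port mapping in the compose file below):

# expect an HTTP status code such as 200, or 302 redirecting to the login page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601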

Logstash configuration related

cd elk
mkdir -p ./logstash/data ./logstash/config ./logstash/config/small-tools
chmod 777 ./logstash/data
#./logstash/config Add logstash config file
cd logstash/config
vim logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "123456"
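
Setting http.host to "0.0.0.0" also exposes logstash's monitoring API on port 9600, which the compose file below maps to the host. Once the stack is running you can inspect pipeline stats, for example:

# query the logstash node stats API (port 9600 is mapped in docker-compose.yml)
curl -s http://localhost:9600/_node/stats?pretty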

#./logstash/config/small-tools Add demo project monitoring configuration file
cd small-tools
vim demo.config

input { # input

    tcp {
        mode => "server"
        host => "0.0.0.0"   # allow any host to send logs
        type => "demo"      # set a type to distinguish each input source
        port => 9999
        codec => json_lines # data format
    }

}


filter {
    mutate {
        # strip these fields before import
        remove_field => ["LOG_MAX_HISTORY_DAY", "LOG_HOME", "APP_NAME"]
        remove_field => ["@version", "_score", "port", "level_value", "tags", "_type", "host"]
    }
}


output { # output - console
    stdout{
        codec => rubydebug
    }
}


output { # output - ES

    if [type] == "demo" {
        elasticsearch {
            action => "index"                       # create the mapping on output
            hosts  => "http://elasticsearch:9200"   # ES address and port
            user => "elastic"                       # ES username
            password => "123456"                    # ES password
            index  => "demo-%{+YYYY.MM.dd}"         # index name, one per day
            codec  => "json"
        }
    }

}
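
Because the input is a plain TCP server with a json_lines codec, you can smoke-test this pipeline without a Spring Boot app once the stack is running. A minimal sketch, assuming nc (netcat) is installed and port 9999 is reachable:

# send one JSON line; the tcp input tags it with type "demo",
# so it should show up in the console output and in a demo-* index
# (press Ctrl-C if your nc does not exit on its own)
echo '{"message":"hello from nc","level":"INFO"}' | nc 10.10.22.174 9999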

Add docker-compose file in elk directory

docker-compose.yml

version: '3.3'
networks:
  elk:
    driver: bridge
services:
  elasticsearch:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/elasticsearch:7.14.1
    container_name: elk_elasticsearch
    restart: unless-stopped
    volumes:
      - "./elasticsearch/data:/usr/share/elasticsearch/data"
      - "./elasticsearch/logs:/usr/share/elasticsearch/logs"
      - "./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
    environment:
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
      TAKE_FILE_OWNERSHIP: "true"  # file permissions
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      ELASTIC_PASSWORD: "123456" # password for the elastic account
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk

  kibana:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/kibana:7.14.1
    container_name: elk_kibana
    restart: unless-stopped
    volumes:
      - "./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml"
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    networks:
      - elk

  logstash:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/logstash:7.14.1
    container_name: elk_logstash
    restart: unless-stopped
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    volumes:
      - "./logstash/data:/usr/share/logstash/data"
      - "./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml"
      - "./logstash/config/small-tools:/usr/share/logstash/config/small-tools"
    command: logstash -f /usr/share/logstash/config/small-tools
    ports:
      - "9600:9600"
      - "9999:9999"
    depends_on:
      - elasticsearch
    networks:
      - elk
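
Before bringing the stack up, it is worth validating the file; docker-compose config prints the resolved configuration and fails on syntax errors:

docker-compose config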
      

View the elk directory file tree

yum -y install tree
# view 4 levels under the current directory
tree -L 4
# display all files and folders
tree -a
# display sizes
tree -s

[root@devops-01 elk]# pwd
/home/test/demo/elk
[root@devops-01 elk]# tree ./
./
├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   ├── data
│   └── logs
├── kibana
│   └── config
│       └── kibana.yml
└── logstash
    ├── config
    │   ├── logstash.yml
    │   └── small-tools
    │       └── demo.config
    └── data

10 directories, 5 files

Orchestrating ELK

docker-compose up -d
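
While the containers start, you can tail logstash's log output to confirm that the pipeline under small-tools was loaded, for example:

docker-compose logs -f logstash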

After orchestration, check whether the containers started successfully:

[root@devops-01 elk]# docker ps | grep elk

edcf6c1cecb3 registry.cn-hangzhou.aliyuncs.com/zhengqing/kibana:7.14.1 "/bin/tini -- /usr/l…" 6 minutes ago Up 10 seconds 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp elk_kibana
7c24b65d2a27 registry.cn-hangzhou.aliyuncs.com/zhengqing/logstash:7.14.1 "/usr/local/bin/dock…" 6 minutes ago Up 13 seconds 5044/tcp, 9600/tcp elk_logstash
b4be2f1c0a28 registry.cn-hangzhou.aliyuncs.com/zhengqing/elasticsearch:7.14.1 "/bin/tini -- /usr/l…" 6 minutes ago Up 6 minutes 0.0.0.0:9800->9200/tcp, :::9800->9200/tcp, 0.0.0.0:9900->9300/tcp, :::9900->9300/tcp elk_elasticsearch
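
With security enabled, an authenticated health check against ES confirms the cluster is ready. A quick sketch using the elastic account from the compose file; adjust the host port to whatever 9200 is mapped to (9200 with the compose file above, 9800 in this particular listing):

curl -u elastic:123456 'http://localhost:9200/_cluster/health?pretty'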

Once orchestration succeeds, visit the kibana page
http://10.10.22.174:5601/app/home#/

Integrating Spring Boot with logstash

pom.xml

<!--logstash start-->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
<!--logstash end-->

logback-spring.xml

<springProfile  name="uat">
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>10.10.22.174:9999</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="logstash"/>
    </root>
</springProfile>
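
Because the appender is wrapped in <springProfile name="uat">, it only activates when the application runs with the uat profile. A minimal sketch, where demo.jar is a placeholder for your packaged jar:

# start the app with the uat profile so the logstash appender is active
java -jar demo.jar --spring.profiles.active=uat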

Start the project and logstash will begin collecting its logs.
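
Once the application has emitted some logs, you can verify that the daily index was created before touching kibana, for example:

# list the demo-* indices created by the logstash output above
curl -u elastic:123456 'http://localhost:9200/_cat/indices/demo-*?v'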

Configuring kibana to view logs
Open http://10.10.22.174:5601/app/home#/ and enter the ES username and password to reach the kibana console.
Click the management button to enter the management interface

Click Index Patterns, then Create index pattern.

Enter the index pattern expression (for the index created above, demo-*), then click Next.

Select @timestamp as the time field, then click Create index pattern.

If the pattern is created without errors, the setup has succeeded.

Viewing the logs
In the menu, click Discover to browse the collected logs.

Closing thoughts

Deploying an ELK environment and collecting Spring Boot project logs is fairly straightforward: we only need docker containerization to stand up the ELK stack, then have our own projects collect and upload their data. Of course, a basic understanding of logstash, elasticsearch, and kibana, the components of ELK, goes a long way toward smooth hands-on operation.


Origin blog.csdn.net/weixin_39970883/article/details/131802116