docker-compose builds ELK + Filebeat (7.6.2) to read Spring Boot project logs


1. Introduction to log framework

1.1 Log facade (log abstraction layer)

SLF4J (Simple Logging Facade for Java)

1.2 Log implementation

Logback

1.3 Log level

Log levels, from low to high, are TRACE < DEBUG < INFO < WARN < ERROR < FATAL. If the level is set to WARN, messages below WARN are not output.

1.4 logback multi-environment configuration

For the springProperty configuration to take effect, logback.xml must be renamed to logback-spring.xml so that Spring Boot processes it.


2. Introduction to ELK

2.1 Elasticsearch

Distributed search engine. It is highly scalable, highly reliable, and easy to manage. It can be used for full-text search, structured search, and analytics, or any combination of the three. Elasticsearch is developed in Java on top of Lucene and is one of the most widely used open source search engines; Wikipedia, Stack Overflow, GitHub, and others build their search on it. In Elasticsearch, all nodes are equal peers.

2.2 Logstash

Data collection and processing engine. It dynamically collects data from a variety of sources; filters, parses, enriches, and normalizes it; and then stores it for later use.

2.3 Kibana

Visualization platform. It searches and displays data indexed in Elasticsearch, and makes it easy to present and analyze that data with charts, tables, and maps.

2.4 Filebeat

Lightweight data collection engine. Compared with Logstash, the system resources Filebeat consumes are almost negligible. It is based on the source code of the original logstash-forwarder; in other words, Filebeat is the new logstash-forwarder, and it is the first choice for the agent side of the ELK Stack.


3. Virtual machine 1: Deploy elasticsearch and kibana (stand-alone version)

3.1 Prepare the virtual machine (CentOS-7-x86_64-DVD-2009.iso)

Turn off the firewall

systemctl stop firewalld
systemctl disable firewalld

Set a static IP

cd /etc/sysconfig/network-scripts
ll
vim ifcfg-ens33

Modify the following entries (change them if they already exist, add them if they do not)

BOOTPROTO=static
IPADDR="192.168.10.20"
NETMASK="255.255.255.0"
GATEWAY="192.168.10.1"
DNS1="114.114.114.114"

Note: NETMASK is the same as the subnet mask in the picture
[Image: static IP gateway setting]

Restart the network service

systemctl restart network
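
To confirm the new settings took effect, check the interface address and outbound connectivity (ens33 and the addresses are the values configured above):

# the ens33 interface should now show 192.168.10.20
ip addr show ens33
# verify the gateway and DNS work
ping -c 3 114.114.114.114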

3.2 Write docker-compose files for elasticsearch and kibana

version: '3.1'
services:
  elasticsearch:
    image: elasticsearch:7.6.2
    container_name: elasticsearch
    ports:
      - 9200:9200
    environment:
      - "cluster.name=elasticsearch"
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
      - "ELASTIC_PASSWORD=ZhengJinWei123!"
      - "xpack.security.enabled=true"
    volumes:
      - ./plugins/:/usr/share/elasticsearch/plugins/
      - ./data/:/usr/share/elasticsearch/data/
  kibana:
    image: kibana:7.6.2
    container_name: kibana
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      - "ELASTICSEARCH_HOSTS=http://elasticsearch:9200"
      - "ELASTICSEARCH_USERNAME=elastic"
      - "ELASTICSEARCH_PASSWORD=ZhengJinWei123!"
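
A minimal startup check, assuming the file above is saved as docker-compose.yml on virtual machine 1:

# start elasticsearch and kibana in the background
docker-compose up -d
# verify that elasticsearch responds and that basic auth is enforced
curl -u elastic:ZhengJinWei123! http://192.168.10.20:9200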

3.3 Switch Kibana to Chinese

Enter the container & open the file

docker exec -it kibana bash
cd config
vi kibana.yml

edit file

server.name: kibana
server.host: "0.0.0.0"
# IpAddress: use docker inspect on the es container to find its internal IP address
elasticsearch.hosts: [ "http://{IpAddress}:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "ZhengJinWei123!"
i18n.locale: "zh-CN"

exit & restart

exit
docker restart kibana
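
A quick way to confirm Kibana came back up and can talk to Elasticsearch (assuming the /api/status endpoint is reachable from the host):

# should return Kibana's status JSON once the restart has finished
curl -s -u elastic:ZhengJinWei123! http://192.168.10.20:5601/api/status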

4. Virtual machine 2: Deploy logstash (stand-alone version)

4.1 Configure static IP

192.168.10.27

4.2 Write the docker-compose file of logstash

version: '3.1'
services:
  logstash:
    image: logstash:7.6.2
    container_name: logstash
    ports:
      - 9600:9600
      - 5044:5044
    environment:
      - TZ=Asia/Shanghai
      - LS_JAVA_OPTS=-Xmx256m -Xms256m
    volumes:
      # Mount the logstash configuration files
      - ./config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf

4.3 Modify config/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.enabled: true
# physical IP of the Docker host where Elasticsearch runs
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.10.20:9200" ]
# Elasticsearch user and password
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "ZhengJinWei123!"

The password of the elastic user has to be set by running a command manually, or via the command option in the compose file (see the sketch after these notes).

ssl.certificate_authority: copy the Elasticsearch certificate into the container and mount it using a data volume.

ca_trusted_fingerprint: printed in the Elasticsearch log the first time Elasticsearch starts.
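
A minimal sketch of the manual approach mentioned above, using the elasticsearch-setup-passwords tool shipped with Elasticsearch 7.x (run once on virtual machine 1 after the node is up):

# open a shell in the running elasticsearch container on virtual machine 1
docker exec -it elasticsearch bash
# interactively set passwords for the built-in users (elastic, kibana, logstash_system, ...)
bin/elasticsearch-setup-passwords interactive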

4.4 Modify pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["192.168.10.20:9200"]
    # Custom index name, one index per day
    index => "demo-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "ZhengJinWei123!"
  }
  # Also write events to the console for easy debugging
  stdout {}
}
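
With both configuration files in place, start Logstash and watch its log until the pipeline is running and the beats input is listening on 5044 (assuming the compose file from 4.2 is in the current directory):

# start logstash in the background
docker-compose up -d
# follow the log and confirm the pipeline starts and port 5044 is listening
docker logs -f logstash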

5. Virtual machine 3: Deploy the Spring Boot project and filebeat

5.1 Configure static IP

192.168.10.28

5.2 Write the docker-compose file of filebeat

version: '3.1'
services:
  filebeat:
    container_name: filebeat
    image: elastic/filebeat:7.6.2
    volumes:
      # Mount the host path where the application writes its logs
      - /usr/local/logs:/var/log/filebeat/logs
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./data:/usr/share/filebeat/data
    ports:
      - 9000:9000
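
Before starting Filebeat, make sure the host directory mounted above exists; the assumption here is that the Spring Boot application on this machine writes its log files to /usr/local/logs:

# create the host log directory that is mounted into the filebeat container
mkdir -p /usr/local/logs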

5.3 Modify filebeat.yml

output.logstash:
  # The Logstash hosts
  hosts: ["192.168.10.27:5044"]

# output.elasticsearch:
#  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
#  username: '${ELASTICSEARCH_USERNAME:}'
#  password: '${ELASTICSEARCH_PASSWORD:}'

filebeat.inputs:
- type: log
  paths:
  # Log location inside the docker container
    - /var/log/filebeat/logs/*.log
  multiline:
    pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
    negate: false
    match: after
    timeout: 5s
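
Finally, start Filebeat and check that it can parse its configuration and reach Logstash on virtual machine 2 (a sketch using the filebeat test subcommands available in Filebeat 7.x):

# start filebeat in the background
docker-compose up -d
# verify the mounted filebeat.yml parses correctly
docker exec filebeat filebeat test config
# verify connectivity to logstash at 192.168.10.27:5044
docker exec filebeat filebeat test output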
