Distributed ELK log collection system


1. What are the problems of traditional log collection
2. What are the solutions for distributed log collection
3. The roles of Elasticsearch + Logstash + Kibana
4. Why does ELK need to be combined with Kafka
5. Building ELK based on Docker
6. The SpringBoot project integrates ELK to achieve asynchronous log collection

1. What are the problems of traditional log collection

In a traditional project, the production environment often contains multiple different server clusters. If a bug has to be located through the logs, developers must run traditional command-line queries on every node, which is very inefficient. We therefore need centralized log management, and ELK came into being to solve this.

2. How ELK collects logs

ELK:
E = Elasticsearch (stores the log information)
L = Logstash (the "porter" that ships the logs)
K = Kibana, a graphical interface that connects to Elasticsearch to query the logs

ELK log collection principle
ELK = Elasticsearch + Logstash + Kibana; the log collection principle is as follows.
1. Install the Logstash log collection plug-in on every node of the server cluster
2. Each server node feeds its logs into Logstash
3. Logstash formats the logs as JSON, creates a different index every day, and outputs them to Elasticsearch (see the pipeline sketch below)
4. Developers query the log information through Kibana in the browser
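
A minimal Logstash pipeline sketch for steps 2-4 is shown below, assuming the application writes JSON log lines; the log path, Elasticsearch address and index name are illustrative values, not configuration taken from this article.

```
# Hypothetical pipeline: tail a local log file and ship it to Elasticsearch
input {
  file {
    path => "/var/log/myapp/*.log"      # example path of the application log file
    start_position => "beginning"
  }
}
filter {
  json {
    source => "message"                 # parse each log line as JSON
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]         # example Elasticsearch address
    index => "app-log-%{+YYYY.MM.dd}"   # the date suffix creates a different index every day
  }
}
```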

The disadvantage of this solution is that Logstash has to be installed on every server node, where it performs extra disk read/write I/O on the log files; this hurts performance and is redundant.

3. Why store logs in Elasticsearch instead of MySQL?

Elasticsearch is built on an inverted index, which maps every term to the documents that contain it, so full-text searches over the logs hit the index directly instead of scanning every row the way a MySQL LIKE query would. This makes log queries far more efficient.

4. Why do you need to use ELK + Kafka

1. With plain ELK, every time a server node is added, Logstash has to be installed on that server as well, which is redundant.
2. Logstash reads the local log files, which has a certain impact on the local disk I/O performance.

5. The principle of ELK + Kafka
  1. The SpringBoot project intercepts the system's logs with AOP:
     error logs - exception (after-throwing) notification;
     request and response log information - before or around notification.
  2. The log is posted to Kafka; note that this step must be asynchronous (see the SpringBoot sketch below).
  3. Logstash uses Kafka as its data source: it subscribes to Kafka's topic to get the log message content (see the pipeline sketch below).
  4. The log message content is stored in Elasticsearch.
     Developers use Kibana, connected to Elasticsearch, to query the stored log content.
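
A minimal sketch of steps 1 and 2 on the SpringBoot side is given below, assuming Spring AOP and spring-kafka are on the classpath; the pointcut package `com.example.demo.controller`, the topic name `app-log` and the hand-built JSON are hypothetical examples, not the article's actual code.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Sketch of an AOP aspect that captures request/response and error logs
// and posts them to Kafka. The pointcut package, topic name and JSON layout
// are hypothetical examples, not code from the original article.
@Aspect
@Component
public class LogAspect {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Around notification: record request and response information of controller methods
    @Around("execution(* com.example.demo.controller..*(..))")
    public Object requestResponseLog(ProceedingJoinPoint joinPoint) throws Throwable {
        Object result = joinPoint.proceed();
        send("INFO", joinPoint.getSignature().toShortString() + " returned: " + result);
        return result;
    }

    // Exception notification: record error logs
    @AfterThrowing(pointcut = "execution(* com.example.demo.controller..*(..))", throwing = "ex")
    public void errorLog(JoinPoint joinPoint, Exception ex) {
        send("ERROR", joinPoint.getSignature().toShortString() + " threw: " + ex.getMessage());
    }

    // KafkaTemplate.send() returns a future and the producer sends in the background,
    // so posting the log does not block the business thread.
    private void send(String level, String content) {
        String json = "{\"level\":\"" + level + "\",\"message\":\"" + content + "\"}";
        kafkaTemplate.send("app-log", json); // "app-log" is the topic Logstash subscribes to
    }
}
```

In a real project a JSON library such as Jackson would build the message instead of string concatenation; the point here is only that the send to Kafka happens in the background and the business request is not blocked.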

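On the Logstash side (steps 3 and 4) the data source becomes Kafka instead of a local file; a matching pipeline sketch follows, reusing the hypothetical topic and index name from above, with an example broker address.

```
# Hypothetical pipeline: consume the log topic from Kafka and store it in Elasticsearch
input {
  kafka {
    bootstrap_servers => "127.0.0.1:9092"   # example Kafka broker address
    topics => ["app-log"]                   # the topic the SpringBoot application writes to
    codec => "json"                         # the messages are already JSON
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "app-log-%{+YYYY.MM.dd}"       # one index per day
  }
}
```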

6. Building the ELK + Kafka environment

https://gblfy.blog.csdn.net/article/details/123433995

7. SpringBoot integrates Kafka and ELK

https://gblfy.blog.csdn.net/article/details/123434785

Origin blog.csdn.net/weixin_40816738/article/details/123435082