Table of Contents
- 1. ELK Introduction
- 2. Environment and Software Preparation
- 3. Setting Up the ELK Environment
- 4. Spring Boot Configuration Examples
1. ELK Introduction
ELK is short for Elasticsearch, Logstash, and Kibana. Elasticsearch is an open-source distributed search engine that provides data collection, analysis, and storage. Logstash is mainly used as a log collection, parsing, and filtering tool. Kibana is a web platform that provides analytics and visualization for Elasticsearch: it can search data in Elasticsearch indices, interact with it, and generate charts across various dimensions. In short: Logstash collects the logs, Elasticsearch stores the data, and Kibana visualizes them; combined, the three make centralized log processing very convenient.
2. Environment and Software Preparation
For this demo I am working on my local macOS machine. The installed software and versions are:
- Java: 1.8.0_211
- Elasticsearch: 7.1.0
- Logstash: 7.1.0
- Kibana: 7.1.0
- Spring Boot: 2.1.4.RELEASE
Note: this article mainly demonstrates how to configure a Spring Boot project so that Log4j2 and Logback output logs to the ELK stack, where they can be correctly indexed by Elasticsearch and retrieved in Kibana. Both Elasticsearch and the Spring Boot project need a Java environment underneath, so Java must be installed locally in advance; the Java installation process is omitted here.
3. Setting Up the ELK Environment
We can download the latest versions of Elasticsearch, Logstash, and Kibana for our operating system from the official website; as of this writing they have been updated to version 7.1.0. The installation packages ship with a default configuration and startup scripts. Here I start the Elasticsearch service with its default configuration; once startup completes, the service can be reached locally at http://127.0.0.1:9200. Note: do not start Logstash and Kibana yet, because their configuration needs to be changed first, as described below.
4. Spring Boot Configuration Examples
Create a Spring Boot project with IDEA. We first add Log4j2 support and demonstrate how to use Log4j2 to output logs directly to the local ELK stack; after that, we demonstrate how to use Logback to output a dynamic index name along with the logs, which makes it convenient to search logs by category.
4.1. Log4j2 Configuration
First, modify pom.xml to add Log4j2 logging framework support. Note that spring-boot-starter uses Logback as its default logging framework, so the default spring-boot-starter-logging dependency must be excluded. Add the following to pom.xml:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.4.2</version>
</dependency>
```
Note: disruptor is a lightweight, high-performance concurrency framework. Log4j2 includes next-generation Asynchronous Loggers based on LMAX Disruptor (a high-performance inter-thread messaging library). In multithreaded scenarios, Asynchronous Loggers have roughly 18 times the throughput of Log4j 1.x and Logback, with latency an order of magnitude lower. If you use asynchronous logging, adding the disruptor dependency greatly improves efficiency; of course, it is also fine not to add it.
Then add a log4j2-spring.xml file configured to output logs to ELK, roughly as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="OFF" monitorInterval="60">
    <Appenders>
        <!-- Console appender: only output messages at the configured level and above, colored per log level -->
        <Console name="Console" target="SYSTEM_OUT">
            <!-- Accept messages at the configured level and above (onMatch), reject the rest (onMismatch) -->
            <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="%highlight{%d{yyyy.MM.dd 'at' HH:mm:ss z} %-5level %class{36} %M() @%L - %msg%n}{FATAL=Bright Red, ERROR=Bright Magenta, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White}"/>
        </Console>
        <!-- Socket appender: send logs to Logstash for log collection -->
        <Socket name="Socket" host="127.0.0.1" port="4560" protocol="TCP">
            <JsonLayout properties="true" compact="true" eventEol="true"/>
        </Socket>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="Socket"/>
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
```
Note: the Socket appender configured here outputs logs to the local Logstash for collection; after Logstash formats them, they are automatically stored in Elasticsearch, and finally the indexed logs can be retrieved and displayed through the Kibana web page. The host, port, and protocol specified here must match the port and address configured for the local Logstash below.
The log output level can also be configured in application.properties. Note that there is no need to explicitly load the log4j2-spring.xml file; Spring Boot loads this configuration file by default.
```properties
logging.level.root=info
```
Finally, write some log statements and exception output in a Controller, for convenient verification in Kibana later.
```java
@RestController
@RequestMapping("/test")
public class LogController {

    private Logger logger = LogManager.getLogger(LogController.class);

    @RequestMapping(value = "/log4j2", method = RequestMethod.GET)
    public String testLog() {
        try {
            logger.info("Hello, this is an info message.");
            logger.error("Hello, this is an error message.");
            logger.warn("Hello, this is a warn message.");
            logger.debug("Hello, this is a debug message.");
            logger.fatal("Hello, this is a fatal message.");
            List<String> list = new ArrayList<>();
            System.out.println(list.get(2));
        } catch (Exception e) {
            logger.error("testLog", e);
        }
        return "";
    }
}
```
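The list.get(2) call above is a deliberate bug: reading index 2 from an empty ArrayList throws IndexOutOfBoundsException, which lands in the catch block and is logged at ERROR level, so an exception with a stack trace shows up in ELK. A minimal plain-Java sketch of that failure path (with the logger call replaced by a returned value for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class FailurePathDemo {
    // Mimics the try/catch in the controller's testLog(): returns the
    // exception class name instead of logging it.
    static String triggerAndCatch() {
        List<String> list = new ArrayList<>();
        try {
            return list.get(2); // deliberate bug: index 2 of an empty list
        } catch (Exception e) {
            // In the controller this is where logger.error("testLog", e) runs
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(triggerAndCatch()); // prints IndexOutOfBoundsException
    }
}
```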
OK, the Spring Boot project now has Log4j2 support configured. Next, we need to configure Logstash and Kibana. In the config directory under the Logstash installation directory, create a test-log4j2.conf configuration file as follows:
```conf
input {
  tcp {
    host => "127.0.0.1"
    port => "4560"
    mode => "server"
    type => json
  }
  stdin {}
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    action => "index"
    codec => rubydebug
    index => "log4j2-%{+YYYY.MM.dd}"
  }
}
```
Note: the tcp host and port here must match the log4j2-spring.xml configuration above, otherwise logs may not be collected. The hosts setting in the elasticsearch output must match the local Elasticsearch instance started earlier, and index is fixed to the log4j2-yyyy.MM.dd format, which makes it convenient to create the index pattern in Kibana later. Start Logstash with this configuration file using the following commands:
```shell
$ cd <Logstash_path>/bin
$ ./logstash -f ../config/test-log4j2.conf
```
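The %{+YYYY.MM.dd} suffix in the index setting is Logstash's date reference: it is filled in from each event's @timestamp, so a fresh index is created per day. A rough Java sketch of the resulting naming scheme (the formatter pattern here is my own approximation, not Logstash code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexNameDemo {
    // Approximates how "log4j2-%{+YYYY.MM.dd}" expands for a given event date
    static String indexFor(LocalDate eventDate) {
        return "log4j2-" + eventDate.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2019, 5, 20))); // prints log4j2-2019.05.20
    }
}
```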
Finally, start Kibana. The default configuration would work, but here I have slightly modified a few settings as follows:
```shell
$ vim <Kibana_path>/config/kibana.yml
```
```yaml
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
i18n.locale: "zh-CN"
```
As before, elasticsearch.hosts here must be consistent with the configuration above. We specify 5601 as the Kibana startup port and change the default display language from English to Chinese for easier viewing. Start Kibana and wait until the log output shows status Green, which means startup has completed.
Everything is ready. Finally, start the Spring Boot project and hit the /test/log4j2 endpoint to produce the various types of logs, then check on the Kibana web page whether they have been loaded correctly! Open the Kibana page by visiting http://127.0.0.1:5601 in a browser. First, look in Elasticsearch index management to see whether an index in the log4j2-yyyy.MM.dd format configured above already exists.
OK, it shows the index already exists. Next, in Kibana we create an index pattern: entering log4j2-* correctly matches the specified Elasticsearch index. Then select @timestamp as the time filter field, which lets us filter the data by time period later. The creation process is as follows:
Once created, we can filter and display the logs in Kibana. For example, after adding the message field as a filter, it shows the various types of logs and the exception log from the sample code above. Very easy and intuitive!
4.2. Logback Configuration
Using the Log4j2 logging framework above, logs are output to ELK correctly, but there is one thing to note: the Elasticsearch index specified when starting Logstash is a fixed value (log4j2-*). If multiple projects output logs to ELK at the same time using the same index name, the logs get mixed together, making it inconvenient to distinguish and investigate each project's logs. So we would like to output a dynamic index name to Elasticsearch: for example, project A writes to index projectA-* and project B writes to index projectB-*, which helps us match the index for the corresponding project in Kibana. Of course, Spring Boot's default logging framework, Logback, can easily do this.
So let's modify the project to support the Logback logging framework. First, change pom.xml as follows:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <!--
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
    -->
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
```
Note: here the previously excluded spring-boot-starter-logging is added back, and we depend on the logstash-logback-encoder plugin, which ships logs to Logstash over a TCP socket.
Next, add a logback-spring.xml configuration file as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <property name="LOG_HOME" value="logs/demo.log"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:4560</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"appname": "demo-elk"}</customFields>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>
```
Note: the destination must still match the local Logstash configuration. The <customFields>{"appname": "demo-elk"}</customFields> setting deserves special mention: with this custom field configured, every log record Logstash collects carries the field, and the Logstash configuration file can read it as a variable, which achieves the dynamic index name output to Elasticsearch described above. Likewise, there is no need to point application.properties at the logback-spring.xml file; Spring Boot loads this configuration file by default. Next, add a test-logback.conf configuration file under the Logstash config directory as follows:
```conf
input {
  tcp {
    host => "127.0.0.1"
    port => "4560"
    mode => "server"
    type => json
  }
  stdin {}
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    action => "index"
    codec => rubydebug
    index => "%{[appname]}-%{+YYYY.MM.dd}"
  }
}
```
Note: %{[appname]} here reads the key from the JSON produced by the <customFields> field above. We only pass an appname value, but of course other values can be passed as well, such as the IP, hostname, and other key information, which helps distinguish indexes when searching in Kibana. Restart Logstash with this configuration file:
```shell
$ cd <Logstash_path>/bin
$ ./logstash -f ../config/test-logback.conf
```
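The index setting %{[appname]}-%{+YYYY.MM.dd} combines a field reference with the date reference: for each event, Logstash looks up appname and substitutes it into the index name. A hedged Java sketch of that substitution (a deliberate simplification of Logstash's sprintf-style field references, using a plain Map as the event):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

public class DynamicIndexDemo {
    // Simplified stand-in for Logstash resolving "%{[appname]}-%{+YYYY.MM.dd}"
    static String resolveIndex(Map<String, String> event, LocalDate eventDate) {
        String appname = event.getOrDefault("appname", "unknown");
        return appname + "-" + eventDate.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        Map<String, String> event = new HashMap<>();
        event.put("appname", "demo-elk"); // set by <customFields> in logback-spring.xml
        System.out.println(resolveIndex(event, LocalDate.of(2019, 5, 20))); // prints demo-elk-2019.05.20
    }
}
```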
Elasticsearch and Kibana do not need to be restarted. Start the Spring Boot project again and check in Kibana! Look in Elasticsearch index management to see whether an index in the demo-elk-yyyy.MM.dd format configured above now exists.
What? Why didn't the appname value get passed through? It ended up in the Elasticsearch index name as literal text, even though I can clearly see the field in every JSON line printed to the Logstash console! After much searching, it turns out everyone else configures it the same way, and even changing type => json to codec => json does not help! While puzzling over this, I checked the logstash-logback-encoder documentation, which clearly states that codec => json_lines must be used. Fine! Modify test-logback.conf as follows and restart Logstash:
```conf
input {
  tcp {
    host => "127.0.0.1"
    port => "4560"
    mode => "server"
    codec => json_lines
  }
  stdin {}
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    action => "index"
    index => "%{[appname]}-%{+YYYY.MM.dd}"
  }
}
```
Now everything works. Checking Elasticsearch index management again, the index is there.
Then create an index pattern named demo-elk-* and check the log records: the project's logs load correctly, again with no problems at all.