Introduction:
A large web site generally needs an ELK log-analysis service: while the site runs, it collects every normal and error log the servers produce and supports analysis of that log data. For that reason I decided to set up an ELK log-processing server alongside my project.
Integration steps:
1. Add the following dependencies to the Spring Boot project's pom.xml:
```xml
<!-- ELK log-analysis dependencies -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.1.11</version>
</dependency>
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.1.0</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <!-- must match the logback-core version above
         (the original listed 0.1.5, which is not a logback version) -->
    <version>1.1.11</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.1</version>
</dependency>
```
2. Add the following to the Spring Boot project's logback configuration:
```xml
<appender name="KafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
        <layout class="net.logstash.logback.layout.LogstashLayout">
            <includeContext>true</includeContext>
            <includeCallerData>true</includeCallerData>
            <customFields>{"system":"test"}</customFields>
            <fieldNames class="net.logstash.logback.fieldnames.ShortenedFieldNames"/>
        </layout>
        <charset>UTF-8</charset>
    </encoder>
    <!-- the topic must match the one logstash consumes below; if it doesn't, Kafka will silently ignore you -->
    <topic>mcloud-log</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy"/>
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
    <producerConfig>bootstrap.servers=127.0.0.1:9092</producerConfig>
</appender>
```
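The `LayoutKafkaMessageEncoder`/`LogstashLayout` pair serializes each log event as a JSON document before handing it to Kafka. As a rough, hand-rolled sketch of that shape (the real layout emits more fields such as `@timestamp`, thread, and caller data, and `ShortenedFieldNames` abbreviates some names; the field set here is illustrative only):

```java
public class LogEventShape {
    // Hand-rolled approximation of the JSON that LogstashLayout produces
    // for one event; not the actual layout implementation.
    static String event(String level, String logger, String message) {
        return "{\"level\":\"" + level + "\","
             + "\"logger\":\"" + logger + "\","
             + "\"message\":\"" + message + "\","
             // merged in from <customFields>{"system":"test"}</customFields>
             + "\"system\":\"test\"}";
    }

    public static void main(String[] args) {
        System.out.println(event("INFO", "com.example.Demo", "application started"));
    }
}
```

The point to notice is that `customFields` is merged into every event, which is how you later filter by `system` in Kibana.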
You will probably also want to route a dedicated logger, and the root logger, through the appender:

```xml
<!-- send this named logger's events to Kafka -->
<logger name="Application_ERROR">
    <appender-ref ref="KafkaAppender"/>
</logger>
<!-- and attach the appender to the root logger as well -->
<root>
    <level value="INFO"/>
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="KafkaAppender"/>
</root>
```
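The `HostNameKeyingStrategy` configured above keys every Kafka record with the producing host's name, so all events from one machine land in the same partition and keep their order. The idea can be sketched with a plain modulo hash (Kafka's real default partitioner hashes keys with murmur2, so actual partition assignments differ):

```java
public class HostKeying {
    // Same key -> same partition: per-host log ordering is preserved.
    // floorMod keeps the result in [0, numPartitions) even for negative hashes.
    static int partitionFor(String hostName, int numPartitions) {
        return Math.floorMod(hostName.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        System.out.println("web-01 -> partition " + partitionFor("web-01", 3));
        System.out.println("web-02 -> partition " + partitionFor("web-02", 3));
    }
}
```

Two log lines from `web-01` can never be reordered across partitions, which matters when you reconstruct a request trace later in Kibana.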
3. Deploy and configure logstash:
1. Download logstash from the official site: https://www.elastic.co/cn/downloads/logstash (pick the version matching your Elasticsearch).
2. Add a logstash.conf startup configuration file.
Unzip the downloaded logstash to the E: drive, go into E:\logstash-5.6.1\config, and create a logstash.conf file with the following content:
```conf
input {
  # Kafka consumer: read from the topic the KafkaAppender writes to
  kafka {
    bootstrap_servers => "127.0.0.1:9092"
    topics => "mcloud-log"
  }
}
output {
  # Elasticsearch connection
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    flush_size => 20000
    idle_flush_time => 10
    template_overwrite => true
  }
}
```
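The `index => "logstash-%{type}-%{+YYYY.MM.dd}"` pattern creates one index per event type per day, which makes it cheap to expire old logs by deleting whole indices. The resulting name can be sketched like this (the `"app"` type value is a hypothetical example):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexName {
    // Mirrors logstash's "logstash-%{type}-%{+YYYY.MM.dd}" index naming:
    // one Elasticsearch index per event type per day.
    static String indexFor(String type, LocalDate day) {
        return "logstash-" + type + "-"
             + day.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        // e.g. logstash-app-2018.01.05
        System.out.println(indexFor("app", LocalDate.of(2018, 1, 5)));
    }
}
```

In Kibana you then match all of these at once with an index pattern such as `logstash-*`.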
Note: do not use tabs in the yml and conf files, use spaces, otherwise startup will fail.
Start logstash:
From a command prompt, change into E:\logstash-5.6.1\bin and run the following to start the logstash server (the config file lives in the config directory, so pass its path relative to bin):

```shell
logstash -f ..\config\logstash.conf
```
4. Deploy and configure Kibana:
1. Download Kibana from the official site: https://www.elastic.co/cn/downloads/kibana -> "Not the version you're looking for? View past releases." (pick the version matching your Elasticsearch).
2. Modify the kibana.yml configuration file.
Unzip the downloaded Kibana to the E: drive, go into E:\kibana-5.6.1-windows-x86\config, and change the following settings in kibana.yml:
```yml
server.port: 5601
server.name: "kibana"
server.host: "127.0.0.1"
elasticsearch.url: "http://127.0.0.1:9200"
```
Start the service:
From a command prompt, change into E:\kibana-5.6.1-windows-x86 and run the following to start the Kibana server:

```shell
bin\kibana.bat
```
Note: Elasticsearch and Kafka deployment are covered in my other blog posts, so their setup is not repeated here.
Reference:
https://blog.csdn.net/yy756127197/article/details/78873310