Resolving the Timestamp Conflict in FileBeat

Version: FileBeat 6.3


Problem

The project is configured to emit logs in JSON format so they can be imported into ELK easily. A log line looks like this:

{
    "@timestamp":"2018-06-29T16:24:27.555+08:00",
    "severity":"INFO",
    "service":"osg-sender",
    "trace":"",
    "span":"",
    "parent":"",
    "exportable":"",
    "pid":"4620",
    "thread":"restartedMain",
    "class":"o.s.c.a.AnnotationConfigApplicationContext",
    "rest":"Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a7df24c: startup date [Fri Jun 29 16:24:27 CST 2018]; root of context hierarchy"
}
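For illustration, a quick Python check (not part of the project, just a sketch with an abbreviated sample line) confirms that each line is valid JSON and that its @timestamp parses as ISO-8601 with a zone offset:

```python
import json
from datetime import datetime

# Abbreviated sample log line in the format shown above
line = ('{"@timestamp":"2018-06-29T16:24:27.555+08:00",'
        '"severity":"INFO","service":"osg-sender"}')

event = json.loads(line)
# Python 3.7+ strptime understands the "+08:00" offset via %z
ts = datetime.strptime(event["@timestamp"], "%Y-%m-%dT%H:%M:%S.%f%z")
print(ts.year, ts.utcoffset())  # → 2018 8:00:00
```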

Each line of the JSON file is then shipped directly into Elasticsearch via FileBeat. The problem is that FileBeat automatically generates its own @timestamp field, representing the import time, which conflicts with the field of the same name in the log.

Solution

There are two approaches:

  1. Rename the @timestamp field in the log file, for example to logDate, so it no longer collides with FileBeat's field. In that case you must add an index field entry for logDate in FileBeat's fields.yml, declaring it as a date type; otherwise Kibana will not offer it as a time filter when you create the index pattern.
  2. Leave the log content unchanged and modify FileBeat's fields.yml instead: rename its original @timestamp field, and add a new @timestamp field for the log to use.
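For reference, approach 1 would need roughly an entry like this in fields.yml (a sketch; logDate is whatever field name you chose in the log encoder):

```yaml
- name: logDate
  type: date
  format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"
  description: >
    The timestamp carried in the application's JSON log line.
```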

I chose the second approach. The modified fields.yml:

- key: beat
  title: Beat
  description: >
    Contains common beat fields available in all event types.
  fields:

    - name: beat.name
      description: >
        The name of the Beat sending the log messages. If the Beat name is
        set in the configuration file, then that value is used. If it is not
        set, the hostname is used. To set the Beat name, use the `name`
        option in the configuration file.
    - name: beat.hostname
      description: >
        The hostname as returned by the operating system on which the Beat is
        running.
    - name: beat.timezone
      description: >
        The timezone as returned by the operating system on which the Beat is
        running.
    - name: beat.version
      description: >
        The version of the beat that generated this event.

    - name: "@timestamp-beat"
      type: date
      required: true
      format: date
      example: August 26th 2016, 12:35:53.332
      description: >
        The timestamp when the event log record was generated.

    - name: "@timestamp"
      type: date
      format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"

The original @timestamp was renamed to @timestamp-beat, and a new @timestamp was added with the log's date format specified.
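Note that fields.yml only affects the index template that FileBeat loads into Elasticsearch, so after editing it the template has to be reloaded for the change to take effect (a sketch; the host is the one from the config below, and existing indices keep their old mapping):

```shell
# Regenerate and force-reload the index template built from fields.yml
filebeat setup --template \
  -E output.elasticsearch.hosts='["192.168.1.17:9200"]' \
  -E setup.template.overwrite=true
```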

Now, when creating the index pattern, two time filters are available: one for when the log entry was generated, and one for when FileBeat imported it.



Configuration

Here are the project's log configuration file and the FileBeat configuration.
logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
  <springProperty scope="context" name="springAppName" source="spring.application.name"/>
  <!-- Example for logging into the build folder of your project -->
  <property name="LOG_FILE" value="${BUILD_FOLDER:-log}/${springAppName}"/>

  <!-- You can override this to have a custom pattern -->
  <property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

  <!-- Appender to log to console -->
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <!-- Minimum logging level to be presented in the console logs-->
      <level>DEBUG</level>
    </filter>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file -->
  <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
      <maxHistory>90</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>
  <!-- Appender to log to file in a JSON format -->
  <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}</fileNamePattern>
      <maxHistory>90</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <timestamp>
          <!--<fieldName>logDate</fieldName>-->
          <!--<pattern>yyyy-MM-dd HH:mm:ss.SSS</pattern>-->
        </timestamp>
        <pattern>
          <pattern>
            {
            "severity": "%level",
            "service": "${springAppName:-}",
            "trace": "%X{X-B3-TraceId:-}",
            "span": "%X{X-B3-SpanId:-}",
            "parent": "%X{X-B3-ParentSpanId:-}",
            "exportable": "%X{X-Span-Export:-}",
            "pid": "${PID:-}",
            "thread": "%thread",
            "class": "%logger{40}",
            "rest": "%message"
            }
          </pattern>
        </pattern>
        <stackTrace>
          <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <maxLength>2048</maxLength>
            <shortenedClassNameLength>20</shortenedClassNameLength>
            <exclude>^sun\.reflect\..*\.invoke</exclude>
            <exclude>^net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
            <rootCauseFirst>true</rootCauseFirst>
          </throwableConverter>
        </stackTrace>
      </providers>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="console"/>
    <!-- uncomment this to have also JSON logs -->
    <appender-ref ref="logstash"/>
    <appender-ref ref="flatfile"/>
  </root>
</configuration>

filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/ogter/build/*.json.*
    - /root/ogter/log/*.json
  exclude_files: ['\.gz$']
  json.keys_under_root: true
  json.overwrite_keys: true
  
output.elasticsearch:
  hosts: ["192.168.1.17:9200"]
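The two json options are what make the log's own fields win. The following is a simplified sketch in Python (not FileBeat's actual implementation) of what json.keys_under_root plus json.overwrite_keys mean for each line:

```python
import json

def decode_line(line, beat_timestamp):
    """Sketch of FileBeat's per-line JSON decoding (illustrative only)."""
    # FileBeat first stamps the event with its own ingest-time @timestamp
    event = {"@timestamp": beat_timestamp}
    # keys_under_root: decoded JSON keys land at the top level of the event;
    # overwrite_keys: decoded keys replace ones FileBeat has already set.
    event.update(json.loads(line))
    return event

line = '{"@timestamp":"2018-06-29T16:24:27.555+08:00","severity":"INFO"}'
event = decode_line(line, beat_timestamp="2018-06-29T08:24:27.555Z")
print(event["@timestamp"])  # → 2018-06-29T16:24:27.555+08:00
```

With overwrite_keys, the @timestamp from the log line replaces FileBeat's ingest-time value; without it, the decoded key would be dropped rather than overwrite the existing one.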


Reprinted from blog.csdn.net/weixin_34185364/article/details/87232802