Solving the problem that Filebeat's @timestamp cannot be overwritten by a field of the same name in a JSON log

By default, @timestamp is the time at which Filebeat read the log line. We want events to be displayed by the time the log was generated instead, so that problems can be located by the log's generation time.

This is the format in which I generate the json log:

{"@timestamp":"2017-03-23T09:48:49.304603+08:00","@source":"vagrant-ubuntu-trusty-64","@fields":{"channel":"xhh.mq.push","level":200,"ctxt_queue":"job_queue2","ctxt_exchange":"","ctxt_confirm_selected":true,"ctxt_confirm_published":true,"ctxt_properties":{"confirm":true,"transaction":false,"exchange":[],"queue":[],"message":[],"consume":[],"binding_keys":[],"exchange2":{"type":"direct"}}},"@message":"904572:58d31d7ddc790:msg_param1~","@tags":["xhh.mq.push"]}
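For illustration, a line in this shape could be produced with a sketch like the following (Python here just to show the idea; the field values are hard-coded stand-ins for my actual logger):

```python
import json
from datetime import datetime, timedelta, timezone

def make_log_line(message, channel="xhh.mq.push", level=200):
    """Build one JSON log line whose @timestamp is the generation time."""
    cst = timezone(timedelta(hours=8))  # UTC+8, as in the sample log
    record = {
        "@timestamp": datetime.now(cst).isoformat(),
        "@source": "vagrant-ubuntu-trusty-64",
        "@fields": {"channel": channel, "level": level},
        "@message": message,
        "@tags": [channel],
    }
    return json.dumps(record)

line = make_log_line("904572:58d31d7ddc790:msg_param1~")
```

The important part is only that @timestamp is written at generation time, in an ISO 8601 format that Elasticsearch can parse.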






The event as printed on the Logstash side (rubydebug output) looks like this:

{
         "@tags" => [
        [0] "xhh.mq.push"
    ],
         "@type" => "xhh.mq.push",
    "input_type" => "log",
        "source" => "/tmp/xhh_mq_20170323.log",
          "type" => "rabbitmq",
       "@fields" => {
                 "ctxt_exchange" => "",
         "ctxt_confirm_selected" => true,
                         "level" => 200,
                       "channel" => "xhh.mq.push",
               "ctxt_properties" => {
                 "confirm" => true,
               "exchange2" => {
                "type" => "direct"
            },
                "exchange" => nil,
                 "consume" => nil,
                 "message" => nil,
             "transaction" => false,
                   "queue" => nil,
            "binding_keys" => nil
        },
                    "ctxt_queue" => "job_queue0",
        "ctxt_confirm_published" => true
    },
          "tags" => [
        [0] "beats_input_raw_event"
    ],
      "@message" => "995428:58d31d7ddc790:msg_param1~",
    "@timestamp" => 2017-03-24T01:00:00.930Z,
          "beat" => {
        "hostname" => "vagrant-ubuntu-trusty-64",
            "name" => "vagrant-ubuntu-trusty-64",
         "version" => "5.2.1"
    },
      "@version" => "1",
          "host" => "vagrant-ubuntu-trusty-64"
}

The @timestamp has become the time at which Filebeat read the log, which is not what I want at all. I could not find a solution online at first, but then I found someone on GitHub asking the same question.

In the comments, someone suggested using grok for the conversion: define a messageTimestamp field in the log, push the event from Filebeat to Logstash, and then convert that field into the event's timestamp through a filter configuration. That would work, but there should be a simpler, more correct way. Under the guidance of the almighty Google, it turns out that the latest version of Filebeat has already solved this problem. Here it is: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html#config-json
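For reference, the Logstash-side workaround from those comments might look like the sketch below. The messageTimestamp field name and its ISO8601 format are assumptions taken from that discussion, not a configuration I tested:

```conf
filter {
  # Parse the application's JSON line if it arrived as plain text.
  json {
    source => "message"
  }
  # Copy the log-generation time into @timestamp.
  # "messageTimestamp" and the ISO8601 format are assumed here.
  date {
    match  => ["messageTimestamp", "ISO8601"]
    target => "@timestamp"
  }
}
```

This works, but it puts the burden on Logstash; the Filebeat-side solution below avoids the extra filter entirely.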

Add the following two lines to the filebeat.yml configuration file:

json.keys_under_root: true
json.overwrite_keys: true

The documentation describes four configuration options under json:
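In context, a minimal filebeat.yml prospector might look like the sketch below (Filebeat 5.x syntax; the paths, document_type, and Logstash host are placeholders for my setup):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /tmp/xhh_mq_*.log
    document_type: rabbitmq
    # Parse each line as JSON, lift the keys to the event root,
    # and let the log's own @timestamp overwrite Filebeat's.
    json.keys_under_root: true
    json.overwrite_keys: true

output.logstash:
  hosts: ["localhost:5044"]
```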

keys_under_root

False by default, meaning the parsed JSON is placed under a "json" key in the event. When set to true, all parsed keys are placed at the root of the event instead.
overwrite_keys

Whether to overwrite existing keys; this is the key setting. With keys_under_root set to true, setting overwrite_keys to true lets the JSON fields overwrite Filebeat's default fields of the same name, including @timestamp.
add_error_key

Adds a json_error key to the event to record the error when JSON parsing fails.
message_key

Specifies the JSON key (for example, log) on which line filtering and multiline settings are applied.
