(21) Using Logstash in a go-micro microservice

One. Introduction to Logstash

  • Logstash is an open-source data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize it into the destinations of your choice, and it offers a large number of plugins for parsing, enriching, transforming, and buffering any kind of data.

  • In logging systems Logstash usually plays the role of log collector, most commonly within the ELK (Elasticsearch, Logstash, Kibana) stack.

Two. The role of Logstash

  • Centralize, transform, and store your data: Logstash is an open-source, server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and sends it to your favorite "stash".

Three. How Logstash works

  • The pipeline (Logstash pipeline) is an independent unit of execution inside Logstash. Each pipeline contains two required elements, an input and an output, plus an optional element, a filter; the event processing pipeline is responsible for coordinating their execution. Inputs and outputs support codecs, which let you encode or decode data as it enters or leaves the pipeline without needing a separate filter. A minimal sketch of that three-stage shape follows.
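
In this sketch, stdin/stdout, the json codec, and the added field are purely illustrative, not part of the setup built later in this post:

input {
    stdin { codec => json }          # decode each incoming line as JSON
}
filter {
    mutate { add_field => { "source" => "demo" } }    # optional middle stage
}
output {
    stdout { codec => rubydebug }    # pretty-print events to the console
}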

  • How Logstash works (three stages: inputs → filters → outputs)

    • Inputs (data ingestion): bring data into Logstash

    • Filters (data cleaning): intermediate processing and manipulation of events

    • Outputs (data output): the final stage of the pipeline, shipping processed events to their destinations

  • Common inputs in the Logstash input stage (a file-input sketch follows this list)

    • file: reads from a file on the file system, much like the tail -f command

    • syslog: listens on port 514 for syslog messages and parses them according to RFC 3164

    • beats: reads events sent by Beats shippers such as Filebeat
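
A hedged sketch of the file input; the path is hypothetical:

input {
    file {
        path => "/var/log/app/*.log"      # hypothetical log location
        start_position => "beginning"     # read existing content, then tail
    }
}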

  • grok, the data-processing plugin of the Logstash filter stage, can parse arbitrary text data (a sketch follows this list)

    • Basic grok syntax: %{SYNTAX:SEMANTIC}

    • SYNTAX: the name of the pattern that matches the value

    • SEMANTIC: the name of the field that stores the matched value

    • Example: %{LOGLEVEL:log_level}, where the built-in LOGLEVEL pattern matches levels such as ERROR, DEBUG, INFO, and WARN, and the match is stored in a field named log_level
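
For instance, a sketch of a grok filter that parses a line such as "2023-01-25 10:00:00 ERROR connection refused" (the log format here is assumed, not taken from the project):

filter {
    grok {
        match => {
            "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log_level} %{GREEDYDATA:msg}"
        }
    }
}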

  • Logstash output stage (data output)

    • Common destinations are Elasticsearch and Kafka, and output to Redis is also possible (a Kafka sketch follows this list)
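
A sketch of a Kafka output; the broker address and topic name are assumptions:

output {
    kafka {
        bootstrap_servers => "localhost:9092"   # hypothetical broker
        topic_id => "app-logs"                  # hypothetical topic
    }
}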


  • codec

    • Codecs are basically stream filters that can operate as part of an input or output. They let you easily separate the transport of messages from the serialization process. Popular codecs include json, msgpack, and plain (text).

    • json: encodes or decodes data in JSON format

    • multiline: merges multi-line text, such as Java exception and stack-trace messages, into a single event (a sketch follows this list)
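
A sketch of the multiline codec folding indented continuation lines (typical of Java stack traces) into the preceding event; the path is hypothetical:

input {
    file {
        path => "/var/log/app/app.log"   # hypothetical path
        codec => multiline {
            pattern => "^\s"             # a line starting with whitespace...
            what => "previous"           # ...is glued onto the previous event
        }
    }
}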

  • Execution model

    • The Logstash event processing pipeline coordinates the execution of inputs, filters, and outputs.

    • Each input stage in a Logstash pipeline runs in its own thread. Inputs write events to a central queue held in memory (the default) or on disk. Each pipeline worker thread pulls a batch of events off this queue, runs the batch through the configured filters, and then runs the filtered events through any outputs. Both the batch size and the number of pipeline worker threads are configurable, as the sketch below shows.
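
Those settings live in logstash.yml; a sketch with illustrative values (125 is the stock batch size, and workers defaults to the number of CPU cores):

pipeline.workers: 4         # threads running the filter + output stages
pipeline.batch.size: 125    # events a worker pulls from the queue per batch
queue.type: persisted       # move the central queue from memory to disk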

Four. Installing Logstash

1. Pull the image

docker pull logstash:7.17.9

(The tag here is only an example; the official Logstash image publishes no latest tag, so a bare docker pull logstash may fail.)

2. Run the command

docker run -d --name logstash -p 5044:5044 -p 5000:5000 -p 9600:9600 logstash:7.17.9

3. Check whether it is running

docker ps

Five. Using Logstash

  • To make it easy to run the full ELK stack together, create a new directory named docker-elk

  • Inside the docker-elk directory, create logstash/config/logstash.yml

  • In logstash.yml, enter the following code:

---
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: pwd
  • Inside the docker-elk directory, create logstash/pipeline/logstash.conf
  • In logstash.conf, enter the following code (a Go sketch that ships logs to its tcp input follows the config):
input {
    # events shipped by Filebeat (or other Beats)
    beats {
        port => 5044
    }
    # raw events sent straight over TCP, e.g. from an application
    tcp {
        port => 5000
    }
}

output {
    # write everything to Elasticsearch: one index per shipper, version, and day
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "pwd"
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
}
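
Since this series is about a go-micro project, here is a minimal Go sketch that ships one JSON log line to the tcp input above. It assumes codec => json is added to that tcp input (otherwise the JSON arrives as plain text in the message field); the address and every field name are illustrative, not taken from the original project:

package main

import (
    "encoding/json"
    "log"
    "net"
    "time"
)

// logEvent is an illustrative event shape; with codec => json on the tcp
// input, each newline-terminated line becomes one Logstash event.
type logEvent struct {
    Level   string `json:"level"`
    Service string `json:"service"`
    Message string `json:"message"`
    Time    string `json:"time"`
}

func main() {
    // connect to the tcp input defined in logstash.conf
    conn, err := net.Dial("tcp", "localhost:5000")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    event := logEvent{
        Level:   "info",
        Service: "demo-service",
        Message: "hello from go-micro",
        Time:    time.Now().Format(time.RFC3339),
    }
    // Encode writes the JSON followed by a newline, which the tcp
    // input treats as the event boundary.
    if err := json.NewEncoder(conn).Encode(event); err != nil {
        log.Fatal(err)
    }
}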
  • Create docker-stack.yml in the docker-elk directory; this is the file that starts the ELK services together (only the logstash service is shown below; the elasticsearch and kibana services would sit alongside it in the same file)

  • Enter the following code (the command that starts the stack follows the file):

version: '3.3'
services:
  logstash:
    image: logstash:7.17.9   # pin a published tag; no latest tag is available
    ports:
      - "5044:5044"
      - "5000:5000"
      - "9600:9600"
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
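
One way to bring it up from the docker-elk directory, assuming Docker Compose is installed (with Docker Swarm the same file also works via docker stack deploy):

docker-compose -f docker-stack.yml up -d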
  • That completes the Logstash setup.

Six. Wrapping up

  • With that, the use of Logstash in the go-micro microservice project is officially complete.

  • Next I will start writing the code that uses Kibana. Follow the blogger and the column to get the latest posts as soon as they are published; every post is packed with practical content.

Origin: blog.csdn.net/weixin_53795646/article/details/128764468