filebeat + logstash + influxdb + Grafana to create a website log monitoring system

yexiansen · 2018.02.01 13:13:22

(figure: architecture diagram of the pipeline)

Collect data (filebeat) -> filter data (logstash) -> store data (InfluxDB) -> display data (Grafana).

With limited resources, I built this service on a CentOS 7 server.

filebeat

Filebeat is a log file shipping tool. After you install the client on your server, filebeat monitors the log directories or specific log files you choose, tracks and reads them (following file changes and continuously reading new content), and forwards the data to Elasticsearch or Logstash.
Filebeat's workflow is as follows: when you start the filebeat program, it launches one or more prospectors that watch the log directories or files you specify. For each log file a prospector finds, filebeat starts a harvester; each harvester reads the new content of a single log file and sends the new log data to the spooler, which aggregates these events. Finally, filebeat ships the collected data to the location you configured.
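The harvester behavior described above, tracking a file and repeatedly reading only its new content, can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not filebeat's actual implementation:

```python
import os
import tempfile

def harvest(path, offset=0):
    """Read any lines appended to a log file since the last offset.

    Returns (new_lines, new_offset), loosely mimicking how a filebeat
    harvester remembers its position in a file between reads.
    """
    with open(path, "r") as f:
        f.seek(offset)
        new_lines = f.read().splitlines()
        return new_lines, f.tell()

# Demo: append to a file between harvest calls.
log_path = os.path.join(tempfile.gettempdir(), "demo_access.log")
with open(log_path, "w") as f:
    f.write("line one\n")
lines, pos = harvest(log_path, 0)
print(lines)   # ['line one']

with open(log_path, "a") as f:
    f.write("line two\n")
lines, pos = harvest(log_path, pos)
print(lines)   # ['line two'] -- only the newly appended content
```

The key point is that the harvester never re-reads old content; it resumes from the stored offset, which is why filebeat can tail a busy access log without duplicating events.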

A brief description of logstash

Logstash is an open source data collection engine with real-time data transmission capabilities. It can apply uniform filtering to data from different sources and output it to a destination according to rules defined by the developer.
As the name implies, the data objects Logstash collects are log files. Because log files come from many sources (such as system logs and server logs) and their content is messy, they are hard for humans to read. With Logstash we can collect log files and filter them uniformly into highly readable content, making it easy for developers or operations staff to observe, effectively analyze the performance of the system or project, and set up monitoring and early warning.

influxdb brief

InfluxDB is an open source distributed database for time series, events, and metrics. It is written in Go and requires no external dependencies. Its design goal is distributed, horizontally scalable deployment.
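InfluxDB accepts writes as a simple text "line protocol": a measurement name, optional tags, fields, and a nanosecond timestamp. A minimal sketch of serializing one parsed nginx event into that format (the measurement, tag, and field names here are illustrative, not prescribed by this setup):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Serialize one point into InfluxDB line protocol:
    measurement,tag1=v1 field1=v1,field2=v2 <ns timestamp>"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_parts = []
    for k, v in sorted(fields.items()):
        if isinstance(v, int):
            field_parts.append(f"{k}={v}i")   # integer fields take an 'i' suffix
        elif isinstance(v, str):
            field_parts.append(f'{k}="{v}"')  # string fields are double-quoted
        else:
            field_parts.append(f"{k}={v}")    # floats are written bare
    return f"{measurement},{tag_str} {','.join(field_parts)} {timestamp_ns}"

line = to_line_protocol(
    "nginx_access",
    {"host": "web1", "status": "304"},
    {"bytes": 0, "request": "/index.html"},
    1490862429000000000,
)
print(line)
# nginx_access,host=web1,status=304 bytes=0i,request="/index.html" 1490862429000000000
```

In this article the logstash influxdb output plugin does this serialization for us, but seeing the wire format makes the `data_points` and `coerce_values` settings below easier to understand.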

Brief description of Grafana

Grafana is a full-featured metrics dashboard and graph editor built on JS, and a tool that helps developers find problems.

The relationship between several

filebeat is responsible for collecting newly generated log data and sending it to logstash for filtering; logstash outputs the formatted data to the time series database influxdb; grafana reads the data from influxdb and displays it in real time. This lets you monitor the status of the website, such as visits per minute, bytes sent, 500 errors, and so on.
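For example, a Grafana panel showing requests per minute could be backed by an InfluxQL query along these lines (the measurement and field names depend on how you configure the logstash output below, so treat them as placeholders):

```sql
-- requests per minute over the last hour (names are illustrative)
SELECT count("status")
FROM "nginx_access"
WHERE time > now() - 1h
GROUP BY time(1m)
```

Grafana's query editor builds queries of this shape for you once the InfluxDB data source is configured.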

Steps to build a website log monitoring system

1. Nginx server configuration

1.1 The log_format configuration is as follows

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent  $request_time "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

1.2 The corresponding log file format is as follows

192.168.154.2 - - [30/Mar/2017:01:27:09 -0700] "GET /index.html HTTP/1.1" 304 0 0.000 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36" "-"

2.logstash installation and configuration

2.1 The installation of logstash, my version is logstash-5.6.1-1.noarch

yum install logstash

2.2 The grok patterns used by logstash (added to the logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-xxx/patterns/grok-patterns file) are:

WZ ([^ ]*)
NGINXACCESS %{IP:remote_ip} \- \- \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{WZ:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:status} %{NUMBER:bytes} %{NUMBER:request_time} %{QS:referer} %{QS:agent} %{QS:xforward}
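Before wiring the pattern into logstash, it is worth checking that a real log line actually matches. The following Python regex is a plain-regex equivalent of the NGINXACCESS grok pattern above, with capture group names mirroring the grok field names (the user agent is shortened for brevity):

```python
import re

# Plain-regex equivalent of the NGINXACCESS grok pattern above.
NGINXACCESS = re.compile(
    r'(?P<remote_ip>\S+) - - \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<request>\S*) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) (?P<request_time>[\d.]+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xforward>[^"]*)"'
)

sample = ('192.168.154.2 - - [30/Mar/2017:01:27:09 -0700] '
          '"GET /index.html HTTP/1.1" 304 0 0.000 "-" '
          '"Mozilla/5.0 (Windows NT 6.1; WOW64)" "-"')

m = NGINXACCESS.match(sample)
print(m.group("status"), m.group("request"))  # 304 /index.html
```

If the match fails (returns None), the log_format and the grok pattern are out of sync; in this setup the most common mistake is a missing $request_time field between the byte count and the referer.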

2.3 Contents of logstash.conf configuration file

input {
    file {
        path     => ["/var/log/nginx/access.log"]
        type    => "nginxlog"
        start_position => "beginning"
    }
}

filter {  
    grok {  
      match => { "message" => "%{NGINXACCESS}" }
    }  
} 
output {
    influxdb {
        db => "your database name in influxdb"
        host => "localhost"
        port => "8086"
        user => "your username"
        password => "your password"
        coerce_values => {
            "request" => "varchar"
            "status" => "varchar"
        }
        data_points => {
            "request" => "%{request}"
            "status" => "%{status}"
            "referer" => "%{referer}"
            "agent" => "%{agent}"
            "method" => "%{method}"
            "remote_ip" => "%{remote_ip}"
            "bytes" => "%{bytes}"
            "host" => "%{host}"
            "timestamp" => "%{timestamp}"
        }
    }
}
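Before starting the pipeline it is worth validating the configuration file. In logstash 5.x this can be done with the `--config.test_and_exit` flag (the paths below assume the default RPM install locations used in this article):

```shell
# Parse and validate the pipeline configuration, then exit without running it
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit
```

A "Configuration OK" result here saves a restart loop later when debugging grok or output settings.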

3.filebeat installation and configuration

3.1 Filebeat installation

My filebeat version is 5.1.1

yum install filebeat

3.2 Filebeat configuration; the configuration file is generally located in /etc/filebeat/

filebeat.prospectors:                                                                              
- input_type: log 
  paths:
    - /var/log/nginx/access.log
output.logstash:
  hosts: ["localhost:5044"]  # port 5044 on this host; you can choose the port yourself, but it must match the beats port in the logstash configuration file
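After editing the file, filebeat 5.x can check its own configuration before you start the service (flag as in the 5.x CLI; path assumed from the default install):

```shell
# Validate /etc/filebeat/filebeat.yml without shipping any data
filebeat -configtest -c /etc/filebeat/filebeat.yml
```

This catches YAML indentation mistakes, which are the most frequent failure mode in prospector definitions.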

3.3 Stepped pits

The logstash configuration files live in /etc/logstash/, and we need to set up a soft link under /usr/share/logstash/config

ln -s /etc/logstash/* /usr/share/logstash/config/

The configuration file we created in /etc/logstash/conf.d/ also needs to be soft-linked, but into /usr/share/logstash/, which is different from the above

ln -s /etc/logstash/conf.d/logstash.conf /usr/share/logstash/

Logstash 5.0 and above removed many plugins from the default bundle, including the influxdb output plugin. We need to install it with gem, i.e. install the Logstash plugin as a Gems package.
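The installation itself is done with the `logstash-plugin` tool that ships with logstash (path assumed from the default RPM layout):

```shell
# Install the InfluxDB output plugin, which is not bundled with logstash 5.x
/usr/share/logstash/bin/logstash-plugin install logstash-output-influxdb
```

Once installed, the `influxdb { ... }` output block in logstash.conf above will be recognized.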

4. Installation and configuration of influxdb

4.1 Installation of influxdb

My version is influxdb-1.0.2.x86_64.rpm

yum install influxdb

4.2 Influxdb configuration

# Admin UI configuration: the InfluxDB admin console can be reached at ip-or-domain:8083
[admin]
  enabled = true
  bind-address = ":8083"
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false                                                                                                                             
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
  max-row-limit = 10000
  realm = "InfluxDB"
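With the HTTP endpoint enabled on port 8086, the target database can be created through InfluxDB's HTTP API before logstash starts writing to it. The database name here (`nginx_logs`) is just an example; it must match the `db` setting in logstash.conf:

```shell
# Create the database over the InfluxDB HTTP API (port 8086)
curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE DATABASE nginx_logs"
```

If `auth-enabled` is later turned on, the same request needs `-u user:password` and the matching credentials must go into the logstash output block.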

5. Grafana installation and configuration

5.1 Installation of Grafana

My Grafana version is grafana-4.2.0-1.x86_64.rpm

yum install grafana

5.2 Starting Grafana

service grafana-server restart

5.3 Grafana configuration, configuration data source

(figure: Grafana data source configuration)

5.4 Run the service in the background so it monitors the logs and displays them in real time

nohup bin/logstash -f test.conf --path.data=/var/ &

6. Conclusion

After two days of study and various pitfalls, we finally built a very beautiful real-time monitoring interface, which is conducive to our real-time monitoring of the running status of the website.


Origin blog.csdn.net/cxu123321/article/details/105470333