ELK log file analysis system: Logstash, ElasticSearch, and Kibana, three open source tools (example deployment process)

1. Introduction to the ELK Log File System

ELK is an open source architecture for log management, consisting of three open source tools: Logstash, ElasticSearch, and Kibana. It can analyze, compute statistics on, and visualize log files of any type from any source.

2. ElasticSearch Cluster Installation and Setup

ElasticSearch cluster setup: see the previous article (linked in the original post).

Node introduction for the package installation:

Node      Hostname  IP address     OS version        Installed software
First     node1     192.168.10.5   Linux 7.4 64-bit  ElasticSearch cluster
Second    node2     192.168.10.6   Linux 7.4 64-bit  ElasticSearch cluster, Kibana
Third     apache    192.168.10.7   Linux 7.4 64-bit  Logstash

This ELK deployment builds on the ElasticSearch cluster set up in the previous article.

3. Introduction and Installation of the Logstash Tool

The required packages (logstash-5.5.1.rpm and kibana-5.5.1-x86_64.rpm) can be downloaded from the official Elastic website; see the link in the original post.

1. Introduction to the Logstash tool

Logstash is a powerful data processing tool that provides data transport, format processing, and formatted output, with a powerful plug-in system; it is commonly used for log processing.

  • The philosophy of Logstash is very simple: it does only three things:
  1. Collect: data input
  2. Enrich: data processing, such as filtering and rewriting
  3. Transport: data output
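
A minimal pipeline sketch of those three stages, assuming a hypothetical log path and the cluster address from the table above (the filter stage is optional):

input {
    # Collect: read new lines from a file (hypothetical path)
    file {
        path => "/var/log/example.log"
    }
}
filter {
    # Enrich: parse each raw line into structured fields
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    # Transport: ship the processed events to the ElasticSearch cluster
    elasticsearch {
        hosts => ["192.168.10.5:9200"]
    }
}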

The main components of LogStash:

  • Shipper: log collector. Monitors local log files for changes and collects the latest file content in time; usually a remote agent only needs to run this component.
  • Indexer: log store. Receives logs and writes them to local files.
  • Broker: log hub. Connects multiple Shippers to multiple Indexers.
  • Search and Storage: allows searching and storing events.
  • Web Interface: a web-based display interface.
  • Because the components above can each be deployed independently, the LogStash architecture scales well across a cluster.

LogStash host classification:
1) Agent host: acts as the shipper of events, sending the various local log data to the central host; it only needs to run the Logstash agent program.
2) Central host: runs the various components, including the Broker, Indexer, Search and Storage, and Web Interface, to receive, process, and store log data.

2. Install the Logstash tool to collect Apache service logs and system logs

1) Install Logstash and the Apache service, then start them

Turn off the firewall and SELinux.
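
A quick sketch, assuming firewalld is the active firewall:

[root@localhost ~]# systemctl stop firewalld      // stop the firewall
[root@localhost ~]# systemctl disable firewalld   // keep it off after reboot
[root@localhost ~]# setenforce 0                  // switch SELinux to permissive mode for the current boot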

[root@localhost ~]# hostnamectl set-hostname apache
[root@localhost ~]# su
[root@apache ~]# yum -y install httpd
[root@apache ~]# systemctl start httpd
[root@apache ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
// Upload the software package (logstash-5.5.1.rpm)
[root@apache ~]# rpm -ivh logstash-5.5.1.rpm     
[root@apache ~]# systemctl start logstash.service    
[root@apache ~]# systemctl enable logstash.service
[root@apache ~]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/  
 // link the logstash command into the system PATH
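
A quick sanity check that the linked command is on the PATH (this just prints the version):

[root@apache ~]# logstash --version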
2. Logstash command test, and uploading the system and Apache logs to the ElasticSearch cluster
1) Logstash command test

[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{} }'
Parameter explanation:

  • -f: specify a Logstash configuration file; Logstash configures itself according to that file
  • -e: the string that follows is used as the Logstash configuration (if it is an empty string "", stdin is used as input and stdout as output by default)
    stdin{} means input comes from standard input; stdout{} means output goes to standard output

18:42:57.986 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9601}
www.wodejia.com                                  // typed manually on standard input
2020-10-29T10:43:14.261Z apache www.wodejia.com  // the converted standard output
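
The -f option can also be combined with Logstash's built-in syntax check before loading a real configuration file (a sketch using the system.conf file created later in this article):

[root@apache ~]# logstash -f /etc/logstash/conf.d/system.conf --config.test_and_exit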


[root@apache ~]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'
······················
18:46:34.205 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9601}
www.wodejia.com          // typed manually on standard input
{                        // the converted output in rubydebug format
    "@timestamp" => 2020-10-29T10:46:41.844Z,
      "@version" => "1",
          "host" => "apache",
       "message" => "www.wodejia.com"
}

2) Collect system logs and Apache logs and upload them to the ElasticSearch cluster
[root@apache ~]# cd /etc/logstash/conf.d
[root@apache conf.d]# chmod o+r /var/log/messages     // allow other users to read the system log
[root@apache conf.d]# vim system.conf                 // config file for uploading the system log
input {                                       // input
    file {                                    // read from a file
        path => "/var/log/messages"           // path to the log file
        type => "system"                      // custom type tag
        start_position => "beginning"         // read the file from the beginning
    }
}
output {                                      // output
    elasticsearch {                           // send to ElasticSearch
        hosts => ["192.168.10.5:9200"]        // ElasticSearch host address
        index => "system-%{+YYYY.MM.dd}"      // index name and date format
    }
}
[root@apache conf.d]# vim apache.conf      // config file for uploading the Apache service logs
input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.10.6:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.10.6:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
[root@apache conf.d]# systemctl restart logstash.service  // restart the service
3) Test whether the upload is successful

[Screenshot: the new system and Apache indices visible in the ElasticSearch cluster]
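
The indices can also be verified from the command line (a quick check against the cluster node from the table above):

[root@apache conf.d]# curl 'http://192.168.10.5:9200/_cat/indices?v'    // list all indices with their health and document counts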

  • If the Apache log does not show up, visit the server's HTTP service in a browser, or restart the httpd service and access it once in a browser; otherwise the Apache access log stays empty and nothing is uploaded.
  • [root@apache conf.d]# logstash -f /etc/logstash/conf.d/apache.conf //Reload the configuration file

4. Kibana Tool Introduction and Installation

1. Introduction to the Kibana tool

Kibana is an open source analysis and visualization platform for Elasticsearch, used to search and view interactive data stored in the Elasticsearch index.

  • Kibana can perform advanced data analysis and display through various charts.
  • Kibana makes huge amounts of data easier to understand. It is easy to operate, and its browser-based user interface can quickly create dashboards that display Elasticsearch query results in real time

The main functions of Kibana:
1. Seamless Elasticsearch integration. The Kibana architecture is customized for Elasticsearch and can add any structured and unstructured data to the Elasticsearch index. Kibana also takes full advantage of Elasticsearch's powerful search and analysis capabilities.

2. Integrate your data. Kibana can better handle massive amounts of data and create column charts, line charts, scatter charts, histograms, pie charts, and maps accordingly.

3. Complex data analysis. Kibana has improved the analysis capabilities of Elasticsearch, able to analyze data more intelligently, perform mathematical transformations, and segment data as required.

4. Let more team members benefit. The powerful data visualization interface lets every business role benefit from the collected data.

5. Flexible interface, easier sharing. Kibana makes it convenient to create, save, and share data, and to communicate visualizations quickly.

6. Simple configuration. Kibana is very simple to configure and enable, and the user experience is very friendly. Kibana comes with a web server, which can be quickly up and running.

7. Visualize multiple data sources. Kibana can easily integrate data from Logstash, ES-Hadoop, Beats or third-party technologies into Elasticsearch. The supported third-party technologies include Apache Flume, Fluentd, etc.

8. Simple data export. Kibana can easily export the data of interest, merge it with other data collections, and quickly model and analyze it to discover new results

2. Installation and configuration of Kibana

// Upload the kibana-5.5.1-x86_64.rpm package
[root@node1 ~]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 ~]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak     // back up the configuration file
[root@node1 kibana]# vi kibana.yml                    // edit the configuration file
server.port: 5601                                     // port Kibana listens on
server.host: "0.0.0.0"                                // address Kibana listens on
elasticsearch.url: "http://192.168.10.5:9200"         // connect to ElasticSearch (the cluster address from the table above)
kibana.index: ".kibana"                               // create the .kibana index in ElasticSearch
[root@node1 kibana]# systemctl start kibana.service   // start the Kibana service
[root@node1 kibana]# systemctl enable kibana.service  // start Kibana at boot
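
A quick check that Kibana is listening on the configured port (netstat from net-tools; ss works the same way):

[root@node1 kibana]# netstat -natp | grep 5601    // confirm the Kibana process is listening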

3. Use a browser to check whether Kibana started successfully

[Screenshot: the Kibana web interface, reached in a browser on port 5601]

4. Add the index logs from ElasticSearch to Kibana for statistical analysis

[Screenshot: creating an index pattern in Kibana for the uploaded indices]
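
If an index does not appear when creating the index pattern, it can be confirmed directly in ElasticSearch first (a sketch querying the system index defined earlier):

[root@node1 kibana]# curl 'http://192.168.10.5:9200/system-*/_search?size=1&pretty'    // fetch one document from the system indices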

  • View the detailed statistical analysis and display information
    [Screenshot: Kibana's statistical analysis and visualization of the log data]

Origin: blog.csdn.net/wulimingde/article/details/109362124