[Linux operations and maintenance architecture] ------ Building an ELK log analysis system

1. Introduction to ELK log analysis system:

Log server:

  • Improves security;
  • Stores logs centrally;
  • Drawback: logs are difficult to analyze

ELK log processing steps:

  • Centralize log management;
  • Format the logs (Logstash) and output them to Elasticsearch;
  • Index and store the formatted data (Elasticsearch);
  • Display the data on the front end (Kibana)


ELK:Elasticsearch + Logstash + Kibana

ELK is the abbreviation of Elasticsearch, Logstash, and Kibana. These three components form the core of the suite, although the stack is not limited to them.

(1) Elasticsearch is a real-time full-text search and analysis engine that collects, analyzes, and stores data. It is a scalable distributed system that provides efficient search through open REST and Java APIs, and it is built on top of the Apache Lucene search engine library.

(2) Logstash is a tool for collecting, analyzing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging (such as RabbitMQ), and JMX, and it can output data in a variety of ways, including email, websockets, and Elasticsearch.

(3) Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It leverages Elasticsearch's REST interface to retrieve data, allowing users not only to create custom dashboard views of their own data, but also to query and filter data in ad-hoc ways.
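
For instance, the same REST interface that Kibana uses can be queried directly with curl; a minimal sketch, assuming a node from the setup later in this article is already running at 192.168.220.136:9200:

curl -XGET 'http://192.168.220.136:9200/_cat/indices?v'           //list all indices over the REST interface
curl -XGET 'http://192.168.220.136:9200/_search?q=hello&pretty'   //simple full-text search across all indices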

Kibana's main features:

  • Seamless integration with Elasticsearch;
  • Data integration and complex data analysis;
  • A flexible interface that makes sharing easier;
  • Simple configuration and visualization of multiple data sources;
  • Simple data export;
  • Benefits for more team members.

2. Build the ELK log analysis system:

Role                         Main software
node1 (192.168.220.136)      Elasticsearch, Kibana
node2 (192.168.220.137)      Elasticsearch, Kibana
Apache (192.168.220.135)     Logstash

Step 1: Configure the elasticsearch environment first

(1) Modify the two host names, namely: node1 and node2

(2) Modify the hosts file:

vim /etc/hosts
Add the following hostname and IP address entries (required on both node hosts):
192.168.220.136 node1
192.168.220.137 node2

(3) Close the firewall and disable SELinux:

systemctl stop firewalld.service 
setenforce 0
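
If these settings should survive a reboot (an optional step, not part of the original commands), something along these lines can be added:

systemctl disable firewalld.service                                    //do not start firewalld at boot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     //keep SELinux disabled after reboot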

Step 2: Deploy and install elasticsearch software (both nodes are required)

(1) Installation:

rpm -ivh elasticsearch-5.5.0.rpm        //install the package
systemctl daemon-reload                 //reload the service configuration files
systemctl enable elasticsearch.service   //enable the service at boot
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

(2) Modify the configuration file:

Note: The configuration on the second node is the same as the first; just change the node name and IP address.

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: my-elk-cluster           //cluster name (user-defined)
node.name: node-1                      //node name
path.data: /data/elk_data               //data storage path
path.logs: /var/log/elasticsearch/     //log storage path
bootstrap.memory_lock: false           //do not lock memory at startup
network.host: 192.168.220.136          //IP address the service binds to (the local address)
http.port: 9200            //port
discovery.zen.ping.unicast.hosts: ["node1", "node2"]  //cluster discovery via unicast
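
To double-check that the file contains exactly the intended settings, you can print only the non-comment lines; a quick verification sketch:

grep -v "^#" /etc/elasticsearch/elasticsearch.yml    //show only the active (non-comment) lines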

(3) Create a data storage path and authorize:

mkdir -p /data/elk_data
chown elasticsearch:elasticsearch /data/elk_data/

(4) Start the service:

systemctl start elasticsearch.service
netstat -natp | grep 9200


1. Enter the following URL in the browser to check the health status of the cluster:

http://192.168.220.136:9200/_cluster/health?pretty


2. Check the cluster status information:

http://192.168.220.136:9200/_cluster/state?pretty
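
The same checks can also be run from the shell with curl instead of a browser; for example:

curl -s 'http://192.168.220.136:9200/_cluster/health?pretty'    //cluster health (the status should normally be green)
curl -s 'http://192.168.220.136:9200/_cluster/state?pretty'     //full cluster state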


Step 3: Install the elasticsearch-head plugin

(1) Install dependent packages:

yum install gcc gcc-c++ make -y

(2) Compile and install node components:

tar zvxf node-v8.2.1.tar.gz -C /opt/
cd /opt/node-v8.2.1/
./configure 
make -j3     //this step takes a while; be patient
make install
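
To confirm the Node.js build installed correctly before continuing, you can check the versions; for example:

node -v     //should print v8.2.1
npm -v      //npm is installed together with node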

(3) Install phantomjs front-end framework:

tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /opt/
cd phantomjs-2.1.1-linux-x86_64/bin
cp phantomjs /usr/local/bin/

(4) Install the elasticsearch-head data visualization tool:

tar zvxf elasticsearch-head.tar.gz -C /opt/
cd /opt/elasticsearch-head/
npm install

(5) Modify the main configuration file:

vim /etc/elasticsearch/elasticsearch.yml
Append the following two lines at the end:
http.cors.enabled: true
http.cors.allow-origin: "*"
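
The new CORS settings take effect only after Elasticsearch is restarted, so restart the service (on both nodes):

systemctl restart elasticsearch.service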

(6) Start elasticsearch-head

cd /opt/elasticsearch-head/
npm run start &  //run it in the background

At this point, you can check the status of the two ports 9100 and 9200:

netstat -lnupt |grep 9100

netstat -lnupt |grep 9200

(Verify the ports on both node1 and node2.)

Step 4: Create Index

You can create a new index directly in the elasticsearch-head web interface.
You can also enter the following command to create an index:

curl -XPUT '192.168.220.136:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
//the index name is index-demo and the type is test
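
You can also read the document back through the REST interface to confirm it was stored; for example:

curl -XGET '192.168.220.136:9200/index-demo/test/1?pretty'    //retrieve the document that was just created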

Refresh the browser, and you will see the index information just created. You can see that the index is split into 5 shards by default, with 1 replica.

Step 5: Install Logstash, collect some logs, and output them to Elasticsearch

(1) Modify the host name

hostname apache

(2) Install the Apache service:

systemctl stop firewalld.service
setenforce 0
yum install httpd -y
systemctl start httpd.service

(3) Install logstash

rpm -ivh logstash-5.5.1.rpm
systemctl start logstash
systemctl enable logstash
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    //create a symbolic link in the bin directory

(4) Test whether Logstash (on the Apache host) and Elasticsearch (on the node hosts) work together correctly:

  • You can test with the logstash command:
[root@apache bin]# logstash
-f: specify a Logstash configuration file and configure Logstash according to it
-e: followed by a string that is treated as the Logstash configuration (if the string is empty, stdin is used as input and stdout as output by default), as in the sketch after this list
-t: test whether the configuration file is correct, then exit
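
For example, a quick sanity test using the -e option described above (a minimal sketch, not a command from the original steps) reads from standard input and echoes each line back to standard output as an event:

logstash -e 'input { stdin{} } output { stdout{} }'      //type any line and Logstash prints it back; press Ctrl+C to exit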

(5) Use standard input as the input and Elasticsearch as the output:

logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.220.136:9200"] } }'

At this point, visit http://192.168.220.136:9200/ in the browser to view the index information; a new index such as logstash-2019.12.17 will appear.
(6) Log in to the Apache host and configure the integration:

  • A Logstash configuration file consists mainly of three parts: input, output, and filter (the filter is optional, depending on the situation)

Grant read permission on the log file:
chmod o+r /var/log/messages

Create and edit the configuration file:
vim /etc/logstash/conf.d/system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.220.136:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
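
Before restarting, the file can be validated with the -t option mentioned earlier; a quick check along these lines:

logstash -t -f /etc/logstash/conf.d/system.conf     //test the configuration syntax and exit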

Restart the service:

systemctl restart logstash.service

(7) View the index information in the browser: a new index such as system-2019.12.17 will appear.

Step 6: Install Kibana on the node1 host

rpm -ivh kibana-5.5.1-x86_64.rpm
cd /etc/kibana/
cp kibana.yml kibana.yml.bak

vim kibana.yml
Modify and enable the following settings:
server.port: 5601        //Kibana port
server.host: "0.0.0.0"    //listen on all addresses
elasticsearch.url: "http://192.168.220.136:9200"  
kibana.index: ".kibana"

Start the service:
systemctl start kibana.service 
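
To confirm Kibana is running, check that port 5601 is listening; for example:

netstat -natp | grep 5601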

(1) Browser access: 192.168.220.136:5601

Next, create an index pattern in the Kibana interface: system-* (to connect to the system logs)
(2) Connect to the Apache log files on the Apache host (both normal access logs and error logs):

cd /etc/logstash/conf.d/

vim apache_log.conf  //create the configuration file and add the following:

input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.220.136:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.220.136:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}

Run Logstash with the new configuration file:

/usr/share/logstash/bin/logstash -f apache_log.conf
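
The access_log only receives entries when the Apache site is actually visited, so it can help to generate a request first (assuming the Apache host IP from the table above):

curl http://192.168.220.135/     //generate an entry in /etc/httpd/logs/access_log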

In the Kibana interface, create two index patterns:

1. apache_access-*
2. apache_error-*
After a while, you can see these two log files in Discover:
Because the nodes replicate each other's data (as set up earlier), disaster-recovery capability is improved, and a single node going down will not cause data loss.
