ELK log analysis system: theory plus experiment

Introduction to ELK log analysis system:

Log server

● Improves security

● Centralized storage of logs

● Defect

  • Logs are difficult to analyze

● ELK log analysis process: logs are collected and sent to Elasticsearch for storage; Kibana then displays the collected log information for the administrator to view
ELK log analysis system components

● Elasticsearch

● Logstash

● Kibana
Log processing steps

  • 1. Centralize log management
  • 2. Format the logs and output them to Elasticsearch
  • 3. Index and store the formatted data
  • 4. Display the data on the front end

Introduction to Elasticsearch

Elasticsearch overview

● Provides a full-text search engine with distributed multi-user capabilities

Elasticsearch core concept
● Near real-time

  • Elasticsearch is a near real-time search platform, meaning there is a slight delay from when a document is indexed until it becomes searchable (usually about 1 second)

● Cluster: multiple nodes

  • A cluster is organized from one or more nodes, which together hold the entire data set and jointly provide indexing and search functions. One of the nodes is the master node, which is elected, and the cluster provides cross-node joint indexing and search. A cluster has a unique name (the default is elasticsearch), and a cluster may consist of just one node. It is strongly recommended to configure cluster mode when setting up Elasticsearch

● Node

  • A node is a single server that is part of the cluster; it stores data and participates in the cluster's indexing and search functions. Like a cluster, a node is identified by a name, which by default is a random character name assigned when the node starts. You can also define the name yourself; it matters because it identifies which server in the cluster a node corresponds to.
    A node joins a cluster by specifying the cluster name. By default, each node is set up to join the elasticsearch cluster, so if multiple nodes are started and can discover each other, they will automatically form a cluster named elasticsearch

● Index

  • An index is a collection of documents with similar characteristics. For example, you can have one index for customer data, another for a product catalog, and another for order data. An index is identified by a name (which must be all lowercase), and that name must be used whenever you index, search, update, or delete the documents in it. Within a cluster you can define as many indexes as you want

● Index (database) -> type (table) -> document (record)

● Shards and replicas: an index can be split into multiple shards, and each shard can have zero or more replicas for redundancy (see the quick curl checks below)
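
As a quick illustration of these concepts, the following curl commands (a minimal sketch, assuming a node is reachable at localhost:9200) check cluster health, list the nodes, and create an index with explicit shard and replica counts:

curl 'localhost:9200/_cluster/health?pretty'    ## cluster name, status, and node count
curl 'localhost:9200/_cat/nodes?v'              ## list the nodes; the elected master is marked with *
curl -XPUT 'localhost:9200/demo' -H 'Content-Type: application/json' -d '{"settings":{"number_of_shards":3,"number_of_replicas":1}}'    ## index names must be all lowercase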

Introduction to Logstash

● A powerful data processing tool

● Implements data transmission, format processing, and formatted output

● Works in three stages: data input, data processing (such as filtering and rewriting), and data output

The main components of Logstash
● Shipper: log collector. Monitors changes to local log files and collects the latest content of the log files in time

● Indexer: log store. Receives logs and writes them to local files

● Broker: log hub. Connects multiple Shippers and multiple Indexers

● Search and storage: allows searching and storing of events

● Web interface: a web-based display interface

Kibana

Introduction to Kibana
● An open source analysis and visualization platform for Elasticsearch

● Search and view data stored in Elasticsearch indexes

● Advanced data analysis and display through various charts

Kibana's main functions:

  • Seamless integration with Elasticsearch
  • Integrate data for complex data analysis
  • Benefit more team members
  • Flexible interface, easier to share
  • Simple configuration, visualization of multiple data sources
  • Simple data export

ELK server deployment

Lab environment (addresses as used in the steps below):

  • node1: 192.168.148.132, Elasticsearch
  • node2: 192.168.148.133, Elasticsearch
  • apache: Logstash + httpd
  • kibana: 192.168.148.135, Kibana

One. Elasticsearch server deployment

Recommended steps (the operations on node1 and node2 are the same; only node1 is shown):
1. Modify the hostname to tell the nodes apart (node1's configuration is shown here)

[root@localhost ~]# setenforce 0     ## temporarily disable SELinux enforcement
[root@localhost ~]# iptables -F      ## flush the firewall rules
[root@localhost ~]# hostnamectl set-hostname node1    ## change the hostname to node1 for easy identification
[root@localhost ~]# su
[root@node1 ~]# vim /etc/hosts    ## set up the local host mapping file; do this on both node1 and node2
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.148.132 node1     ## add the two IP address + hostname entries
192.168.148.133 node2
Node2:
[root@node1 ~]# scp /etc/hosts root@192.168.148.133:/etc/hosts     ## copy the modified file to node2

2. Put the Elasticsearch package in the /opt directory and install it (both nodes need it installed)

[root@node1 ~]# cd /opt/      ## put the package in the /opt directory

3. Change the configuration file

[root@node1 opt]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak    ## make a backup copy first, to be safe
[root@node1 elasticsearch]# vim elasticsearch.yml     ## modify the configuration file
17 cluster.name: my-elk-cluster   ## line 17: cluster name
23 node.name: node1         ## line 23: node name
33 path.data: /data/elk_data     ## line 33: data directory (created manually below)
37 path.logs: /var/log/elasticsearch/     ## line 37: log directory
43 bootstrap.memory_lock: false     ## do not lock memory at startup
55 network.host: 0.0.0.0        ## bind address for the service; 0.0.0.0 means all addresses
59 http.port: 9200       ## listen on port 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]     ## cluster discovery via unicast


[root@node1 elasticsearch]# mkdir -p /data/elk_data     ## create the data directory
[root@node1 elasticsearch]# chown elasticsearch.elasticsearch /data/elk_data/   ## set the owner and group
[root@node1 elasticsearch]# systemctl start elasticsearch.service    ## start the service

4. Now open a browser and enter 192.168.148.132:9200 to access the page.
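The same check can be done from the shell instead of a browser (assuming the service has finished starting):

[root@node1 elasticsearch]# curl http://192.168.148.132:9200    ## returns the node name, cluster name, and version as JSON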
Node2 uses the same configuration, except that the node name on line 23 is changed to node2:
node.name: node2
5. Check cluster health and status
To check health, append /_cluster/health?pretty to the address; to check status, append /_cluster/state?pretty.
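The same two checks from the shell (using node1's address; either node works):

[root@node1 elasticsearch]# curl 'http://192.168.148.132:9200/_cluster/health?pretty'    ## status green means all shards are allocated
[root@node1 elasticsearch]# curl 'http://192.168.148.132:9200/_cluster/state?pretty'     ## full cluster state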

Install the elasticsearch-head plugin on node1 and node2 (node1's installation is shown here)
[root@node1 elasticsearch]# cd /opt/


[root@node1 opt]# yum -y install gcc gcc-c++ make    ## install the build environment packages
[root@node1 opt]# tar zxvf node-v8.2.1.tar.gz     ## unpack the source package
[root@node1 opt]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make -j3    ## compile with 3 parallel jobs

[root@node1 node-v8.2.1]# make install    ## install node
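If the build and install succeeded, node and npm should now be on the PATH (a quick sanity check):

[root@node1 node-v8.2.1]# node -v    ## should print v8.2.1
[root@node1 node-v8.2.1]# npm -v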

6. Install phantomjs (front-end framework)

[root@node1 opt]# tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src 
[root@node1 local]# cd /usr/local/src/phantomjs-2.1.1-linux-x86_64
[root@node1 phantomjs-2.1.1-linux-x86_64]# cd bin/
[root@node1 bin]# ls
phantomjs
[root@node1 bin]# cp phantomjs /usr/local/bin/    ## copy the command where the system can find it
Install elasticsearch-head
[root@node1 opt]# tar zxvf elasticsearch-head.tar.gz -C /usr/local/src/     ## unpack the archive
[root@node1 opt]# cd /usr/local/src/
[root@node1 src]# ls     
elasticsearch-head  phantomjs-2.1.1-linux-x86_64
[root@node1 src]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install     ## initialize the project (install dependencies)


[root@node1 elasticsearch-head]# vim /etc/elasticsearch/elasticsearch.yml   ## modify the main configuration file, then restart the service
#
#action.destructive_requires_name: true
http.cors.enabled: true         ## add these two lines at the end of the file
http.cors.allow-origin: "*"
[root@node1 elasticsearch-head]# npm run start &   ## start the project in the background
[root@node1 elasticsearch-head]# systemctl restart elasticsearch   ## restart the Elasticsearch service
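If everything started correctly, Elasticsearch should be listening on 9200 and elasticsearch-head on 9100 (a quick check before the browser test):

[root@node1 elasticsearch-head]# netstat -lnupt | grep -E '9100|9200'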

Node2 is configured in the same way.
7. Once both services are up, open a browser and enter http://192.168.148.132:9100/ to test access.
http://192.168.148.133:9100/ shows the same information.
8. Back on node1, create an index named index-demo with type test, and log in to the web page to verify
1. Create the index on the web page
2. Go back to node1 and insert data

[root@node1 /]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"lisi","mesg":"hello word"}'    ## index index-demo, type test, document id 1, user "lisi"
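To confirm the document was stored, it can be read back with the same index, type, and id:

[root@node1 /]# curl -XGET 'localhost:9200/index-demo/test/1?pretty'    ## returns the "lisi" document with "found": true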

3. Go back to the elasticsearch-head page in the browser to view the data. 192.168.148.132 shows the same view.

Two. Logstash server deployment

Set up a new node, install Logstash on it, collect some logs, and output them to Elasticsearch
1. Turn off the firewall setting

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# iptables -F

2. Modify the hostname to distinguish this node

[root@localhost ~]# hostnamectl set-hostname apache
[root@localhost ~]# su

3. Install the httpd service

[root@apache ~]# yum -y install httpd

4. Install and configure Logstash

[root@apache ~]# cd /opt/


[root@apache opt]# rpm -ivh logstash-5.5.1.rpm    ## install the package
[root@apache opt]# systemctl start logstash.service    ## start the service
[root@apache opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/   ## symlink the command so the system can find it

Logstash command-line options:
● -f specify a Logstash configuration file; Logstash is configured according to that file
● -e the string that follows is used as the configuration (if "" is given, stdin is used as input and stdout as output by default)
● -t test whether the configuration file is correct, then exit
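For example, once a configuration file exists (such as the system.conf created in step 7 below), it can be syntax-checked before use; it should print Configuration OK if the file parses:

[root@apache opt]# logstash -t -f /etc/logstash/conf.d/system.conf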

[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{} }'    ## standard input and output

5. Use rubydebug to display detailed output (a codec is an encoder/decoder)

[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'

6. Use Logstash to write information into Elasticsearch, connecting input to output

[root@apache opt]# logstash -e 'input { stdin{} } output { elasticsearch{ hosts=>["192.168.148.132:9200"] } }'    ## hosts points to the address and port of master node node1

Go back to the elasticsearch-head page and refresh to view the new data.
7. Connect to Elasticsearch through a configuration file

[root@apache log]# cd /var/log
[root@apache log]# chmod o+r messages    ## give others read permission on the system log
[root@apache log]# ll /var/log/messages
-rw----r--. 1 root root 120713 Sep 15 15:55 /var/log/messages
[root@apache log]# cd /etc/logstash/conf.d/     ## switch to the conf.d directory
[root@apache conf.d]# vim system.conf    ## write the configuration file
input {
       file{
        path => "/var/log/messages"       ## path of the log to collect
        type => "system"     ## index type
        start_position => "beginning"
        }
      }
output {
        elasticsearch {
          hosts => ["192.168.148.132:9200"]    ## host address
          index => "system-%{+YYYY.MM.dd}"    ## index name prefix
          }
        }
[root@apache conf.d]# systemctl restart logstash.service     ## restart the service
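After the restart, the new index can also be confirmed from the shell (a quick check against node1):

[root@apache conf.d]# curl 'http://192.168.148.132:9200/_cat/indices?v'    ## look for an index named system-YYYY.MM.dd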

8. Refresh the browser to see the new system index.

Three. Kibana server deployment

1. Configure Kibana on the fourth node

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0
[root@localhost ~]# hostnamectl set-hostname kibana
[root@localhost ~]# su


[root@kibana ~]# rpm -ivh kibana-5.5.1-x86_64.rpm    ## install the package
[root@kibana ~]# cd /etc/kibana/
[root@kibana kibana]# cp -p kibana.yml kibana.yml.bak    ## make a backup copy
[root@kibana kibana]# vim kibana.yml     ## modify the configuration file
2 server.port: 5601     ## line 2: listening port
7 server.host: "0.0.0.0"    ## line 7: listen address for the service
21 elasticsearch.url: "http://192.168.148.132:9200"   ## line 21: connect to Elasticsearch
30 kibana.index: ".kibana"      ## line 30: add the .kibana index in Elasticsearch
[root@kibana kibana]# systemctl start kibana.service   ## start the service
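Once started, Kibana should be listening on port 5601 (a quick check before the browser test):

[root@kibana kibana]# netstat -lnupt | grep 5601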

2. Open a browser and enter http://192.168.148.135:5601/ to access the page and test it.
Note that the index entered in Kibana (here the system index) must already exist on the Elasticsearch side so that Kibana can collect statistics on it.
3. The log information can then be viewed.
4. Go back to the apache node and configure collection of the apache host's log files (both successful accesses and errors)

[root@apache conf.d]# vim apache-log.conf    ## create the log configuration file
input {
       file{
        path => "/etc/httpd/logs/access_log"     ## absolute path of the log file
        type => "access"     ## file type
        start_position => "beginning"
        }
       file{
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
        }
      }
output {
        if [type] == "access" {      ## conditional that decides which index is used
        elasticsearch {
          hosts => ["192.168.148.132:9200"]
          index => "apache_access-%{+YYYY.MM.dd}"    ## index name
          }
        }
        if [type] == "error" {
        elasticsearch {
          hosts => ["192.168.148.132:9200"]
          index => "apache_error-%{+YYYY.MM.dd}"
          }
        }
        }
[root@apache conf.d]# logstash -f apache-log.conf    ## run with the specified configuration file as a test
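Note that access_log only receives entries when the httpd page is actually visited, so it may help to generate some traffic first, for example from the apache node itself:

[root@apache conf.d]# curl http://localhost    ## writes an entry into access_log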

5. Use the browser to view the Elasticsearch side again; the two apache indexes appear.
6. Back on the Kibana side, create index patterns for the two apache indexes.
At this point, the ELK deployment is complete.
