Introduction and deployment of the ELK log analysis system

1. Introduction to ELK log analysis system

  • Introduction to Elasticsearch (it should be deployed in cluster mode; its two core functions are storing log data and indexing log data; a quick example follows this list)
    Default port: 9200.
    Key concepts: near real-time, cluster, node, index (index (database) - type (table) - document (record)), shards and replicas.
  • Introduction to Logstash (its single job is to collect logs: it watches server logs for changes and ships them to ES for storage)
    A powerful data processing tool that
    provides data transmission, format processing, and formatted output:
    data input, data processing (such as filtering and rewriting), and data output.
    Main components: Shipper, Indexer, Broker, Search and Storage, Web Interface
  • Introduction to Kibana (the display layer: it presents data in various graphical forms through a web interface you simply open in a browser)
    Seamless integration with Elasticsearch,
    data aggregation, complex data analysis,
    a flexible interface that is easy to share,
    simple configuration, visualization of multiple data sources, and
    simple data export
  • Log processing steps
    1. Centralize log management
    2. Format the logs (Logstash) and output them to Elasticsearch
    3. Index and store the formatted data (Elasticsearch)
    4. Display the data on the front end (Kibana)
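As a quick illustration of the Elasticsearch points above (a sketch; the exact JSON fields vary by version), querying port 9200 on any node returns basic node and cluster information over the REST API:

curl http://localhost:9200/      ## returns the node name, cluster_name, and version as JSON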

2. Deploy ELK log system

2.1 Environmental preparation and requirement description

Requirements

  • Configure an ELK log analysis cluster
  • Use Logstash to collect logs
  • Use Kibana to view and analyze logs

Environment preparation

  • To prevent interference, turn off the firewall and core protection (SELinux) on all servers (see the commands after the table below)
Host OS     Server IP        Hostname  Server description
CentOS7.6   192.168.233.127  node1     Elasticsearch node
CentOS7.6   192.168.233.140  node2     Elasticsearch node
CentOS7.6   192.168.233.130  apache    Apache service and Logstash for log collection
CentOS7.6   192.168.233.200  kibana    Visual display of log analysis information
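The usual way to do this on CentOS 7 (a sketch, assuming firewalld and SELinux are in use) is:

systemctl stop firewalld       ## stop the firewall
systemctl disable firewalld    ## keep it off across reboots
setenforce 0                   ## put SELinux in permissive mode for the current boot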


2.2 Elasticsearch deployment and related plugin installation

2.2.1 Installation of Elasticsearch

[root@localhost ~]# hostnamectl set-hostname node1  ## change the hostname; on the other node, set it to node2
[root@localhost ~]# su
[root@node1 ~]# 

[root@node1 ~]# vim /etc/hosts
192.168.233.127 node1
192.168.233.140 node2
[root@node1 ~]# java -version      ## check the Java environment; the bundled OpenJDK is fine
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

[root@node1 ~]# cd /opt/
Upload the elasticsearch-5.5.0.rpm package
[root@node1 opt]# rpm -ivh elasticsearch-5.5.0.rpm 
[root@node1 opt]# systemctl daemon-reload     ## reload systemd units
[root@node1 opt]# systemctl enable elasticsearch.service

2.2.2 Modify the Elasticsearch configuration file

[root@node1 opt]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak ## keep a backup of the config file

[root@node1 opt]# vim /etc/elasticsearch/elasticsearch.yml
17 cluster.name: my-elk-cluster     ## the cluster name must be identical on all nodes, otherwise they form two separate clusters
23 node.name: node1        ## on node2 set this to node2, matching the hostname
33 path.data: /data/elk_data      ## data storage path
37 path.logs: /var/log/elasticsearch/       ## log storage path
43 bootstrap.memory_lock: false      ## memory locking at bootstrap; locked by default, meaning the system will not reclaim resources once allocated to the service, so we unlock it here
55 network.host: 0.0.0.0 ## listen on all addresses
59 http.port: 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"] ## hostnames of the cluster members
[root@node1 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml  ## verify the effective configuration
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node1 opt]# mkdir -p /data/elk_data      ## create the data storage path
[root@node1 opt]# chown elasticsearch.elasticsearch /data/elk_data/  ## grant ownership to the elasticsearch user
[root@node1 opt]# systemctl start elasticsearch.service 
[root@node1 opt]# netstat -antp | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      10348/java     

2.2.3 View Elasticsearch cluster information and check its health

  • View cluster health (a green status means all primary and replica shards are allocated)
http://192.168.233.140:9200/_cluster/health?pretty ## check the cluster health status
{
  "cluster_name" : "my-elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
  • View cluster information
http://192.168.233.140:9200/_cluster/state?pretty   ## view the cluster state information
{
  "cluster_name" : "my-elk-cluster",
  "version" : 5,
  "state_uuid" : "G3B3azPEROaXv2EVH_hW8w",
  "master_node" : "y27xgCLnTpeZ9C7JtzH6tg",
  "blocks" : { },
  "nodes" : {
    "y27xgCLnTpeZ9C7JtzH6tg" : {
      "name" : "node1",
      "ephemeral_id" : "AlDxEWe3S0yZ7NAp_HCFMg",
      "transport_address" : "192.168.233.127:9300",
      "attributes" : { }
    },
    "mwtmWDrgRHau3r23Fdfw2w" : {
      "name" : "node2",
      "ephemeral_id" : "q2Tg7zKYRe2y5vsz6UgRRQ",
      "transport_address" : "192.168.233.140:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "T9wpASvDQxuxDmfyOgHTDg",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : {
      "y27xgCLnTpeZ9C7JtzH6tg" : [ ],
      "mwtmWDrgRHau3r23Fdfw2w" : [ ]
    }
  }
}

2.2.4 Install the Node.js component

[Install the elasticsearch-head plugin] Checking the cluster with raw HTTP requests as above is extremely inconvenient; installing the elasticsearch-head plugin lets us manage the cluster through a web UI.
[root@node1 ~]# cd /opt/
Upload the packages: node-v8.2.1.tar.gz, phantomjs-2.1.1-linux-x86_64.tar.bz2, elasticsearch-head.tar.gz
[root@node1 opt]# yum install gcc gcc-c++ -y
[root@node1 opt]# tar zxvf node-v8.2.1.tar.gz 
[root@node1 opt]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure 
[root@node1 node-v8.2.1]# make -j3     ## compile with three parallel jobs
[root@node1 node-v8.2.1]# make install
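As a quick sanity check (assuming make install placed the binaries on the default PATH), verify the Node.js toolchain before continuing:

[root@node1 node-v8.2.1]# node -v      ## should print v8.2.1
[root@node1 node-v8.2.1]# npm -v       ## npm is needed later to install elasticsearch-head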

2.2.5 Install phantomjs front-end framework

Prepare the phantomjs-2.1.1-linux-x86_64.tar.bz2 package

[root@node1 opt]# tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 
[root@node1 opt]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/
[root@node1 opt]# mv phantomjs-2.1.1-linux-x86_64 /usr/local/src/

2.2.6 Install elasticsearch-head data visualization tool

[root@node1 opt]# tar zxvf elasticsearch-head.tar.gz -C /usr/local/src/
[root@node1 opt]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm install

2.2.7 Modify the elasticsearch-head configuration file and start the service

[root@node1 elasticsearch-head]# cd
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml   ## append at the end of the config file
http.cors.enabled: true  ## enable cross-origin (CORS) access support; the default is false
http.cors.allow-origin: "*"      ## domains allowed for cross-origin access
[root@node1 ~]# systemctl restart elasticsearch.service  ## restart the service

[root@node1 ~]# cd /usr/local/src/elasticsearch-head/     ## start the head server
[root@node1 elasticsearch-head]# npm run start &      ## run it in the background
[1] 111596
[root@node1 elasticsearch-head]# 
> [email protected] start /usr/local/src/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100

[root@node1 elasticsearch-head]# netstat -lnutp | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      111606/grunt        
[root@node1 elasticsearch-head]# netstat -lnutp | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      111489/java         

2.2.8 Test

  • On the host machine, open
    192.168.233.127:9100
    and enter 192.168.233.127:9200 in the Elasticsearch connection field ## the cluster shows as healthy (green)

Likewise, open 192.168.233.140:9100
and enter 192.168.233.140:9200 in the Elasticsearch connection field ## the cluster shows as healthy (green)

  • In the web UI, log on to the node1 node and create an index named index-demo with type test; it is created successfully

    • Write data on node1
[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'content-Type:application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true         ## 表示写入成功
}
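To read the document back (a sketch using the standard Elasticsearch 5.x document GET API):

[root@node1 ~]# curl -XGET 'localhost:9200/index-demo/test/1?pretty'   ## returns the stored document under "_source"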
  • Now click refresh on the Data Browser tab of the web UI; the written data is visible

2.3 Installation of the Apache service and configuration of Logstash on the apache server

2.3.1 Installation of apache service and Logstash

[root@apache ~]# yum -y install httpd
[root@apache ~]# systemctl start httpd
[root@apache ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Prepare the logstash-5.5.1.rpm package
[root@apache ~]# rpm -ivh logstash-5.5.1.rpm 
[root@apache ~]# systemctl start logstash
[root@apache ~]# systemctl enable logstash
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@apache ~]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin

2.3.2 Testing the functions of Logstash

Do a docking test to verify that logstash (on the apache server) and elasticsearch (on the nodes) work together correctly.
logstash command options:
 -f    specify a logstash config file and configure logstash from it
 -e    take the string as the logstash config (if "" is given, stdin is used as input and stdout as output by default)
 -t    test whether the config file is valid, then exit
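A minimal smoke test with -e (a sketch; the exact event formatting depends on the codec) pipes stdin straight to stdout:

[root@apache ~]# logstash -e 'input { stdin{} } output { stdout{} }'
## type any line and logstash echoes it back as a timestamped event, confirming the pipeline works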

2.3.3 Docking configuration on the apache server

[root@apache ~]# chmod o+r /var/log/messages  ## make the system log readable by other users
[root@apache ~]# vim /etc/logstash/conf.d/system.conf
input {
        file{
          path => "/var/log/messages"
          type => "system"
          start_position => "beginning"
          }
      }
output {
       elasticsearch {
         hosts => ["192.168.233.127:9200"]
         index => "system-%{+YYY.MM.dd}"
         }
       }
[root@apache ~]# systemctl restart logstash.service   ## restart the service
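To confirm that the system log index was created in Elasticsearch (a sketch using the _cat API):

[root@apache ~]# curl 'http://192.168.233.127:9200/_cat/indices?v'   ## a system-<date> index should appear in the list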

2.4 Installation of kibana

  • Prepare kibana-5.5.1-x86_64.rpm
[root@kibana ~]# mv kibana-5.5.1-x86_64.rpm  /usr/local/src/
[root@kibana ~]# cd /usr/local/src/
[root@kibana src]# rpm -ivh kibana-5.5.1-x86_64.rpm 
[root@kibana ~]# cd /etc/kibana
[root@kibana kibana]# cp -p kibana.yml kibana.yml.bak
[root@kibana kibana]# vim kibana.yml
2 server.port: 5601   ## the port Kibana listens on
7 server.host: "0.0.0.0"   ## the address Kibana listens on
21 elasticsearch.url: "http://192.168.233.127:9200"      ## connect to elasticsearch
30 kibana.index: ".kibana"     ## add the .kibana index in elasticsearch

[root@kibana kibana]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@kibana kibana]# systemctl start kibana
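As a quick check (the PID and program name will differ on your system), confirm that Kibana is listening on its port:

[root@kibana kibana]# netstat -lnutp | grep 5601   ## kibana (a node process) should be listening on 5601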

2.4.1 kibana configuration

Visit 192.168.233.200:5601 and create an index pattern named system-* ## this connects Kibana to the system log index created earlier

  • Log files of the apache server (there are two: access and error)
[root@apache ~]# cd /etc/logstash/conf.d
[root@apache conf.d]# vim apache_log.conf
input {
      file{
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
        }
      file{
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
        }
      }
output {
      if [type] == "access" {
      elasticsearch {
        hosts => ["192.168.233.127:9200"]
        index => "apache_access-%{+YYY.MM.dd}"
        }
       }
      if [type] == "error" {
      elasticsearch {
        hosts => ["192.168.233.127:9200"]
        index => "apache_error-%{+YYY.MM.dd}"
        }
      } 
     }
[root@apache conf.d]# /usr/share/logstash/bin/logstash -f apache_log.conf   ## dock Apache's logs with logstash and ship them to ES
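Since access_log only grows when requests arrive, generate a few entries first (a sketch; any HTTP request to the apache server will do):

[root@apache ~]# curl http://192.168.233.130/   ## each request appends a line to /etc/httpd/logs/access_log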

2.5 Testing

Log in from the host machine to 192.168.233.127:9100 ## check the index information;
you will find two new indices: apache_error-2020.09.14 and apache_access-2020.09.14

Open 192.168.233.200:5601 and
create the two index patterns; you can then view the log file information through Kibana
