A brief introduction to ELK 2

Learning objectives

  • Be able to deploy kibana and connect it to an elasticsearch cluster
  • Be able to view elasticsearch index information through kibana
  • Know the advantages of collecting logs with filebeat compared to logstash
  • Be able to install filebeat
  • Be able to use filebeat to collect logs and forward them to logstash

kibana

Introduction to kibana

Kibana is an open source visualization platform that provides a friendly web interface for managing ElasticSearch clusters and helps summarize, analyze and search important log data.

Documentation: Set up | Kibana Guide [8.11] | Elastic

kibana deployment

Step 1: Install kibana on the kibana server (VM1 in my case)

[root@vm1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.2-x86_64.rpm
[root@vm1 ~]# rpm -ivh kibana-6.5.2-x86_64.rpm 

Step 2: Configure kibana

[root@vm1 ~]# cat /etc/kibana/kibana.yml |grep -v '#' |grep -v '^$'
server.port: 5601                               port
server.host: "0.0.0.0"                          listen on all interfaces, so anyone can access it
elasticsearch.url: "http://10.1.1.12:9200"      ES cluster address
logging.dest: /var/log/kibana.log               I log to a file here to simplify troubleshooting and debugging

You need to create the log file yourself and adjust its owner and group:

[root@vm1 ~]# touch /var/log/kibana.log
[root@vm1 ~]# chown kibana.kibana /var/log/kibana.log
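
Before starting kibana, it is worth confirming that the ES cluster address from the config is actually reachable from vm1; a plain curl against port 9200 should return the cluster's banner JSON:

[root@vm1 ~]# curl http://10.1.1.12:9200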

Step 3: Start kibana service

[root@vm1 ~]# systemctl start kibana
[root@vm1 ~]# systemctl enable kibana

[root@vm1 ~]# lsof -i:5601
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    10420 kibana   11u  IPv4 111974      0t0  TCP *:esmagent (LISTEN)
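
The NAME column shows esmagent instead of 5601 simply because /etc/services maps that port number to the name esmagent; you can confirm the mapping with:

[root@vm1 ~]# grep -w 5601 /etc/services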

Step 4: Access http://<kibana server IP>:5601 in a browser
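
If the page does not load, a quick check from the shell on vm1 helps separate a kibana problem from a network or firewall problem (kibana 6.x exposes a status API on the same port):

[root@vm1 ~]# curl -s http://127.0.0.1:5601/api/status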

Chinese version of kibana

https://github.com/anbai-inc/Kibana_Hanization/

[root@vm1 ~]# wget https://github.com/anbai-inc/Kibana_Hanization/archive/master.zip
[root@vm1 ~]# unzip master.zip -d /usr/local
[root@vm1 ~]# cd /usr/local/Kibana_Hanization-master/
Note: 1. Python must be installed; 2. the rpm version of kibana installs to /usr/share/kibana/
[root@vm1 Kibana_Hanization-master]# python main.py /usr/share/kibana/
Restart kibana after the translation has been applied:
[root@vm1 Kibana_Hanization-master]# systemctl stop kibana
[root@vm1 Kibana_Hanization-master]# systemctl start kibana

Access http://<kibana server IP>:5601 in the browser again

View cluster information through kibana

View the log index collected by logstash through kibana

Finally, click Discover to view the data

Make visual graphics through kibana

filebeat

Logstash consumes a lot of memory and other resources, so installing it on every server whose logs need to be collected would put extra pressure on the application servers. We therefore use a lightweight collection tool that is more efficient and saves resources.

Beats is a family of lightweight log collection and processing tools that take up very few resources.

  • Packetbeat: Network data (collects network traffic data)

  • Metricbeat: Metrics (collects data such as CPU and memory usage at the system, process and file system levels)

  • Filebeat: File (collect log file data)

  • Winlogbeat: windows event log (collects Windows event log data)

  • Auditbeat: Audit data (collect audit logs)

  • Heartbeat: Uptime monitoring (periodically checks whether services are up and reachable)

We mainly collect log information here, so we only discuss filebeat.

filebeat can ship the collected log data directly to the ES cluster (the EFK architecture), or hand it to logstash (which listens on port ==5044==).

filebeat collects logs and transmits them directly to the ES cluster

Step 1: Download and install filebeat (bring up another virtual machine, vm4, to act as the filebeat host; 1 GB of memory is enough)

[root@vm4 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.2-x86_64.rpm
[root@vm4 ~]# rpm -ivh filebeat-6.5.2-x86_64.rpm

Step 2: Configure filebeat to collect logs

[root@vm4 ~]# cat /etc/filebeat/filebeat.yml |grep -v '#' |grep -v '^$'
filebeat.inputs:
- type: log
  enabled: true                     changed to true
  paths:
    - /var/log/*.log                path of the logs to collect
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.elasticsearch:               output to the ES cluster
  hosts: ["10.1.1.12:9200"]         IP of an ES cluster node
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

Step 3: Start the service

[root@vm4 ~]# systemctl start filebeat
[root@vm4 ~]# systemctl enable filebeat

Step 4: Verify

Verify on es-head and kibana (the verification process is omitted, refer to the previous notes)
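
If es-head happens to be unavailable, a quick shell-level check also works (a sketch; the node IP is taken from the config above, and by default filebeat 6.x names its index filebeat-<version>-<date>):

[root@vm4 ~]# echo "filebeat test $(date)" >> /var/log/yum.log        append a line so there is something to ship
[root@vm4 ~]# curl -s 'http://10.1.1.12:9200/_cat/indices?v' | grep filebeat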

Exercise: You can try using two filebeats to collect logs, then use filters in kibana to filter and view them. (You can stop logstash on the logstash host first and install filebeat there for testing.)

filebeat output to logstash

Step 1: Reconfigure logstash to listen on port 5044 for filebeat connections, then restart the logstash service

[root@vm3 ~]# vim /etc/logstash/conf.d/test.conf 
input {
    beats {
        port => 5044
    }
}
output {
    elasticsearch {
        hosts => ["10.1.1.12:9200"]
        index => "filebeat2-%{+YYYY.MM.dd}"
    }
    stdout {        # also print to the screen, which makes debugging easier in a lab environment
    }
}
[root@vm3 ~]# cd /usr/share/logstash/bin/
If a logstash instance is still running in the background from earlier, kill it first
[root@vm3 bin]# pkill java
[root@vm3 bin]# ./logstash --path.settings /etc/logstash/ -r -f /etc/logstash/conf.d/test.conf
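
Logstash can take a while to boot; once this instance is up, confirm from another terminal that port 5044 is listening, the same way port 5601 was checked earlier:

[root@vm3 ~]# lsof -i:5044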

Step 2: Configure filebeat to collect logs

[root@vm4 ~]# cat /etc/filebeat/filebeat.yml |grep -v '#' |grep -v '^$'
filebeat.inputs:
- type: log
  enabled: true                     changed to true
  paths:
    - /var/log/*.log                path of the logs to collect
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:                    these two lines are the key change: output the logs to logstash
  hosts: ["10.1.1.13:5044"]         the IP is the logstash server; port 5044 matches the logstash configuration
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
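
Before restarting the service, filebeat's own test subcommands (available in filebeat 6.x) can confirm that the configuration parses and that logstash is reachable:

[root@vm4 ~]# filebeat test config
[root@vm4 ~]# filebeat test output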

Step 3: Restart the service to apply the new configuration

[root@vm4 ~]# systemctl stop filebeat
[root@vm4 ~]# systemctl start filebeat

Step 4: Verify on es-head

Step 5: Create an index pattern in kibana (the process is omitted, refer to the notes above), then click Discover to verify

filebeat collects nginx logs

1. Install nginx on the filebeat server and start the service, then access it from a browser and refresh a few times to generate some logs (==Emphasis==: we are only simulating in a lab environment here; understand that in a real deployment filebeat is installed on the nginx server to collect its logs)

[root@vm4 ~]# yum install epel-release -y
[root@vm4 ~]# yum install nginx -y
[root@vm4 ~]# systemctl restart nginx
[root@vm4 ~]# systemctl enable nginx
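
If you prefer not to click around in a browser, a curl loop generates access-log entries just as well (a simple sketch against the local nginx):

[root@vm4 ~]# for i in $(seq 1 10); do curl -s http://127.0.0.1/ >/dev/null; done
[root@vm4 ~]# tail -3 /var/log/nginx/access.log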

2. Modify the filebeat configuration file and restart the service

[root@vm4 ~]# cat /etc/filebeat/filebeat.yml |grep -v '#' |grep -v '^$'
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/nginx/access.log			only this nginx log path was added here (customize as needed)
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["10.1.1.13:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  
[root@vm4 ~]# systemctl stop filebeat
[root@vm4 ~]# systemctl start filebeat

3. Verification (query on kibana or es-head)
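
A shell-level spot check works here too (the index name filebeat2-%{+YYYY.MM.dd} was defined in the logstash config earlier):

[root@vm4 ~]# curl -s 'http://10.1.1.12:9200/_cat/indices?v' | grep filebeat2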

Exercise: Try collecting httpd and mysql logs

Summary of problems that are likely to arise during experiments:

  • The filebeat configuration did not change output.elasticsearch to output.logstash

  • When filebeat collects /var/log/*.log, a given log file is only shipped after new data is written to it. Appending data to /var/log/yum.log triggers transmission of that file, but it does not trigger transmission of the other logs in the configuration (each log file is shipped independently; see the sketch after this list)

  • filebeat itself does not define an index name for the collected logs; in my experiment the index name was defined in logstash (in this example the index is called filebeat2-%{+YYYY.MM.dd})

  • es-head may have been shut down because of resource limits, and when you verify in the browser you may not see the updated results because of caching

  • Distinguish between index names and index pattern names
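
A minimal sketch of the second point above: appending a line to /var/log/yum.log ships only that file, and the document count of the index grows accordingly:

[root@vm4 ~]# echo "test entry $(date)" >> /var/log/yum.log
[root@vm4 ~]# curl -s 'http://10.1.1.12:9200/filebeat2-*/_count'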

Simple filtering of filebeat logs

[root@vm4 ~]# grep -Ev '#|^$' /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/yum.log
    - /var/log/nginx/access.log
  include_lines: ['Installed']		only lines containing the keyword Installed will be collected
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["10.1.1.13:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

[root@vm4 ~]# systemctl restart filebeat

Test method:

Generate log entries with yum install and yum remove and observe the results.

The result: yum install entries (which contain "Installed") are collected; yum remove entries are not.

Other parameters you can test yourself:

  • exclude_lines

  • exclude_files
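
For reference, a minimal sketch of how these two options look in the input section (the patterns are only illustrative, adjust them to your own logs):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
  exclude_lines: ['^DBG']           lines matching this regex are dropped
  exclude_files: ['\.gz$']          files matching this regex are skipped (e.g. rotated, compressed logs)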
