ELK: monitoring nginx logs (with alerting)

Environment preparation

Prepare two CentOS 7 hosts
Configuration: 2 cores, 2 GB of memory
IP addresses:

192.168.153.179:

Services to install:

  • jdk
  • kibana
  • elasticsearch

Hostname (chosen so it is easy to recognize), i.e. elasticsearch + kibana:

  • ek

192.168.153.178:
Services to install:

  • jdk
  • logstash

Hostname (logstash):

  • log

Setup steps

1. Upload the installation packages to /usr/local/src (that is where I put them; you can choose your own upload path)
ek host operation:

[root@ek ELK]# ls
elasticsearch-6.6.2.rpm  jdk-8u131-linux-x64_.rpm  kibana-6.6.2-x86_64.rpm
[root@ek ELK]# pwd
/usr/local/src/ELK

Log host operation:

[root@log ELK]# ls
jdk-8u131-linux-x64_.rpm  logstash-6.6.0.rpm
[root@log ELK]# pwd
/usr/local/src/ELK

2. Turn off the firewall and SELinux

Do the same on both hosts:

[root@ek ELK]# systemctl stop firewalld
[root@ek ELK]# setenforce 0
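
These commands only take effect until the next reboot. If you also want the changes to persist across reboots (not covered in the original steps), a minimal sketch would be:

systemctl disable firewalld                                             # keep firewalld off after reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # make SELinux permissive permanently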

3. Time synchronization
Do the same on both hosts:

[root@ek ELK]# ntpdate pool.ntp.org

If the command is not available, install the ntpdate package first. Here it is already installed:

[root@ek ELK]# rpm -qa |grep ntpdate
ntpdate-4.2.6p5-28.el7.centos.x86_64
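
If ntpdate were not present, it could be installed from the base repository first, for example:

yum -y install ntpdate      # install the ntpdate client
ntpdate pool.ntp.org        # then sync the clock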

4. Install jdk

[root@ek ELK]# rpm -ivh jdk-8u131-linux-x64_.rpm 

verification:

[root@ek ELK]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

5. Install elasticsearch
On the ek host:

[root@ek ELK]# rpm -ivh elasticsearch-6.6.2.rpm 

The configuration is as follows:

[root@ek elasticsearch]# pwd
/etc/elasticsearch
[root@ek elasticsearch]# grep -v "#" elasticsearch.yml 
cluster.name: node
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.153.179
http.port: 9200

Run the elasticsearch service and set it to start automatically:

[root@ek elasticsearch]# systemctl start elasticsearch
[root@ek elasticsearch]# systemctl enable elasticsearch
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Check the listening ports to verify that the service is running normally; output like the following means success:

[root@ek elasticsearch]# ss -nltp|grep java
LISTEN     0      128     ::ffff:192.168.153.179:9200                    :::*                   users:(("java",pid=15248,fd=204))
LISTEN     0      128     ::ffff:192.168.153.179:9300                    :::*                   users:(("java",pid=15248,fd=191))
[root@ek elasticsearch]# tailf /var/log/elasticsearch/node.log 
[2020-09-18T09:27:07,577][INFO ][o.e.g.GatewayService     ] [node-1] recovered [0] indices into cluster_state
[2020-09-18T09:27:08,297][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.watches] for index patterns [.watches*]
[2020-09-18T09:27:08,692][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2020-09-18T09:27:08,742][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2020-09-18T09:27:08,816][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2020-09-18T09:27:08,891][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2020-09-18T09:27:08,950][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2020-09-18T09:27:08,999][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2020-09-18T09:27:09,052][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2020-09-18T09:27:09,227][INFO ][o.e.l.LicenseService     ] [node-1] license [bfd054c1-3152-42d9-bb0f-ce904f9e462f] mode [basic] - valid
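
Besides checking the ports and the log, a quick way to confirm that Elasticsearch answers requests is its health API (a sketch using the IP from this setup):

curl 'http://192.168.153.179:9200/_cluster/health?pretty'
# a "status" of green or yellow means the node is serving requests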

6. Install logstash
log host operation:

[root@log ELK]# rpm -ivh logstash-6.6.0.rpm

7. Install nginx and start it
Log host operation:

Install nginx from the yum repository:

[root@log ELK]# yum -y install epel-release
[root@log ELK]# yum -y install nginx
[root@log ELK]# nginx

Install the ab benchmarking tool; we will need it later:

[root@log ELK]# yum -y install httpd-tools
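
Before wiring up Logstash, it is worth confirming that nginx serves the default page; a quick check from the log host might look like this:

curl -I http://192.168.153.178/index.html   # an HTTP/1.1 200 OK response means nginx is up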

8. Edit the nginx.conf pipeline file and the grok pattern
Log host operation:

[root@log ELK]# cat /etc/logstash/conf.d/nginx.conf 
input {
    file {
        path => "/var/log/nginx/access.log"
        type => "nginx-log"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{NGX}" }
    }
}
output {
    elasticsearch {
        hosts => "192.168.153.179:9200"
        index => "nginx_log-%{+YYYY.MM.dd}"
    }
}

Upload the grok pattern file and the file containing its destination path to /usr/local/src

[root@log src]# pwd
/usr/local/src
[root@log src]# ls
ELK nginx_reguler_log_path.txt nginx_reguler_log.txt

Move the nginx_reguler_log.txt file to the patterns directory (the path recorded in nginx_reguler_log_path.txt) and rename it to nginx

[root@log src]# cat nginx_reguler_log_path.txt 
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
[root@log src]# mv nginx_reguler_log.txt /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
[root@log src]# cat /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx 
NGX %{IPORHOST:client_ip} (%{USER:ident}|- ) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
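
For reference, a made-up access-log line in nginx's default combined format that this pattern matches:

192.168.153.1 - - [18/Sep/2020:10:05:32 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"

The pattern splits it into the fields client_ip, ident, auth, timestamp, verb, request, http_version, status, bytes, referrer and agent; these are the fields that end up in Elasticsearch.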

9. Grant permissions on /var/log
Log host operation:

[root@log conf.d]# chmod -R 777 /var/log
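
Before starting Logstash it can be worth validating the pipeline file; the RPM install ships the binary under /usr/share/logstash, so a sketch of a syntax check would be:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
# parses the config and exits, reporting whether it is valid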

10. Start logstash

[root@log src]# systemctl start logstash

Wait a while, then check whether port 9600 is listening:

[root@log src]# ss -nltp|grep 9600
LISTEN     0      50        ::ffff:127.0.0.1:9600                    :::*                   users:(("java",pid=62130,fd=89))

Run an ab load test to generate access-log entries (nginx runs on the log host, 192.168.153.178):

[root@log conf.d]# ab -n10 -c10 http://192.168.153.178/index.html
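
To confirm the requests actually reached the access log that Logstash is tailing, peek at its last few lines:

tail -n 5 /var/log/nginx/access.log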

11. Install kibana
ek host operation:

[root@ek ELK三剑客]# yum -y install kibana-6.6.2-x86_64.rpm

Modify kibana main configuration file:

[root@ek ELK三剑客]# grep -Ev '#|^$' /etc/kibana/kibana.yml 
server.port: 5601
server.host: "192.168.153.179"
elasticsearch.hosts: ["http://192.168.153.179:9200"]
  • server.port: the Kibana server port
  • server.host: the Kibana server host IP
  • elasticsearch.hosts: the Elasticsearch host IP
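
The original steps do not show starting Kibana explicitly; presumably it is started and enabled like the other services, for example:

systemctl start kibana
systemctl enable kibana     # Kibana should then listen on 192.168.153.179:5601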

Check that the nginx index has been created:

[root@ek ELK三剑客]# curl -X GET http://192.168.153.179:9200/_cat/indices?v
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1            O38zv0b8RzORzBYO1gFW8Q   1   0          1            0      5.1kb          5.1kb
yellow open   nginx_log-2020.09.18 H-skwNRQRTi5RYQO7aOtAA   5   1         21            0     68.5kb         68.5kb

12. Open the browser and view the nginx index in Kibana
[Screenshots omitted: open Kibana at http://192.168.153.179:5601, create an index pattern matching nginx_log-*, and view the log data.]

Warning: if the screen in that step shows no data, re-run the ab load test and refresh.

[Further screenshots omitted.]

Start deploying the alert environment (ElastAlert)

All operations are on the log host.

1. Install the Python 3 environment

[root@log alter]# yum -y install gcc gcc-c++ openssl-devel

Go to the alter directory, extract the Python package, then change into the extracted directory and compile and install:

[root@log alter]# ls
Python-3.6.2.tgz  v0.2.1_elasticalert.tar.gz
[root@log alter]# pwd
/usr/local/src/alter
[root@log alter]# tar xf Python-3.6.2.tgz 
[root@log alter]# cd Python-3.6.2
[root@log Python-3.6.2]# ./configure --prefix=/usr/local/python3 --with-openssl && make && make install

2. Create the soft links

[root@log Python-3.6.2]# rm -rf /usr/bin/python
[root@log Python-3.6.2]# ln -s /usr/local/python3/bin/python3.6 /usr/bin/python
[root@log Python-3.6.2]# ln -s /usr/local/python3/bin/pip3.6 /usr/bin/pip
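
A quick sanity check (not in the original steps) that the links now point at Python 3:

python -V    # should report Python 3.6.2
pip -V       # should report pip running under Python 3.6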

3. Fix the yum command (yum needs Python 2, and /usr/bin/python now points to Python 3):

[root@log ~]# sed -i 's/python/python2/' /usr/bin/yum 
[root@log ~]# sed -i 's/python/python2/' /usr/libexec/urlgrabber-ext-down 

4. Install the ElastAlert plugin

Extract the archive, rename the directory, and install the dependencies:

[root@log alter]# ls
Python-3.6.2  Python-3.6.2.tgz  v0.2.1_elasticalert.tar.gz
[root@log alter]# pwd
/usr/local/src/alter
[root@log alter]# tar xf v0.2.1_elasticalert.tar.gz 
[root@log alter]# mv elastalert-0.2.1/ /usr/local/elastalert
[root@log alter]# cd /usr/local/elastalert/
[root@log elastalert]# pip install -r requirements.txt

Upgrade pip:

[root@log elastalert]# pip install --upgrade pip

Run the installer (this generates the four elastalert commands):

[root@log elastalert]# python setup.py install

Create the soft links:

[root@log ~]# ln -s /usr/local/python3/bin/elastalert* /usr/bin/

The commands can now be called directly:

lrwxrwxrwx. 1 root root        33 9月  19 12:10 elastalert -> /usr/local/python3/bin/elastalert
lrwxrwxrwx. 1 root root        46 9月  19 12:10 elastalert-create-index -> /usr/local/python3/bin/elastalert-create-index
lrwxrwxrwx. 1 root root        50 9月  19 12:10 elastalert-rule-from-kibana -> /usr/local/python3/bin/elastalert-rule-from-kibana
lrwxrwxrwx. 1 root root        43 9月  19 12:10 elastalert-test-rule -> /usr/local/python3/bin/elastalert-test-rule

5. Create the elastalert index

[root@log ~]# elastalert-create-index 
Enter Elasticsearch host: 192.168.153.179
Enter Elasticsearch port: 9200
Use SSL? t/f: f
Enter optional basic-auth username (or leave blank): 
Enter optional basic-auth password (or leave blank): 
Enter optional Elasticsearch URL prefix (prepends a string to the URL of every request): 
New index name? (Default elastalert_status) 
New alias name? (Default elastalert_alerts) 
Name of existing index to copy? (Default None) 
Traceback (most recent call last):
  • Enter Elasticsearch host: 192.168.153.179  # the Elasticsearch host IP
  • Enter Elasticsearch port: 9200  # the Elasticsearch listening port
  • Use SSL? t/f: f  # f means SSL is not enabled
  • Then just press Enter to accept the defaults for the remaining prompts
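
Assuming the command completed, the writeback index can be confirmed from Elasticsearch with the same _cat/indices call used earlier, for example:

curl -s 'http://192.168.153.179:9200/_cat/indices?v' | grep elastalert
# elastalert_status (and its companion indices) should be listed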

6. Set up ElastAlert's main configuration file, config.yaml

Rename the example file:

[root@log elastalert]# pwd
/usr/local/elastalert
[root@log elastalert]# mv config.yaml.example config.yaml

Configuration details

[root@log elastalert]# grep -Ev '#|^$' config.yaml 
rules_folder: example_rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: 192.168.153.179
es_port: 9200
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
alert_time_limit:
  days: 2

Annotated configuration (not a code block, shown to explain each setting):

rules_folder: example_rules  # directory that holds the alert rules
run_every:
  minutes: 1   # how often ElastAlert runs its queries (once per minute)
buffer_time:
  minutes: 15  # the time window each query covers (for example, log entries between 15:30 and 15:45)
es_host: 192.168.153.179   # the Elasticsearch host
es_port: 9200   # the Elasticsearch port
writeback_index: elastalert_status  # name of the index ElastAlert writes its own state to
alert_time_limit:
  days: 2   # time limit for retrying failed alerts

7. Set the alert rule

Copy the example rule to an nginx rule file:

[root@log example_rules]# pwd
/usr/local/elastalert/example_rules
[root@log example_rules]# cp example_frequency.yaml nginx_frequency.yaml

Configuration details

[root@log example_rules]# grep -Ev '#|^$' nginx_frequency.yaml 
es_host: 192.168.153.179
es_port: 9200
name: nginx frequency rule
type: frequency
index: nginx_log*
num_events: 5
timeframe:
  hours: 1
filter:
- term:
    status: "404"
alert:
- "email"
email:
- "[email protected]"
smtp_host: smtp.qq.com
smtp_port: 25
smtp_auth_file: /usr/local/elastalert/email_auth.yaml
from_addr: [email protected]

Annotated configuration (not a code block):

es_host: 192.168.153.179  # the Elasticsearch host
es_port: 9200  # the Elasticsearch listening port
name: nginx frequency rule  # name of this alert rule
type: frequency  # rule type (frequency)
index: nginx_log*  # the index pattern to watch
num_events: 5  # how many matching events within the timeframe trigger an alert
timeframe:
  hours: 1   # the time window
filter:
  - regexp:
      message: ".*"   # meaning: if the message field has any content and this matches 5 times within 1 hour, an alert fires
alert:
- "email"   # alert by email

email:
- "[email protected]"
- "[email protected]"
- "[email protected]"  # mailbox addresses that receive the alert
smtp_host: smtp.qq.com  # the SMTP server address
smtp_port: 25   # the SMTP listening port
smtp_auth_file: /usr/local/elastalert/email_auth.yaml  # the SMTP credentials file
from_addr: [email protected]   # the address the alert mail is sent from

You also need to create a credentials file containing your email address and SMTP authorization code:

[root@log elastalert]# pwd
/usr/local/elastalert
[root@log elastalert]# cat email_auth.yaml 
user: "[email protected]"
password: "pcojgcyggptsdjjh"

8. Verify that mailx exists and can send mail normally. We use the mail command built into Linux (it is very simple; interested readers can see my earlier brief introduction to the mailx command). First check whether it is installed:

[root@log ~]# rpm -qa |grep mailx

mailx was not installed here, so install it:

[root@log ~]# yum -y install mailx

Send a test mail to confirm the mail service is configured correctly:

[root@log ~]# echo "yes/no" |mail -s "test" [email protected]

9. If the status code in the nginx log is 404, an alert is triggered.
(Excerpt from the rule, not a code block:)

filter:
- term:
    status: "404"

10. Run the alert service (open two sessions to test whether the load test triggers an alarm)
Session 1:

[root@log elastalert]# elastalert --config /usr/local/elastalert/config.yaml --rule /usr/local/elastalert/example_rules/nginx_frequency.yaml --verbose
1 rules loaded
INFO:elastalert:Starting up
INFO:elastalert:Disabled rules are: []
INFO:elastalert:Sleeping for 59.999755 seconds
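
As an aside, if ElastAlert should keep running after the terminal is closed (not part of this test), the same command could be backgrounded, for example:

nohup elastalert --config /usr/local/elastalert/config.yaml --rule /usr/local/elastalert/example_rules/nginx_frequency.yaml --verbose > /var/log/elastalert.log 2>&1 &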

Session 2: run the load test against a non-existent page so that it produces 404 errors:

[root@log ~]# ab -n100 -c100 http://192.168.153.178/indasdex.htmla

If an alert email like the one below arrives, the setup is considered successful.

[Screenshot omitted: the alert email received in the mailbox.]

At this point, our ELK monitoring of nginx logs with alerting is complete! ELFK-related posts will follow, so stay tuned...

Origin blog.csdn.net/qq_49296785/article/details/108657758