EFK + Kafka (as a log buffer) to collect NGINX logs: status codes, PV/UV, visit trend, top ten visits

EFK flow chart

(diagram: EFK + Kafka data flow — filebeat → kafka → logstash → elasticsearch → kibana)

1. Prepare the environment

1. Prepare three CentOS 7 virtual machines


2. Turn off the firewall and SELinux

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0

3. Synchronize time

[root@localhost ~]# yum -y install ntpdate
[root@localhost ~]# ntpdate pool.ntp.org

4. Rename the three hosts to kafka1, kafka2, and kafka3

[root@localhost ~]# hostname kafka1    # on host 136
[root@localhost ~]# hostname kafka2    # on host 137
[root@localhost ~]# hostname kafka3    # on host 138
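Note that the hostname command above only changes the name for the current session; to make it persist across reboots, hostnamectl can be used instead, for example:
[root@localhost ~]# hostnamectl set-hostname kafka1    # run the matching command on each host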

5. Add host name resolution for the three hosts

[root@localhost ~]# vim /etc/hosts
192.168.27.136 kafka1
192.168.27.137 kafka2
192.168.27.138 kafka3

6. Upload the installation packages (distribute them to the three hosts yourself)

(screenshots: the uploaded packages on hosts 136, 137, and 138)

2. Installation and deployment

1. Install the JDK on all three hosts

[root@kafka1 src]# rpm -ivh jdk-8u131-linux-x64_.rpm 
Preparing...                          ################################# [100%]
Upgrading/installing...
   1:jdk1.8.0_131-2000:1.8.0_131-fcs  ################################# [100%]
Unpacking JAR files...
	tools.jar...
	plugin.jar...
	javaws.jar...
	deploy.jar...
	rt.jar...
	jsse.jar...
	charsets.jar...
	localedata.jar...
[root@kafka1 src]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@kafka1 src]# 

2. Install ZooKeeper on all three hosts

Unzip and install
[root@kafka3 src]# tar xzf zookeeper-3.4.14.tar.gz 
[root@kafka3 src]# mv zookeeper-3.4.14 /usr/local/zookeeper
[root@kafka3 zookeeper]# cd /usr/local/zookeeper/conf/
[root@kafka3 conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@kafka3 conf]# mv zoo_sample.cfg zoo.cfg 
Configure ZooKeeper (make the same changes on all three hosts)
[root@kafka3 conf]# vim zoo.cfg 

# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.27.136:2888:3888
server.2=192.168.27.137:2888:3888
server.3=192.168.27.138:2888:3888
Create the data directory and the myid file, then start the nodes in sequence
[root@kafka1 conf]# mkdir /tmp/zookeeper
[root@kafka1 conf]# echo "1" > /tmp/zookeeper/myid
[root@kafka1 conf]# /usr/local/zookeeper/bin/zkServer.sh start
[root@kafka2 conf]# mkdir /tmp/zookeeper
[root@kafka2 conf]# echo "2" > /tmp/zookeeper/myid
[root@kafka2 conf]# /usr/local/zookeeper/bin/zkServer.sh start
[root@kafka3 conf]# mkdir /tmp/zookeeper
[root@kafka3 conf]# echo "3" > /tmp/zookeeper/myid
[root@kafka3 conf]# /usr/local/zookeeper/bin/zkServer.sh start
Check the ZooKeeper status
[root@kafka1 src]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kafka2 src]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kafka3 src]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader

3. Install Kafka on all three hosts

Unzip and install
[root@kafka1 src]# tar xzf kafka_2.11-2.2.0.tgz 
[root@kafka1 src]# mv kafka_2.11-2.2.0 /usr/local/kafka
Edit configuration file
[root@kafka1 src]# vim /usr/local/kafka/config/server.properties 

(screenshots: the edited /usr/local/kafka/config/server.properties)
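The server.properties screenshots are not reproduced here. As a rough sketch of what typically changes per broker (the exact values in the original screenshots may differ): broker.id and the listener IP must be unique on each host, and zookeeper.connect lists all three ZooKeeper nodes.
# on kafka1; use broker.id=2 / 192.168.27.137 on kafka2 and broker.id=3 / 192.168.27.138 on kafka3
broker.id=1
listeners=PLAINTEXT://192.168.27.136:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.27.136:2181,192.168.27.137:2181,192.168.27.138:2181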

Start Kafka and check the listening port
[root@kafka1 src]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
[root@kafka1 src]# netstat -nltpu |grep 9092
tcp6       0      0 :::9092                 :::*                    LISTEN      26868/java          

Verify kafka

Create topic
[root@kafka1 src]# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.27.136:2181  --replication-factor 2 --partitions 3 --topic wg007
Created topic wg007.
Verify topic
[root@kafka1 src]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.27.136:2181
wg007
Simulated producer
[root@kafka1 config]# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.27.136:9092 --topic wg007
>宫保鸡丁
Simulated consumer
[root@kafka2 config]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.27.136:9092  --topic wg007 --from-beginning
宫保鸡丁

4. Install nginx and filebeat

Install both on one of the hosts (mine is 137)
Note: both must be installed on the same host, because filebeat is used to collect the nginx logs and filebeat cannot collect across hosts.
Install the EPEL repository first
[root@kafka2 ~]# yum -y install epel-release
Install nginx
[root@kafka2 ~]# yum -y install nginx
Install filebeat
[root@kafka2 ~]# rpm -ivh filebeat-6.8.12-x86_64.rpm
Start nginx
[root@kafka2 ~]# systemctl start nginx
Install httpd-tools and load-test nginx to generate some log entries
[root@kafka2 ~]# yum -y install httpd-tools
[root@kafka2 ~]# ab -n 1000 -c 1000 http://192.168.27.137/
Configure filebeat
[root@kafka2 ~]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log

output.kafka:
  enabled: true
  hosts: ["192.168.27.136:9092","192.168.27.137:9092","192.168.27.138:9092"]
  topic: nginx
  
Restart filebeat
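With the RPM install, filebeat runs as a systemd service, so the restart is simply:
[root@kafka2 ~]# systemctl restart filebeat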

Verify that Kafka has received filebeat data
First check whether the nginx topic exists
[root@kafka2 ~]# /usr/local/kafka_2.11-2.2.0/bin/kafka-topics.sh --list --zookeeper 192.168.27.136:2181
nginx

Then run a console consumer again to check
[root@kafka2 ~]# /usr/local/kafka_2.11-2.2.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.27.136:9092 --topic nginx --from-beginning

5. Install and configure elasticsearch

Install on only one machine (mine is 136)
[root@kafka1 ~]# rpm -ivh elasticsearch-6.6.2.rpm
Configure elasticsearch
[root@kafka1 ~]# vim /etc/elasticsearch/elasticsearch.yml 

(screenshots: the edited /etc/elasticsearch/elasticsearch.yml)
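The elasticsearch.yml screenshots are not reproduced here. Judging from the elasticsearch log shown later (log file wg007.log, node name node-1), the edits are roughly the following sketch; network.host must be reachable from the other hosts, and the exact values in the original screenshots may differ.
cluster.name: wg007
node.name: node-1
network.host: 192.168.27.136
http.port: 9200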

Start elasticsearch
[root@kafka1 ~]# systemctl start elasticsearch

6. Install and configure logstash

Installed on one machine (mine is on 137)
[root@kafka2 ~]# rpm -ivh logstash-6.6.0.rpm
Configure logstash
[root@kafka2 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
        kafka {
                bootstrap_servers => ["192.168.27.136:9092,192.168.27.137:9092,192.168.27.138:9092"]
                group_id => "logstash"
                topics => "nginx"
                consumer_threads => 5
        }
}
filter {
        json {
                source => "message"
        }
        mutate {
                remove_field => ["host","prospector","fields","input","log"]
        }
        grok {
                match => { "message" => "%{NGX}" }
        }
}
output {
        elasticsearch {
                hosts => "192.168.27.136:9200"
                index => "nginx-%{+YYYY.MM.dd}"
        }
}
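Optionally, the pipeline file can be syntax-checked before restarting (standard logstash flags; -t is short for --config.test_and_exit):
[root@kafka2 ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/nginx.conf -t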
Add a regular expression
[root@kafka2 ~]# vim /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
NGX %{IPORHOST:client_ip} (%{USER:ident}|- ) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
Restart logstash to view logs
[root@kafka2 ~]# systemctl restart logstash
[root@kafka2 ~]# tailf /var/log/logstash/logstash-plain.log 
[2020-09-22T21:16:16,084][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-2, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,084][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=logstash] Setting newly assigned partitions []
[2020-09-22T21:16:16,084][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,085][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-4, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,085][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-4, groupId=logstash] Setting newly assigned partitions []
[2020-09-22T21:16:16,091][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,092][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [nginx-0]
[2020-09-22T21:16:16,119][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Setting newly assigned partitions []
[2020-09-22T21:16:16,119][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [msg-0]
[2020-09-22T21:16:16,154][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-09-22T21:16:16,241][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition nginx-0 to offset 1000.
No errors are reported

7. Install and configure kibana

Installed on one machine (I am at 138)
[root@kafka3 ~]# rpm -ivh kibana-6.6.2-x86_64.rpm
Configure kibana
[root@kafka3 ~]# vim /etc/kibana/kibana.yml 

(screenshots: the edited /etc/kibana/kibana.yml)
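The kibana.yml screenshots are not reproduced here. As a sketch, the usual edits are the listening address on 138 and the elasticsearch address on 136 (kibana 6.6 uses the elasticsearch.hosts setting; the exact values in the original screenshots may differ):
server.port: 5601
server.host: "192.168.27.138"
elasticsearch.hosts: ["http://192.168.27.136:9200"]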

Start kibana
[root@kafka3 ~]# systemctl start kibana
Check the elasticsearch log to confirm it is receiving data
[root@kafka1 ~]# tailf /var/log/elasticsearch/wg007.log 
[2020-09-22T20:28:50,252][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2020-09-22T20:28:50,370][INFO ][o.e.l.LicenseService     ] [node-1] license [1c133ff2-d40d-4e30-9bd7-e4f937d362bc] mode [basic] - valid
[2020-09-22T20:29:52,112][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [logstash] for index patterns [logstash-*]
[2020-09-22T20:30:16,281][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [msg-2020.09.22] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2020-09-22T20:30:16,761][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [msg-2020.09.22/Zwzx43dCTVGHTHW5D7YpUg] create_mapping [doc]
[2020-09-22T20:30:28,648][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [doc]
[2020-09-22T20:30:28,651][INFO ][o.e.c.r.a.AllocationService] [node-1] updating number_of_replicas to [0] for indices [.kibana_1]
[2020-09-22T20:30:28,865][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2020-09-22T20:30:28,903][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2020-09-22T20:31:24,158][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]

The kibana index is there now. If there is no nginx index yet, run the load test again:

[root@kafka2 ~]# ab -n 1000 -c 1000 http://192.168.27.137/
Check again
[root@kafka1 ~]# tailf /var/log/elasticsearch/wg007.log 
[2020-09-22T21:25:20,673][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [nginx-2020.09.22/X_m8nXLrQb2y4-b5FLHMkA] create_mapping [doc]
Now the nginx index is there

8. Open the kibana page and create the nginx index pattern

(screenshots: creating the nginx-* index pattern in Kibana)

9. Add nginx visualizations

nginx status code

(screenshots: building the status-code visualization in Kibana)
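The visualization itself is built in the Kibana UI, but for reference the status-code chart boils down to a terms aggregation on the status field extracted by the grok pattern (with dynamic mappings the aggregatable field is status.keyword; this query is a sketch, not the saved Kibana object):
[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search?size=0' -d '{"aggs":{"status_codes":{"terms":{"field":"status.keyword"}}}}'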

nginx pv value

(screenshots: building the PV visualization in Kibana)
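For reference, PV is simply the document count over the selected time range, and UV (mentioned in the title) can be approximated with a cardinality aggregation on the client IP extracted by the grok pattern (a sketch, not the saved Kibana metric):
[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search?size=0' -d '{"aggs":{"uv":{"cardinality":{"field":"client_ip.keyword"}}}}'
hits.total in the response is the PV.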

Nginx visit trend

(screenshots: building the visit-trend visualization in Kibana)
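For reference, the trend line corresponds to a date_histogram aggregation on @timestamp; the 1h interval below is an assumption, pick whatever bucket size suits the dashboard:
[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search?size=0' -d '{"aggs":{"per_hour":{"date_histogram":{"field":"@timestamp","interval":"1h"}}}}'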

Top ten visits to nginx

(screenshots: building the top-ten visualization in Kibana)
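For reference, a top-ten list is a terms aggregation with size 10; depending on what the original screenshots ranked, the field could be the requested URL or the client IP (request.keyword below is an assumption):
[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search?size=0' -d '{"aggs":{"top10":{"terms":{"field":"request.keyword","size":10}}}}'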

Add visualization

(screenshots: adding the visualizations)

Source: blog.csdn.net/Q274948451/article/details/108703755