EFK installation and deployment (monitoring nginx logs)

Environment:

centos7

Host IP          Installed software
192.168.153.179  jdk, zookeeper, kafka, filebeat, elasticsearch
192.168.153.178  jdk, zookeeper, kafka, logstash
192.168.153.177  jdk, zookeeper, kafka, kibana

Start deployment:

1. Modify the host name

Run one command on each of the three hosts, in order

[root@localhost ~]# hostname kafka01
[root@localhost ~]# hostname kafka02
[root@localhost ~]# hostname kafka03
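Note that `hostname` only changes the name for the current session. A minimal sketch of how the same script could pick the right name on every node, using the IP-to-hostname mapping from the environment table above (the `hostname_for_ip` helper is an illustration, not part of the original setup):

```shell
# Map a node's IP to its cluster hostname (IPs from the environment table).
hostname_for_ip() {
  case "$1" in
    192.168.153.179) echo kafka01 ;;
    192.168.153.178) echo kafka02 ;;
    192.168.153.177) echo kafka03 ;;
    *) echo unknown ;;
  esac
}

# On a real node you would then apply it persistently, e.g.:
# hostnamectl set-hostname "$(hostname_for_ip "$MY_IP")"
hostname_for_ip 192.168.153.179
```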
2. Modify the hosts file

Perform the same operation on all three hosts

[root@kafka01 ~]# tail -n 3 /etc/hosts
192.168.153.179 kafka01
192.168.153.178 kafka02
192.168.153.177 kafka03
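The three entries above can be appended with a single here-document. This sketch writes to a temporary file so it can be tried safely; on a real node the target would be /etc/hosts:

```shell
# Append the three cluster entries (temp file stands in for /etc/hosts).
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.153.179 kafka01
192.168.153.178 kafka02
192.168.153.177 kafka03
EOF
tail -n 3 "$HOSTS_FILE"
```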
3. Time synchronization

Perform the same operation on all three hosts

[root@kafka01 ~]# ntpdate pool.ntp.org
19 Sep 14:00:48 ntpdate[11588]: adjust time server 122.117.253.246 offset 0
4. Turn off the firewall

Perform the same operation on all three hosts

[root@kafka01 ~]# systemctl stop firewalld
[root@kafka01 ~]# setenforce 0
5. Install jdk

Perform the same operation on all three hosts, in the same directory

[root@kafka01 ELK三剑客]# pwd
/usr/local/src/ELK三剑客
[root@kafka01 ELK三剑客]# rpm -ivh jdk-8u131-linux-x64_.rpm
6. Install zookeeper

Perform the same operation on all three hosts:
unpack the archive, move it into place, and rename the sample configuration file

[root@kafka01 EFK]# pwd
/usr/local/src/EFK
[root@kafka01 EFK]# tar xf zookeeper-3.4.14.tar.gz 
[root@kafka01 EFK]# mv zookeeper-3.4.14 /usr/local/zookeeper
[root@kafka01 EFK]# cd /usr/local/zookeeper/conf/
[root@kafka01 conf]# mv zoo_sample.cfg zoo.cfg 
7. Edit the zoo.cfg file

Perform the same operation on all three hosts

[root@kafka01 conf]# pwd
/usr/local/zookeeper/conf
[root@kafka01 conf]# tail -n 3 zoo.cfg 
server.1=192.168.153.179:2888:3888
server.2=192.168.153.178:2888:3888
server.3=192.168.153.177:2888:3888
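Since the three server lines are the same on every node, they can be generated from the host list instead of typed by hand. A small sketch (the `ZOO_TAIL` variable is an illustration; in practice the output would be appended to zoo.cfg):

```shell
# Generate the server.N entries for zoo.cfg from the cluster host list.
ZOO_TAIL=$(
  i=1
  for ip in 192.168.153.179 192.168.153.178 192.168.153.177; do
    echo "server.${i}=${ip}:2888:3888"
    i=$((i + 1))
  done
)
echo "$ZOO_TAIL"
# On a real node: echo "$ZOO_TAIL" >> /usr/local/zookeeper/conf/zoo.cfg
```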
8. Create a data directory

Perform the same operation on all three hosts

[root@kafka01 conf]# pwd
/usr/local/zookeeper/conf
[root@kafka01 conf]# mkdir /tmp/zookeeper
9. Configure myid

Run one command on each of the three hosts, in order

[root@kafka01 conf]# echo "1" > /tmp/zookeeper/myid
[root@kafka02 conf]# echo "2" > /tmp/zookeeper/myid
[root@kafka03 conf]# echo "3" > /tmp/zookeeper/myid
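Because the host names end in the same digit as the myid value, the id can be derived from the hostname so that one command works unchanged on all three nodes. A sketch (the `myid_from_hostname` helper is an assumption for illustration):

```shell
# Derive the ZooKeeper myid from the host name: kafka01 -> 1, kafka02 -> 2, ...
myid_from_hostname() {
  # strip the "kafka0" prefix, leaving the trailing digit
  echo "${1#kafka0}"
}

# On a real node:
# mkdir -p /tmp/zookeeper
# myid_from_hostname "$(hostname)" > /tmp/zookeeper/myid
myid_from_hostname kafka02
```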
10. Run zookeeper service

Perform the same operation on all three hosts

[root@kafka01 conf]# /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
11. View zookeeper status

Perform the same operation on all three hosts

[root@kafka01 conf]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

2 followers
1 leader

12. Install Kafka

Perform the same operation on all three hosts

[root@kafka01 EFK]# pwd
/usr/local/src/EFK
[root@kafka01 EFK]# tar xf kafka_2.11-2.2.0.tgz 
[root@kafka01 EFK]# mv kafka_2.11-2.2.0 /usr/local/kafka
13. Edit /usr/local/kafka/config/server.properties

The leading number is the line number in server.properties
kafka01 host

21 broker.id=0
36 advertised.listeners=PLAINTEXT://kafka01:9092
123 zookeeper.connect=192.168.153.179:2181,192.168.153.178:2181,192.168.153.177:2181

kafka02 host

21 broker.id=1
36 advertised.listeners=PLAINTEXT://kafka02:9092
123 zookeeper.connect=192.168.153.179:2181,192.168.153.178:2181,192.168.153.177:2181

kafka03 host

21 broker.id=2
36 advertised.listeners=PLAINTEXT://kafka03:9092
123 zookeeper.connect=192.168.153.179:2181,192.168.153.178:2181,192.168.153.177:2181
  • broker.id=# 0, 1, and 2 respectively
  • advertised.listeners=PLAINTEXT://(host name):9092 # kafka01, kafka02, kafka03 respectively
  • zookeeper.connect=192.168.153.179:2181,192.168.153.178:2181,192.168.153.177:2181 # identical on all three hosts
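The per-host edits can be applied with sed instead of editing line numbers by hand. A minimal sketch, assuming `NODE` and `BROKER_ID` are set per host from the table above; a temp file with the two relevant default lines stands in for /usr/local/kafka/config/server.properties:

```shell
# Per-host values (assumptions for this example; kafka01 shown).
NODE=kafka01
BROKER_ID=0

# Stand-in for /usr/local/kafka/config/server.properties with its defaults.
PROPS=$(mktemp)
printf 'broker.id=0\n#advertised.listeners=PLAINTEXT://your.host.name:9092\n' > "$PROPS"

# Apply the two per-host edits (zookeeper.connect is identical everywhere).
sed -i \
  -e "s/^broker.id=.*/broker.id=${BROKER_ID}/" \
  -e "s|^#*advertised.listeners=.*|advertised.listeners=PLAINTEXT://${NODE}:9092|" \
  "$PROPS"
cat "$PROPS"
```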
14. Start Kafka

Perform the same operation on all three hosts

[root@kafka01 ~]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties 
[root@kafka01 ~]# ss -nltp|grep 9092
LISTEN     0      50          :::9092                    :::*                   users:(("java",pid=23352,fd=105))
15. Create a topic

Kafka01 host operation

[root@kafka01 ~]# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.153.179:2181 --replication-factor 2 --partitions 3 --topic wg007
Created topic wg007.

Explanation:

  • --replication-factor 2 specifies the number of replicas (for high availability)
  • --partitions 3 specifies the number of partitions for the topic (to improve concurrency)
  • --topic wg007 specifies the topic name
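One constraint worth remembering: the replication factor cannot exceed the number of live brokers, or topic creation fails. A quick sanity-check sketch using the values from this example:

```shell
# Sanity check: replication factor must fit within the broker count.
BROKERS=3        # three kafka nodes in this deployment
REPLICATION=2    # value passed to --replication-factor
if [ "$REPLICATION" -le "$BROKERS" ]; then
  echo "ok: replication-factor $REPLICATION fits $BROKERS brokers"
else
  echo "error: replication-factor exceeds broker count" >&2
fi
```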
16. Simulate a producer

Kafka01 host operation

[root@kafka01 ~]# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.153.179:9092 --topic wg007
>
17. Simulate a consumer

Kafka02 host operation

[root@kafka02 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.153.179:9092 --topic wg007 --from-beginning
18. Start the simulation

Type a in the producer on kafka01, then check
whether it appears in the consumer on kafka02

[root@kafka01 ~]# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.153.179:9092 --topic wg007
>a
>

View on kafka02

[root@kafka02 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.153.179:9092 --topic wg007 --from-beginning
a
19. View the current topic

Kafka01 host operation

[root@kafka01 ~]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.153.179:2181
__consumer_offsets
wg007
20. Install filebeat (collect logs)

Kafka01 host installation

[root@kafka01 EFK]# pwd
/usr/local/src/EFK
[root@kafka01 EFK]# rpm -ivh filebeat-6.8.12-x86_64.rpm 
21. Edit filebeat.yml

Kafka01 host operation
Rename the existing filebeat.yml to filebeat.yml.bak,
then write a new filebeat.yml from scratch

[root@kafka01 filebeat]# pwd
/etc/filebeat
[root@kafka01 filebeat]# mv filebeat.yml filebeat.yml.bak
[root@kafka01 filebeat]# vim filebeat.yml

Configure as follows

[root@localhost filebeat]# pwd
/etc/filebeat
[root@localhost filebeat]# cat filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    log_topics: nginx007

output.kafka:
  enabled: true
  hosts: ["192.168.153.179:9092","192.168.153.178:9092","192.168.153.177:9092"]
  topic: nginx007
22. Start filebeat

kafka01 operation

[root@kafka01 ~]# systemctl start filebeat
23. Install logstash

Kafka02 host operation

[root@kafka02 ELK三剑客]# pwd
/usr/local/src/ELK三剑客
[root@kafka02 ELK三剑客]# rpm -ivh logstash-6.6.0.rpm 
24. Edit /etc/logstash/conf.d/nginx.conf

Kafka02 operation

[root@kafka02 conf.d]# pwd
/etc/logstash/conf.d
[root@kafka02 conf.d]# cat nginx.conf 
input {
    kafka {
        bootstrap_servers => ["192.168.153.179:9092,192.168.153.178:9092,192.168.153.177:9092"]
        group_id => "logstash"
        topics => "nginx007"
        consumer_threads => 5
    }
}

filter {
    json {
        source => "message"
    }
    mutate {
        remove_field => ["host","prospector","fields","input","log"]
    }
    grok {
        match => { "message" => "%{NGX}" }
    }
}

output {
    elasticsearch {
        hosts => "192.168.153.179:9200"
        index => "nginx-%{+YYYY.MM.dd}"
    }
    #stdout {
    #    codec => rubydebug
    #}
}
25. Upload the nginx grok pattern file to the logstash patterns directory and complete the configuration

Kafka02 host operation

[root@kafka02 src]# pwd
/usr/local/src
[root@kafka02 src]# ls
alter  EFK  ELK三剑客  nginx_reguler_log_path.txt  nginx_reguler_log.txt
[root@kafka02 src]# cat nginx_reguler_log_path.txt 
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns
[root@kafka02 src]# mv nginx_reguler_log.txt /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
[root@kafka02 src]# cat /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
NGX %{IPORHOST:client_ip} (%{USER:ident}|- ) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
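Before restarting logstash it can help to verify that real access-log lines have the shape the NGX pattern expects. This sketch uses a rough `grep -E` regex (a simplification for illustration, not the grok pattern itself) against a made-up sample line in nginx combined format:

```shell
# Sample nginx combined-format log line (fabricated for illustration).
SAMPLE='192.168.153.1 - - [20/Sep/2020:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'

# Rough shape check: ip, ident, auth, [timestamp], "request", status,
# bytes, "referrer", "agent" -- mirroring the fields the NGX pattern names.
echo "$SAMPLE" | grep -Eq \
  '^[0-9.]+ \S+ \S+ \[[^]]+\] "[^"]*" [0-9]{3} [0-9-]+ "[^"]*" "[^"]*"$' \
  && echo "matches"
```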
26. Start logstash

Kafka02 operation

[root@kafka02 conf.d]# systemctl start logstash
[root@kafka02 conf.d]# ss -nltp|grep 9600
LISTEN     0      50        ::ffff:127.0.0.1:9600                    :::*                   users:(("java",pid=18470,fd=137))
27. Install elasticsearch

kafka01 operation

[root@kafka01 ELK三剑客]# pwd
/usr/local/src/ELK三剑客
[root@kafka01 ELK三剑客]# rpm -ivh elasticsearch-6.6.2.rpm
28. Modify the elasticsearch configuration file

kafka01 operation

[root@kafka01 ~]# grep -v "#" /etc/elasticsearch/elasticsearch.yml
cluster.name: nginx
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.153.179
http.port: 9200
29. Start elasticsearch

kafka01 operation

[root@kafka01 ~]# systemctl start elasticsearch
[root@kafka01 ~]# systemctl enable elasticsearch
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@kafka01 ~]# ss -nltp|grep 9200
LISTEN     0      128     ::ffff:192.168.153.179:9200                    :::*                   users:(("java",pid=27812,fd=205))
[root@kafka01 ~]# ss -nltp|grep 9300
LISTEN     0      128     ::ffff:192.168.153.179:9300                    :::*                   users:(("java",pid=27812,fd=191))
30. Install kibana

kafka03 operation

[root@kafka03 ELK三剑客]# pwd
/usr/local/src/ELK三剑客
[root@kafka03 ELK三剑客]# yum -y install kibana-6.6.2-x86_64.rpm
31. Configure /etc/kibana/kibana.yml

kafka03 operation

[root@kafka03 ~]# grep -Ev '#|^$' /etc/kibana/kibana.yml 
server.port: 5601
server.host: "192.168.153.177"
elasticsearch.hosts: ["http://192.168.153.179:9200"]
  • server.port: 5601 # kibana service port
  • server.host: "192.168.153.177" # kibana service host IP
  • elasticsearch.hosts: ["http://192.168.153.179:9200"] # elasticsearch service host IP
32. Start kibana

kafka03 operation

[root@kafka03 ~]# systemctl start kibana
[root@kafka03 ~]# ss -nltp|grep 5601
LISTEN     0      128    192.168.153.177:5601                     *:*                   users:(("node",pid=16965,fd=18))
33. Install the load-testing tool and the nginx service

kafka01 operation

[root@kafka01 ~]# yum -y install httpd-tools epel-release && yum -y install nginx
34. Start nginx and run a load test

kafka01 operation

[root@kafka01 ~]# nginx
[root@kafka01 ~]# ab -n100 -c100 http://192.168.153.179/index.html
35. View the index

kafka01 operation

[root@kafka01 ~]# curl -X GET http://192.168.153.179:9200/_cat/indices?v
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   nginx-2020.09.20 cBEQUbJxTZCbiLWfJbOc-w   5   1        105            0      169kb          169kb
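The `_cat/indices` output can be parsed in a script, for example to watch the document count grow after each load test. A sketch using the sample line above as canned input (on a live system the line would come from the curl call instead):

```shell
# Canned _cat/indices line from the example output above.
LINE='yellow open   nginx-2020.09.20 cBEQUbJxTZCbiLWfJbOc-w   5   1        105            0      169kb          169kb'

# Field 1 is health, field 7 is docs.count.
echo "$LINE" | awk '{print "health=" $1, "docs=" $7}'
```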
36. Open kibana (http://ip:5601) and continue in the graphical interface

View index
(screenshots omitted)
EFK installation and deployment monitoring nginx logs are now complete!

Origin blog.csdn.net/qq_49296785/article/details/108680226