EFK deployment and installation for log collection

Environment:
CentOS 7
192.168.59.130: jdk, zookeeper, kafka, filebeat, elasticsearch
192.168.59.131: jdk, zookeeper, kafka, logstash
192.168.59.132: jdk, zookeeper, kafka, kibana

1. Basic environment configuration

1: Synchronize time on all 3 machines

ntpdate pool.ntp.org
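
ntpdate performs a one-off synchronization. To keep the clocks aligned afterwards, a cron entry along these lines could be added (my addition, not part of the original steps):

crontab -e
#re-sync with pool.ntp.org every 30 minutes
*/30 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1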

2: Turn off the firewall and SELinux enforcement on all 3 machines

systemctl stop firewalld
setenforce  0
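
These two commands only take effect for the current boot (setenforce 0 switches SELinux to permissive mode). To make the changes permanent, something like the following could be run as well (my addition, not in the original post):

systemctl disable firewalld
#takes effect after the next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config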

3: Set the hostname on each machine

hostnamectl set-hostname kafka1  #192.168.59.130
hostnamectl set-hostname kafka2  #192.168.59.131
hostnamectl set-hostname kafka3  #192.168.59.132

4: Modify the hosts file

vim /etc/hosts
192.168.59.130 kafka1
192.168.59.131 kafka2
192.168.59.132 kafka3
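
A quick way to confirm that name resolution works (optional check, my addition):

ping -c 1 kafka1
ping -c 1 kafka2
ping -c 1 kafka3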

5: Install jdk

yum -y install jdk-8u131-linux-x64_.rpm
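
The installation can be verified afterwards; with the RPM above it should report version 1.8.0_131:

java -version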

6: Install zookeeper on all 3 machines

tar xzf zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14 /usr/local/zookeeper
cd /usr/local/zookeeper/conf/
mv zoo_sample.cfg zoo.cfg
Edit zoo.cfg
vim zoo.cfg
server.1=192.168.59.130:2888:3888
server.2=192.168.59.131:2888:3888
server.3=192.168.59.132:2888:3888

Create the data directory
mkdir /tmp/zookeeper
Configure myid (a different value on each node)
echo "1" > /tmp/zookeeper/myid  #192.168.59.130
echo "2" > /tmp/zookeeper/myid  #192.168.59.131
echo "3" > /tmp/zookeeper/myid  #192.168.59.132

7: Start the zookeeper service

/usr/local/zookeeper/bin/zkServer.sh start

7.1 View the status of zk

/usr/local/zookeeper/bin/zkServer.sh status

8 Install Kafka on all 3 machines

tar xzf kafka_2.11-2.2.0.tgz
mv kafka_2.11-2.2.0 /usr/local/kafka
vim /usr/local/kafka/config/server.properties
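
The screenshots of the server.properties edits are not reproduced here. The settings that typically have to differ per broker are broker.id and listeners, while zookeeper.connect points at all three nodes. A sketch for kafka1 follows; the exact values used in the screenshots may differ:

broker.id=1
listeners=PLAINTEXT://192.168.59.130:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.59.130:2181,192.168.59.131:2181,192.168.59.132:2181

On kafka2 and kafka3, change broker.id to 2 and 3 and the listeners address to 192.168.59.131 and 192.168.59.132 respectively.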

9 Start Kafka

/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
netstat  -lptnu|grep 9092
tcp6       0      0 :::9092                 :::*                    LISTEN      15555/java

10 Create a topic

/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.59.130:2181 --replication-factor 2 --partitions 3 --topic wg007
Created topic wg007.
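
To check the partition and replica layout of the new topic (optional, same tool as above):

/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.59.130:2181 --topic wg007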

10.1 Simulate a producer

cd /usr/local/kafka/bin/
./kafka-console-producer.sh --broker-list 192.168.59.130:9092 --topic wg007
>

10.2 Simulate a consumer

/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.59.130:9092 --topic wg007 --from-beginning


10.3 View the current topics

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.59.130:2181
__consumer_offsets
wg007

11 Install filebeat on 192.168.59.130 (to collect logs)

rpm -ivh filebeat-6.8.12-x86_64.rpm
cd /etc/filebeat/
Rename the original configuration file (this effectively serves as a backup)
mv filebeat.yml filebeat1.yml
vim filebeat.yml
The contents are as follows:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages

output.kafka:
  enabled: true
  hosts: ["192.168.59.130:9092","192.168.59.131:9092","192.168.59.132:9092"]
  topic: msg
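
Before starting the service, the configuration and the connection to Kafka can be sanity-checked with Filebeat's built-in test subcommands (my addition, not in the original post):

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml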

Start the filebeat service
systemctl start filebeat
tailf /var/log/filebeat/filebeat

11.1 Verify on any one of the machines

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.59.130:2181

Run a console consumer to verify that the data is actually coming through

/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.59.130:9092 --topic msg --from-beginning

A large amount of data scrolls by, which confirms the pipeline is working.
The next step is to have Logstash consume the data from Kafka.
Install logstash on 192.168.59.131:

yum -y install logstash-6.6.0.rpm
vim /etc/logstash/conf.d/msg.conf
input{
        kafka{
                bootstrap_servers => ["192.168.59.130:9092,192.168.59.131:9092,192.168.59.132:9092"]
                group_id => "logstash"
                topics => "msg"
                consumer_threads => 5
        }
}
output{
        elasticsearch{
                hosts => "192.168.59.130:9200"
                index => "msg-%{+YYYY.MM.dd}"
        }
}
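
The pipeline file can be syntax-checked before starting the service (optional step, assuming the default RPM install path):

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/msg.conf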

Start the service
systemctl start logstash
tailf /var/log/logstash/logstash-plain.log
ss -nltp |grep 9600

Install elasticsearch on 192.168.59.130

yum -y install elasticsearch-6.6.2.rpm
vim /etc/elasticsearch/elasticsearch.yml
Lines 17, 23, 55, and 59 need to be modified (in the default 6.x elasticsearch.yml these are cluster.name, node.name, network.host, and http.port).
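
The edited values themselves are only visible in the screenshot. Based on the log file name used below (wg007.log) and the topology, they would look something like this (node.name is an assumption):

cluster.name: wg007
node.name: kafka1
network.host: 192.168.59.130
http.port: 9200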

Start Elasticsearch and verify that it came up successfully

systemctl start elasticsearch
tailf /var/log/elasticsearch/wg007.log

Install kibana on 192.168.59.132

yum -y install kibana-6.6.2-x86_64.rpm
vim /etc/kibana/kibana.yml
systemctl start kibana
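
The kibana.yml changes are only shown in the screenshots. For this topology they would typically be the following (elasticsearch.url is the 6.x name of the setting; the values are assumed from the environment above):

server.port: 5601
server.host: "192.168.59.132"
elasticsearch.url: "http://192.168.59.130:9200"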

Open 192.168.59.132:5601 in a browser
End


Origin blog.csdn.net/APPLEaaq/article/details/108645941