EFK enterprise log analysis deployment

Preface

1. Introduction to efk

Elasticsearch is a real-time, distributed and scalable search engine that allows full-text, structured search. It is usually used to index and search large amounts of log data, and it can also be used to search many different types of documents.

Beats is a powerful tool for data collection. Put Beats on the server alongside your containers, or deploy Beats as a function, and you can then centrally process the data in Elasticsearch. If you need more processing power, Beats can also ship the data to Logstash for transformation and parsing.

Kibana's core products are equipped with a number of classic functions: bar chart, line chart, pie chart, sunburst chart, and so on. Not only that, you can also use Vega syntax to design your own visualization graphics. All of these utilize the full aggregation capabilities of Elasticsearch.

Elasticsearch is usually deployed together with Kibana. Kibana is a powerful data visualization dashboard of Elasticsearch. Kibana allows you to browse Elasticsearch log data through a web interface.

2. Deployment plan


Software versions used in this deployment:

- Elasticsearch 5.5.0
- Kibana 5.5.1
- Filebeat 5.6.3
- Node.js v8.2.1 and PhantomJS 2.1.1 (for elasticsearch-head)

2.1 Optimize server settings first

Add the ulimit value to /etc/profile (for systems with root login).
So that the larger ulimit value is picked up on every reboot, append it to the bottom of /etc/profile:
echo "ulimit -n 65535" >> /etc/profile
source /etc/profile  # reload the modified profile
ulimit -n            # now prints 65535, done
Change the hostname, configure name resolution, install the build environment packages, and check the Java version.
node1 and node2 are configured identically, so the steps are not repeated here.

Synchronize the clock with Aliyun's NTP server

ntpdate time1.aliyun.com

2.2.1 Configure the Elasticsearch environment

[root@SERVER 10 ~]# hostnamectl set-hostname node1
[root@SERVER 10 ~]# su
Edit /etc/hosts:
[root@node1 ~]# vim /etc/hosts
192.168.100.10   node2
192.168.100.9   node1
[root@node1 ~]# scp /etc/hosts root@192.168.100.10:/etc/hosts

Test node communication

[root@node1 ~]#
[root@node1 ~]# ping node2
PING node2 (192.168.100.10) 56(84) bytes of data.
64 bytes from node2 (192.168.100.10): icmp_seq=1 ttl=64 time=0.753 ms
Check the Java version:
[root@node1 ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Disable the firewall:
systemctl stop firewalld
setenforce 0
iptables -F

Install Elasticsearch

rpm -ivh elasticsearch-5.5.0.rpm
Reload systemd and enable the service:
[root@node2 opt]# systemctl daemon-reload
[root@node2 opt]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node2 opt]#

Configuration file

Back up the original:
cp /etc/elasticsearch/elasticsearch.yml  /etc/elasticsearch/elasticsearch.yml.bak

Edit the configuration file:
vim /etc/elasticsearch/elasticsearch.yml
[root@node2 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: sha
node.name: node2
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node2 opt]#

[root@node1 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: sha
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node1 opt]#

Create the data directory

[root@node1 opt]# mkdir -p /data/elk_data

Give ownership to the elasticsearch user

[root@node1 opt]# chown elasticsearch:elasticsearch /data/elk_data/


Start the service

[root@node2 opt]# systemctl start elasticsearch.service


[root@server-9 log]# netstat -ntap | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      8720/java
[root@SERVER 10 opt]# netstat -ntap | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      17219/java
[root@SERVER 10 opt]#

2.2.2 Visit the node1 and node2 addresses from a browser on the host

Test access to the nodes and check the returned parameters
# direct access
192.168.100.9:9200
192.168.100.10:9200
# check the health of node1 and node2
192.168.100.9:9200/_cluster/health?pretty
192.168.100.10:9200/_cluster/health?pretty

# check cluster state information
192.168.100.9:9200/_cluster/state?pretty
192.168.100.10:9200/_cluster/state?pretty
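The health endpoint's JSON can also be checked programmatically instead of eyeballing it in a browser. A minimal sketch (the `status` values green/yellow/red are standard Elasticsearch semantics; the sample response body is illustrative):

```python
import json

def cluster_ok(health_json: str) -> bool:
    """Return True when the cluster reports green or yellow status."""
    status = json.loads(health_json).get("status")
    return status in ("green", "yellow")

# Shape of a /_cluster/health?pretty response (values illustrative)
sample = '{"cluster_name": "sha", "status": "green", "number_of_nodes": 2}'
print(cluster_ok(sample))  # → True
```

A red status means at least one primary shard is unassigned and the check should fail.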

2.3 Deploy the elasticsearch-head plugin (install on both node1 and node2)

Compile and install the Node.js build dependencies (this step takes quite a while):
yum install gcc gcc-c++ make -y 

[root@node1 elk]# tar xzvf node-v8.2.1.tar.gz -C /opt
[root@node1 elk]# cd /opt/node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure 
[root@node1 node-v8.2.1]# make -j4       # adjust -j to the machine's CPU cores
[root@node1 node-v8.2.1]# make install

Install PhantomJS (front-end rendering)

tar xjvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src

[root@node1 opt]# cd /usr/local/src
[root@node1 src]# ls
phantomjs-2.1.1-linux-x86_64
[root@node1 src]# cd phantomjs-2.1.1-linux-x86_64/bin
[root@node1 bin]# cp phantomjs /usr/local/bin

Install elasticsearch-head (a plugin for visual cluster management)

[root@node1 opt]# tar xzvf elasticsearch-head.tar.gz -C /usr/local/src
[root@node1 opt]# cd /usr/local/src/
[root@node1 src]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install

Modify the main configuration file

[root@node2 elasticsearch-head]# vi /etc/elasticsearch/elasticsearch.yml
# append the following at the end of the file
http.cors.enabled: true
http.cors.allow-origin: "*"
[root@node2 elasticsearch-head]# systemctl restart elasticsearch

Start the elasticsearch-head server

[root@node2 elasticsearch-head]# cd /usr/local/src/elasticsearch-head/
[root@node2 elasticsearch-head]# npm run start &
[1] 65545
[root@node2 elasticsearch-head]#
> elasticsearch-head@0.0.0 start /usr/local/src/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[root@node2 elasticsearch-head]# netstat -lnupt | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      65555/grunt
[root@node2 elasticsearch-head]#  netstat -lnupt | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      65453/java

2.4 Access the elasticsearch cluster status on the physical machine

Test the page on port 9100
http://192.168.100.9:9100/   ("the cluster shows healthy in green; red would indicate an error")

After entering, connect to the two nodes' port-9200 addresses
http://192.168.100.10:9100/

Two ways to create an index:
First: create it directly in the page
Second: create it from the command line

[root@node1 elasticsearch-head]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}
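Before moving on, the response above can be validated programmatically rather than read by eye. A small sketch that checks the fields returned by the PUT call:

```python
import json

# The response body printed by the curl -XPUT call above
response = '''
{
  "_index": "index-demo",
  "_type": "test",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {"total": 2, "successful": 2, "failed": 0},
  "created": true
}
'''

doc = json.loads(response)
# The write succeeded only if the document was created and no shard failed
assert doc["result"] == "created"
assert doc["_shards"]["failed"] == 0
print(doc["_index"], doc["_id"])  # → index-demo 1
```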

2.5 Install Kibana on the node1 host

[root@node1 opt]# rpm -ivh kibana-5.5.1-x86_64.rpm
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[root@node1 opt]#  cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak
[root@node1 kibana]#  vi kibana.yml
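The kibana.yml edits themselves are not shown; for Kibana 5.5 the commonly changed settings look roughly like the sketch below. The values are assumptions based on this deployment's addresses, not confirmed by the original:

```yaml
server.port: 5601                               # Kibana listen port
server.host: "0.0.0.0"                          # listen on all interfaces
elasticsearch.url: "http://192.168.100.9:9200"  # Elasticsearch node to query
kibana.index: ".kibana"                         # index Kibana stores its state in
```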

On the physical machine, access the Elasticsearch cluster to check whether this log has been recorded.

2.6 Deploy filebeat (deployed on Apache server)

Deploy the Apache service and install filebeat:
[root@pc-8 ~]# ntpdate time1.aliyun.com
[root@pc-8 ~]# yum install httpd


[root@pc-8 opt]# ls
filebeat-5.6.3-x86_64.rpm  rh
[root@pc-8 opt]# rpm -ivh filebeat-5.6.3-x86_64.rpm
warning: filebeat-5.6.3-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-5.6.3-1                 ################################# [100%]

2.6.1 Convert the Apache logs to JSON format

Insert the following below line 202 of httpd.conf:

LogFormat "{ \
\"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
\"@version\": \"1\", \
\"tags\":[\"apache\"], \
\"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
\"clientip\": \"%a\", \
\"duration\": %D, \
\"status\": %>s, \
\"request\": \"%U%q\", \
\"urlpath\": \"%U\", \
\"urlquery\": \"%q\", \
\"bytes\": %B, \
\"method\": \"%m\", \
\"site\": \"%{Host}i\", \
\"referer\": \"%{Referer}i\", \
\"useragent\": \"%{User-agent}i\" \
}" apache_json
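With this LogFormat in place, each access-log line is a self-contained JSON object that downstream tools can parse directly. A minimal sketch of consuming one line (the sample values are illustrative; the field names match the LogFormat above):

```python
import json

# One log line as emitted by the apache_json LogFormat above (sample values)
line = ('{ "@timestamp": "2021-09-20T10:15:30+0800", "@version": "1", '
        '"tags": ["apache"], '
        '"message": "192.168.100.1 - - [20/Sep/2021:10:15:30 +0800] '
        '\\"GET / HTTP/1.1\\" 200 4897", '
        '"clientip": "192.168.100.1", "duration": 1342, "status": 200, '
        '"request": "/", "urlpath": "/", "urlquery": "", "bytes": 4897, '
        '"method": "GET", "site": "www.example.com", "referer": "-", '
        '"useragent": "Mozilla/5.0" }')

entry = json.loads(line)
# Numeric fields (%D, %>s, %B) arrive as JSON numbers, not strings
print(entry["clientip"], entry["status"], entry["method"])  # → 192.168.100.1 200 GET
```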




2) Change CustomLog "logs/access_log" combined to CustomLog "logs/access_log" apache_json (replace the default combined with apache_json; the line is at roughly line 217 of the original file).

Clear the original access.log and restart the httpd service.

2.6.2 Modify filebeat configuration file

[root@pc-8 filebeat]# vim /etc/filebeat/filebeat.yml

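The filebeat.yml edits appear only as screenshots in the original. As a sketch, a Filebeat 5.6 configuration for the JSON access log above might look like this; the paths and hosts are assumptions based on this deployment, not confirmed values:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/httpd/access_log   # the Apache JSON access log configured above
  json.keys_under_root: true      # lift the JSON fields to the event top level
  json.overwrite_keys: true       # let log fields override Filebeat's own keys

output.elasticsearch:
  hosts: ["192.168.100.9:9200"]   # node1
```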

2.7 Visit 192.168.100.9:9100 to view the index


2.8 Visit 192.168.100.9:5601 to create an index pattern and view the log interface



Origin blog.csdn.net/BIGmustang/article/details/108692364