EFK quick start: deploy a three-node Elasticsearch cluster plus Filebeat and Kibana, build a working demo environment, and verify the results
Author: "The coyotes made Britain"; reposts and submissions are welcome
Table of Contents
▪ Purpose
▪ Experimental architecture
▪ EFK software installation
▪ Elasticsearch configuration
▪ Filebeat configuration
▪ Kibana configuration
▪ Start the services
▪ Kibana interface configuration
▪ Test
▪ Follow-up articles
Purpose
▷ Filebeat collects nginx access logs in real time
▷ Filebeat ships the collected logs to the Elasticsearch cluster
▷ Kibana displays the logs
Experimental architecture
▷ Server configuration: 192.168.1.11 runs nginx and Filebeat, 192.168.1.21 runs Kibana, and 192.168.1.31/32/33 form the three-node Elasticsearch cluster
▷ Architecture diagram
EFK software installation
Versions
▷ Elasticsearch 7.3.2
▷ Filebeat 7.3.2
▷ Kibana 7.3.2
Notes
▷ All three components must be the same version
▷ The Elasticsearch cluster needs at least three nodes, and the total node count must be odd
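The odd-count rule comes from quorum-based master election: a cluster of N master-eligible nodes needs a majority of N/2 + 1 votes, so an even node count adds hardware without adding fault tolerance. A quick sketch of the arithmetic:

```shell
# Quorum for N master-eligible nodes is N/2 + 1 (integer division);
# note that 4 nodes tolerate no more failures than 3 do.
for n in 2 3 4 5; do
  echo "$n nodes -> quorum $(( n / 2 + 1 )), tolerates $(( n - n / 2 - 1 )) failure(s)"
done
```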
Installation paths
▷ /opt/elasticsearch
▷ /opt/filebeat
▷ /opt/kibana
Install Elasticsearch: perform the same steps on all three ES nodes
mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv elasticsearch-7.3.2 /opt/elasticsearch
useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
mkdir -p /opt/logs/elasticsearch
chown elasticsearch.elasticsearch /opt/elasticsearch -R
chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R
# The number of VMAs (virtual memory areas) a single process may own must exceed 262144,
# otherwise Elasticsearch fails with: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]
echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
sysctl -p
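Before starting Elasticsearch it is worth confirming that the kernel actually picked up the new value; this reads the same setting that `sysctl -p` just applied:

```shell
# Print the effective limit; it should show 655350 after the change above,
# and must be at least 262144 for Elasticsearch to start.
cat /proc/sys/vm/max_map_count
```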
Install Filebeat
mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.2-linux-x86_64.tar.gz
mkdir -p /opt/logs/filebeat/
tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz
mv filebeat-7.3.2-linux-x86_64 /opt/filebeat
Install Kibana
mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz
tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz
mv kibana-7.3.2-linux-x86_64 /opt/kibana
useradd kibana -d /opt/kibana -s /sbin/nologin
chown kibana.kibana /opt/kibana -R
Install nginx (to generate logs for Filebeat to collect)
# Install only on 192.168.1.11
yum install -y nginx
/usr/sbin/nginx -c /etc/nginx/nginx.conf
Elasticsearch configuration
▷ 192.168.1.31 /opt/elasticsearch/config/elasticsearch.yml
# Cluster name
cluster.name: my-application
# Node name
node.name: 192.168.1.31
# Log location
path.logs: /opt/logs/elasticsearch
# IP this node binds to
network.host: 192.168.1.31
# HTTP port of this node
http.port: 9200
# Transport port of this node
transport.port: 9300
# List of other hosts in the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# When a brand-new Elasticsearch cluster starts for the first time, the set of master-eligible nodes whose votes are counted in the first election
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# Enable cross-origin resource sharing (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Allow recovery as soon as 2 data or master nodes have joined the cluster
gateway.recover_after_nodes: 2
▷ 192.168.1.32 /opt/elasticsearch/config/elasticsearch.yml
# Cluster name
cluster.name: my-application
# Node name
node.name: 192.168.1.32
# Log location
path.logs: /opt/logs/elasticsearch
# IP this node binds to
network.host: 192.168.1.32
# HTTP port of this node
http.port: 9200
# Transport port of this node
transport.port: 9300
# List of other hosts in the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# When a brand-new Elasticsearch cluster starts for the first time, the set of master-eligible nodes whose votes are counted in the first election
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# Enable cross-origin resource sharing (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Allow recovery as soon as 2 data or master nodes have joined the cluster
gateway.recover_after_nodes: 2
▷ 192.168.1.33 /opt/elasticsearch/config/elasticsearch.yml
# Cluster name
cluster.name: my-application
# Node name
node.name: 192.168.1.33
# Log location
path.logs: /opt/logs/elasticsearch
# IP this node binds to
network.host: 192.168.1.33
# HTTP port of this node
http.port: 9200
# Transport port of this node
transport.port: 9300
# List of other hosts in the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# When a brand-new Elasticsearch cluster starts for the first time, the set of master-eligible nodes whose votes are counted in the first election
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# Enable cross-origin resource sharing (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Allow recovery as soon as 2 data or master nodes have joined the cluster
gateway.recover_after_nodes: 2
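Once all three nodes are started, the cluster state can be checked over the HTTP port. The helper below is a minimal sketch (the address is one of this article's demo nodes; any node works) that extracts the `status` field from the `_cluster/health` response with plain grep/cut instead of a JSON tool:

```shell
# Print the cluster status (green / yellow / red) for the given host:port
es_health() {
  curl -s "http://$1/_cluster/health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4
}
# Usage: es_health 192.168.1.31:9200
```

With all three nodes joined and no unassigned replicas, the status should be "green".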
Filebeat configuration
192.168.1.11 /opt/filebeat/filebeat.yml
# Log inputs
filebeat.inputs:
# Input type
- type: log
  # Enable this input
  enabled: true
  # File location
  paths:
    - /var/log/nginx/access.log
  # Custom fields
  fields:
    type: nginx_access

# Output to Elasticsearch
output.elasticsearch:
  # Elasticsearch cluster
  hosts: ["http://192.168.1.31:9200",
          "http://192.168.1.32:9200",
          "http://192.168.1.33:9200"]
  # Index settings
  indices:
    # Index name
    - index: "nginx_access_%{+yyy.MM}"
      # Use this index when the type is nginx_access (matches fields.type above)
      when.equals:
        fields.type: "nginx_access"

# Disable the built-in template
setup.template.enabled: false

# Enable logging to files
logging.to_files: true
# Log level
logging.level: info
# Log file settings
logging.files:
  # Log location
  path: /opt/logs/filebeat/
  # Log file name
  name: filebeat
  # Number of rotated files to keep; must be between 2 and 1024
  keepfiles: 7
  # Permissions for the rotated log files
  permissions: 0600
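The `%{+yyy.MM}` suffix in the index name is a date format, so events are routed into one index per month, which makes deleting old logs cheap (drop the whole index). A rough equivalent of the expansion using `date(1)`:

```shell
# The monthly index Filebeat would write to right now, assuming the
# "nginx_access_%{+yyy.MM}" pattern from filebeat.yml above
printf 'nginx_access_%s\n' "$(date +%Y.%m)"
```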
Kibana configuration
192.168.1.21 /opt/kibana/config/kibana.yml
# Port this node listens on
server.port: 5601
# IP of this node
server.host: "192.168.1.21"
# Name of this node
server.name: "192.168.1.21"
# Elasticsearch cluster addresses
elasticsearch.hosts: ["http://192.168.1.31:9200",
    "http://192.168.1.32:9200",
    "http://192.168.1.33:9200"]
Start the services
# Start Elasticsearch (run on all 3 ES nodes)
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
# Start Filebeat
/opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml -d "publish"
# Start Kibana
sudo -u kibana /opt/kibana/bin/kibana -c /opt/kibana/config/kibana.yml
The commands above run everything in the foreground. systemd unit configurations will be covered in later articles of the "EFK Tutorial" series, so stay tuned!
Kibana interface configuration
1️⃣ Open 192.168.1.21:5601 in a browser; seeing the following interface means the startup succeeded
2️⃣ Click "Try our sample data"
3️⃣ When asked "Help us improve the Elastic Stack by providing usage statistics for basic features. We will not share this data outside of Elastic", click "No"
4️⃣ Under "Add Data to Kibana", click "Add data"
5️⃣ Enter the view
Test
Access nginx to generate a log entry
curl -I "http://192.168.1.11"
View the data in Kibana
1️⃣ Create an index pattern
2️⃣ Enter the name of the index for which to create the pattern
3️⃣ View the data generated by the earlier curl request
Follow-up articles
This is the first article in the "EFK Tutorial" series. Follow-up articles will be released gradually, covering role separation, performance optimization, and plenty of other practical material, so stay tuned!