0 Preface
Most production systems today run on Linux. In day-to-day operations, besides monitoring business logs, we sometimes also need to monitor the logs of the Linux system itself to help with troubleshooting and diagnosis. So in this post, we will walk through building a monitoring platform for Linux system logs.
As before, this tutorial focuses on getting things running quickly rather than explaining the underlying principles in depth.
1. Download
First, the download and installation of Elasticsearch and Kibana will not be covered again here; readers who are not yet familiar with them can refer to the earlier post:
ELK setup (1): distributed microservice log monitoring
Since my ELK environment is version 7.13.0, we need to download the matching version of filebeat from the official download page.
2. Introduction to filebeat
filebeat is a lightweight log shipper officially provided by Elastic. As the name suggests, it is mainly used to collect data from files. Written in Go, it can be installed on any server or host whose logs you want to collect; it periodically reads the target files and sends the data to Elasticsearch or Logstash. When the data volume is large, it can also output to middleware such as Kafka or Redis.
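For the Kafka case mentioned above, the output section of `filebeat.yml` would look roughly like the following sketch (the broker addresses and topic name are hypothetical placeholders, not from this tutorial's environment):

```yaml
# Send events to Kafka instead of Elasticsearch.
# Note: only one output may be enabled at a time in filebeat.yml.
output.kafka:
  # Hypothetical broker list; replace with your own brokers
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "filebeat-system-logs"
  compression: gzip
```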
According to the official documentation, filebeat consists of two main components: inputs and harvesters.
- harvester: a harvester reads the contents of a single file line by line. One harvester is started per file, and the harvester is responsible for opening and closing that file. filebeat also has a registrar component that records each file's offset, i.e., the position last read; the next time the file is opened, the offset is read from the registrar and reading continues from there.
- input: an input manages the harvesters and finds all files that match the configured read conditions. If the input type is log, the input finds all files on the drive that match the configured paths and starts a harvester for each of them.
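The two components above map directly onto the `filebeat.inputs` section of `filebeat.yml`. A minimal sketch of a log input (the path pattern below is only an illustrative example; adjust it to your environment):

```yaml
filebeat.inputs:
  # A 'log' input scans the configured paths and starts
  # one harvester per matching file
  - type: log
    enabled: true
    paths:
      - /var/log/*.log   # example path pattern, not a required value
```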
Finally, an output component is configured to ship the collected data.
filebeat supports collecting data from many services, including but not limited to MySQL, MongoDB, Nginx, Redis, ActiveMQ, PostgreSQL, RabbitMQ, and Tomcat; see the official documentation for the full list of modules.
Combined with the out-of-the-box dashboards provided by Kibana, you can quickly build a monitoring platform.
3. Installation
The following installation steps can also be found in Kibana under Home > Add data > System logs.
1. Unpack the archive (the macOS build is used as an example here; download the filebeat build matching your OS and version):

```shell
tar -zxvf filebeat-7.13.0-darwin-x86_64.tar.gz
```
2. Modify the filebeat.yml configuration file
```yaml
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  # Elasticsearch is a single node here, so set replicas to 0;
  # otherwise the index health will turn yellow
  index.number_of_replicas: 0

output.elasticsearch:
  hosts: ["192.168.244.11:9200"]
  username: "elastic"
  password: "elastic"

setup.kibana:
  host: "192.168.244.11:5601"
```
3. Enable the system module (skip this step if it is already enabled):

```shell
./filebeat modules enable system
```
4. Modify the configuration file of the system module:

```shell
vim modules.d/system.yml
```

The default configuration is used here, which collects the system log and the authorization log.
```yaml
- module: system
  # Syslog
  syslog:
    enabled: true
    # Log paths; if left empty, they are set automatically based on the OS
    #var.paths:

  # Authorization logs
  auth:
    enabled: true
    # Log paths; if left empty, they are set automatically based on the OS
    #var.paths:
```
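If your distribution writes logs to non-default locations, you can uncomment `var.paths` and set the paths explicitly. A sketch of such an override, assuming Debian-style log locations (the paths are examples; adjust them for your system):

```yaml
- module: system
  syslog:
    enabled: true
    # Explicit override for a Debian-style layout (example paths)
    var.paths: ["/var/log/syslog*"]
  auth:
    enabled: true
    var.paths: ["/var/log/auth.log*"]
```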
5. Load the Kibana dashboards (skip this step if they have already been loaded):

```shell
./filebeat setup
```
6. Start filebeat (run the following command in a new terminal window):

```shell
./filebeat -e
```
7. After startup, click the "Check data" button on Kibana's System logs page to verify that data is arriving. If the success prompt appears, the pipeline is working, and you can open the "Syslog dashboard" to view the data.
That's it. As this walkthrough shows, you can use the out-of-the-box dashboards provided by Kibana to quickly build the monitoring dashboard you want.