Deploy an ELK log collection system with Docker


Note: elasticsearch, kibana, logstash, and filebeat must all run the same version. elasticsearch-head is a browser-based tool for inspecting elasticsearch's status and index data.

Working principle: filebeat (a lightweight log shipper) collects the logs and pushes them to Logstash (a data collection and processing engine), which filters, parses, enriches, and normalizes them; the processed logs are stored in Elasticsearch (a distributed search engine), and finally Kibana (a visualization platform) searches and displays the data indexed in Elasticsearch.
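
In shorthand, the data flow (with the ports used later in this guide) is:

filebeat (collect) -> logstash:5044 (filter/transform) -> elasticsearch:9200 (index/store) -> kibana:5601 (search/visualize)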

Environment:

Application          Version
docker               19.03.9
elasticsearch        7.17.2
kibana               7.17.2
logstash             7.17.2
filebeat             7.17.2
elasticsearch-head   5.0

1. Pull the images

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.2
docker pull mobz/elasticsearch-head:5
docker pull docker.elastic.co/kibana/kibana:7.17.2
docker pull docker.elastic.co/logstash/logstash:7.17.2
docker pull docker.elastic.co/beats/filebeat:7.17.2
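
A quick sanity check that all five images are present locally:

# List the images pulled above; all five should appear
docker images | grep elastic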

2. elasticsearch installation

  1. Prepare the elasticsearch.yml file
# Create the directory
mkdir /home/southgisdata/elasticsearch
# Create the elasticsearch.yml file
vi /home/southgisdata/elasticsearch/elasticsearch.yml
------------------------begin---------------------------
network.host: 0.0.0.0
cluster.name: "docker-cluster"
http.cors.enabled: true
http.cors.allow-origin: "*"
#cluster.initial_master_nodes: ["node-1"]
xpack:
  ml.enabled: false
  monitoring.enabled: false
  security.enabled: false
  watcher.enabled: false
# Uncomment the following two lines to enable authentication
#xpack.security.enabled: true
#xpack.security.transport.ssl.enabled: true
------------------------end---------------------------
  2. Run the elasticsearch container
docker run -d --restart=always -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-v /home/southgisdata/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
--name elasticsearch docker.elastic.co/elasticsearch/elasticsearch:7.17.2
  3. Verify
    Browser verification: visit http://192.168.1.100:9200

Command line verification: curl http://localhost:9200/_nodes/http?pretty

Note: if elasticsearch fails to start, the container logs (docker logs elasticsearch) typically show a "max virtual memory areas vm.max_map_count [65530] is too low" bootstrap error or a master-node discovery error. Handle it as follows:

  1. Increase the kernel's memory-map limit:
    Append the following line to the end of /etc/sysctl.conf (apply it with the commands shown after this list):
vm.max_map_count=262144
  2. Uncomment the following line in elasticsearch.yml:
cluster.initial_master_nodes: ["node-1"]
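
To apply the sysctl change without rebooting, then restart elasticsearch:

# Reload kernel settings from /etc/sysctl.conf
sysctl -p
# Confirm the new value is active
sysctl vm.max_map_count
# Restart the container so elasticsearch re-runs its bootstrap checks
docker restart elasticsearch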

3. elasticsearch-head installation

  1. Run the elasticsearch-head container
docker run -d --restart=always -p 9100:9100 --name elasticsearch-head mobz/elasticsearch-head:5
  2. Modify vendor.js so that index data displays under the Data Browser tab. The head plugin sends requests with a Content-Type of application/x-www-form-urlencoded, which elasticsearch 6+ rejects; the two edits below switch it to application/json.
# Create the directory
mkdir -p /home/southgisdata/elasticsearch-head
# Enter the directory
cd /home/southgisdata/elasticsearch-head
# Copy vendor.js from the container to the host
docker cp elasticsearch-head:/usr/src/app/_site/vendor.js ./
# Change the request Content-Type in vendor.js
sed -i '/contentType:/s/application\/x-www-form-urlencoded/application\/json;charset=UTF-8/' vendor.js

sed -i '/var inspectData = s.contentType/s/application\/x-www-form-urlencoded/application\/json;charset=UTF-8/' vendor.js
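
A quick check that both replacements took effect (should print 2):

grep -c 'application/json;charset=UTF-8' vendor.js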
  3. Run the elasticsearch-head container again
# Remove the old container
docker rm -f elasticsearch-head
# Run the container again with vendor.js mounted from the host
docker run -d --restart=always -p 9100:9100 -v /home/southgisdata/elasticsearch-head/vendor.js:/usr/src/app/_site/vendor.js --name elasticsearch-head mobz/elasticsearch-head:5
  4. Verify
    Browser verification: visit http://192.168.1.100:9100
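An equivalent command-line check that the head service is responding:

curl http://localhost:9100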

4. logstash installation

  1. Prepare logstash.yml and pipelines.yml configuration files
# Create the directory
mkdir -p /home/southgisdata/logstash/config
# Create the logstash.yml file
vi /home/southgisdata/logstash/config/logstash.yml
------------------------begin---------------------------
config:
  reload:
    automatic: true
    interval: 3s
xpack:
  management.enabled: false
  monitoring.enabled: false
------------------------end---------------------------

# Create the pipelines.yml file
vi /home/southgisdata/logstash/config/pipelines.yml
------------------------begin---------------------------
- pipeline.id: test
  path.config: "/usr/share/logstash/pipeline/logstash-filebeat.conf"
------------------------end---------------------------
  2. Prepare the logstash-filebeat.conf configuration file
# Create the directory
mkdir -p /home/southgisdata/logstash/pipeline
# Create the logstash-filebeat.conf file
vi /home/southgisdata/logstash/pipeline/logstash-filebeat.conf
------------------------begin---------------------------
# Port to listen on for beats input
input {
  beats {
    port => 5044
  }
}

# Filter configuration (optional, commented out; see the example after this file)
#filter{
#  grok{
#    match => {
#    "message" => "%{COMBINEDAPACHELOG}"
#   }
#  }
#}

# Output to elasticsearch
output {
  elasticsearch {
    hosts => ["http://192.168.1.100:9200"]
    index => "nginx_log"
  }
}
------------------------end---------------------------
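
To parse the nginx access logs into structured fields rather than storing each line as a raw message string, the commented-out filter above can be enabled. A minimal sketch, assuming nginx writes the default combined log format (which the COMBINEDAPACHELOG pattern matches):

filter {
  grok {
    # Split each raw line into fields such as clientip, verb, request, response
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}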
  3. Run the logstash container
docker run -d  -p 5044:5044 \
-v /home/southgisdata/logstash/pipeline:/usr/share/logstash/pipeline \
-v /home/southgisdata/logstash/config:/usr/share/logstash/config \
--name logstash docker.elastic.co/logstash/logstash:7.17.2
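
Startup progress can be followed in the container logs; logstash logs a "Pipelines running" message once the pipeline is up:

docker logs -f logstash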
  4. Verify
    Logstash takes a minute or two to start. Once log data begins to flow (after filebeat is set up in step 6), the nginx_log index appears in the elasticsearch-head web interface.
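The index can also be listed from the command line once events have arrived:

curl http://192.168.1.100:9200/_cat/indices?v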

5. kibana installation

  1. Prepare the kibana.yml configuration file
# Create the directory
mkdir -p /home/southgisdata/kibana
# Create the kibana.yml configuration file
vi /home/southgisdata/kibana/kibana.yml
------------------------begin---------------------------
server.name: kibana
# Listen on all addresses
server.host: "0.0.0.0"
# elasticsearch address
elasticsearch.hosts: ["http://192.168.1.100:9200/"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
------------------------end---------------------------
  2. Run the kibana container
docker run -d --restart=always -p 5601:5601 \
-v /home/southgisdata/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
--name kibana docker.elastic.co/kibana/kibana:7.17.2
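
Kibana can take a minute to come up; its status endpoint gives a quick health check:

curl http://localhost:5601/api/status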
  3. Verify
    Check elasticsearch-head and you will find several new indexes, generated by kibana.

6. filebeat installation

  1. Prepare the filebeat.yml configuration file
# Create the directory
mkdir -p /home/southgisdata/filebeat
# Create the filebeat.yml configuration file
vi /home/southgisdata/filebeat/filebeat.yml
------------------------begin---------------------------
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/access.log   # log file path

output.logstash:
  hosts: ["192.168.1.100:5044"]  # logstash address
------------------------end---------------------------
  2. Run the filebeat container
    This mounts the filebeat configuration file and the nginx access log into the container, so changes to the nginx log on the host are seen by filebeat.
docker run -d --restart=always --name filebeat \
-v /home/southgisdata/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v /usr/local/nginx/logs/access.log:/var/log/access.log \
docker.elastic.co/beats/filebeat:7.17.2
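
Once the container is running, filebeat's built-in self-tests can confirm that the config parses and that logstash is reachable:

docker exec filebeat filebeat test config
docker exec filebeat filebeat test output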
  3. Verify
    Open the nginx_log index in elasticsearch-head and you should see the nginx log data.
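If no data appears, generate some traffic so nginx appends to access.log (assuming nginx serves on 192.168.1.100:80):

curl http://192.168.1.100/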

7. Display the collected data through kibana

In Kibana (http://192.168.1.100:5601), create an index pattern for nginx_log under Stack Management, then browse and search the collected log data in Discover. (Screenshots omitted.)
