Setting Up Log Collection (application logs + filebeat + logstash + elasticsearch + kibana)

I. Introduction

  After an application service goes live, whenever a problem appears (a logic bug or a system error), we need to analyze the logs. On a single machine this is not particularly painful, but in a distributed system the logs are scattered across different servers and analyzing them becomes much harder. At that point we have to build a log collection system that gathers the logs and helps us analyze them.

  The ELK stack consists of three components: Elasticsearch, Logstash, and Kibana. Logstash collects the logs, Elasticsearch stores them, and Kibana visualizes them. However, Logstash is fairly resource-hungry while Filebeat is lightweight, so we let Filebeat collect the logs and forward them to Logstash.

II. Setting up Elasticsearch

  1. Pull the docker image

docker pull 3laho3y3.mirror.aliyuncs.com/library/elasticsearch:6.7.1

  2. Create the es.yml file and the data directory

mkdir data
chmod 777 data
touch es.yml

# es.yml contents
cluster.name: im-elasticsearch-cluster
node.name: es-node1
network.host: 0.0.0.0
network.publish_host: <host-ip>
http.port: 9700
transport.tcp.port: 9800
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["<host1-ip>:9800","<host2-ip>:9800"]
# number of master-eligible nodes / 2 + 1
discovery.zen.minimum_master_nodes: 2
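
With the two master-eligible hosts listed in discovery.zen.ping.unicast.hosts, the formula in the comment gives 2/2 + 1 = 2, which is the value set above; each of the two hosts would run its own copy of es.yml with node.name and network.publish_host adjusted to that machine.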

  3. Adjust system settings

sysctl -w vm.max_map_count=262144
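
The sysctl change above takes effect immediately but does not survive a reboot; to make it persistent it can also be written to /etc/sysctl.conf:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p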

  4. Start Elasticsearch

docker run -d -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -p 9700:9700 -p 9800:9800 -v /home/im/es/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/im/es/data:/usr/share/elasticsearch/data  --name es01 3laho3y3.mirror.aliyuncs.com/library/elasticsearch:6.7.1

  5. Install the IK analyzer plugin

docker exec -it es01 /bin/bash
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.1/elasticsearch-analysis-ik-6.7.1.zip

  6. Restart the container

docker restart es01

  7. Check the startup logs

docker logs -f es01
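
Once the log shows the node has started (and after the restart in step 6), the cluster and the IK plugin can be checked over the REST port; replace <host-ip> with the host machine's address, and note the sample text passed to _analyze is only an illustration:

curl "http://<host-ip>:9700/_cat/health?v"
curl "http://<host-ip>:9700/_cat/nodes?v"
# confirm the IK analyzer installed in step 5 is loaded
curl -X POST "http://<host-ip>:9700/_analyze" -H 'Content-Type: application/json' -d '{"analyzer":"ik_max_word","text":"日志采集系统"}'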

III. Setting up Kibana

  1. Pull the image

docker pull 3laho3y3.mirror.aliyuncs.com/library/kibana:6.7.1

  2. Start Kibana

docker run --name es-kibana -e ELASTICSEARCH_HOSTS=http://<es-host-ip>:9700 -e SERVER_PORT=5601 -e SERVER_HOST=0.0.0.0 -p 5601:5601 -d 3laho3y3.mirror.aliyuncs.com/library/kibana:6.7.1

  3. Open the Kibana page

http://<kibana-host-ip>:5601/
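
If the page does not load, Kibana's status API is a quick way to see whether it is up and connected to Elasticsearch:

curl "http://<kibana-host-ip>:5601/api/status"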

IV. Setting up Logstash

  1. Pull the image

docker pull 3laho3y3.mirror.aliyuncs.com/library/logstash:6.7.1

  2. Start logstash

docker run --name im-logstash -d -p 5044:5044 -v /home/atwa/im/logstash/config:/usr/share/logstash/config 3laho3y3.mirror.aliyuncs.com/library/logstash:6.7.1

  3. Edit the configuration files

# Create logstash-chat.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:level}\s*\[%{DATA:thread}\]\s*\[%{DATA:logger}\]\s*%{GREEDYDATA:data}" }
    remove_field => ["message","tags","input","@version","ecs","agent","host","log"]
  }
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss.SSS"]
    target => "@timestamp"
  }
  json {
    source => "data"    # parse the data field as JSON
    remove_field => [ "@alert","alert","time" ]    # drop fields we do not need; this line is optional
  }
  if [typeNum] {
    mutate {
      remove_field => [ "data" ]
    }
  } else {
    mutate {
      add_field => { "typeNum" => "100" }
      rename => { "data" => "logContent" }
    }
    #drop{}
  }
}
output {
  elasticsearch{
    action => "index"
    hosts => "10.100.100.3:9700"
    index => "chat_websocket_log"
  }
  stdout {
  }
}
#pipelines.yml
- pipeline.id: main
  path.config: "/home/atwa/im/logstash/logstash-6.7.1/config/logstash-chat.conf"
#startup.options
LS_HOME=/home/atwa/im/logstash/logstash-6.7.1
LS_SETTINGS_DIR=/home/atwa/im/logstash/logstash-6.7.1/config
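
Before starting, the pipeline syntax can be validated; the path below assumes the LS_HOME layout configured in startup.options:

cd /home/atwa/im/logstash/logstash-6.7.1
bin/logstash -f config/logstash-chat.conf --config.test_and_exit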

  4. Start Logstash

setsid bin/logstash
# check the startup logs
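
For a tarball install under the LS_HOME above, Logstash writes its log to logs/logstash-plain.log by default, so the startup log can be followed with:

tail -f /home/atwa/im/logstash/logstash-6.7.1/logs/logstash-plain.log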

  5. Notes

(1) Do not put comments inside the output block of logstash-chat.conf, otherwise parsing problems are likely to occur.
(2) The grok expression in the filter of logstash-chat.conf matches application logs written with this output pattern:
     %date{yyyy-MM-dd HH:mm:ss.SSS} %level [%thread][%logger:%line] %msg%n
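
As a purely hypothetical example of how the pipeline handles such a line (the class name and values below are made up), an entry whose message body is JSON would be processed roughly like this:

# sample input line (hypothetical)
2020-04-08 12:30:45.123 INFO [nioEventLoopGroup-3-1][com.example.ChatHandler:88] {"typeNum":"3","userId":"u1001"}
# grok extracts time, level, thread, logger (including the line number) and puts the trailing JSON into data;
# the json filter then parses data so the event gains typeNum and userId fields,
# and the date filter copies time into @timestamp before data is removed.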

V. Setting up Filebeat

  1. Download and extract filebeat

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.1-linux-x86_64.tar.gz

tar -zxvf filebeat-7.6.1-linux-x86_64.tar.gz

  2. Edit filebeat.yml

flush.timeout: 1s
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/atwa/im/jar/websocket/logs/chat.log
  # per-input harvester settings
  close_inactive: 12h
  backoff: 1s
  max_backoff: 1s
  backoff_factor: 1
  # merge multi-line logs into one event: a line that does not start with a date
  # is appended to the previous log entry
  multiline.pattern: ^[1-9]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[1-2][0-9]|3[0-1])
  multiline.negate: true
  multiline.match: after
output.logstash:
  # The Logstash hosts
  hosts: ["logstash-ip:5044"]

  3. Start filebeat

setsid ./filebeat -e -c filebeat.yml -d "Publish"
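
Once filebeat is shipping events, an end-to-end check is to confirm that the index named in the Logstash output has been created (using the Elasticsearch address from the Logstash config):

curl "http://10.100.100.3:9700/_cat/indices?v" | grep chat_websocket_log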

Reposted from www.cnblogs.com/ssl-bl/p/12660181.html