Installing Filebeat with Docker

Filebeat is a log file shipper. After you install it on your server, Filebeat monitors log directories or specific log files, tails them (tracking changes and reading continuously), and forwards the data to Elasticsearch or Logstash for storage.

Filebeat's workflow is as follows: when you start Filebeat, it launches one or more prospectors that scan the log directories or files you specified. For each log file a prospector finds, Filebeat starts a harvester; each harvester reads the new content of a single log file and sends the new log data to the spooler, which aggregates the events. Finally, Filebeat sends the aggregated data to the destination you configured.

You can think of it as a lightweight Logstash, with higher efficiency.

Filebeat official site:

https://www.elastic.co/cn/products/beats/filebeat

1. First, choose a Filebeat Docker image. Here I use prima/filebeat.
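If you want to fetch the image ahead of time (docker run will also pull it automatically on first use), you can pull it explicitly:

docker pull prima/filebeat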

2. Create the Filebeat configuration file filebeat.yml and place it in the /home/filebeat/ directory. The filebeat.yml configuration is as follows:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  # Paths of the log files Filebeat should read; for multiple locations you can
  # use glob patterns or list several entries under paths.
  paths:
    - /home/jenkins/workspace/*/docker/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================
# The modules feature is not used here (I have not figured out how to use it yet),
# so I commented it out as well.
#filebeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  #reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

#setup.template.settings:
  #index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
# I am not shipping directly to Elasticsearch, so this output is commented out.
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.1.33:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts: the address and port that Logstash exposes on the ELK server.
  # SSL felt like too much trouble for an internal network, so the SSL settings
  # below are left commented out.
  hosts: ["172.16.20.4:5044"]
  logging.metrics.enabled: false
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

Notes on the Filebeat container:
filebeat.yml is mounted as the Filebeat configuration file.
logs is the directory where the log files are mounted into the container.
registry keeps a record of what has been read, so that if the Filebeat container dies it does not have to re-read all the logs after a restart.
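As a concrete sketch of those mounts as docker run flags: the /registry path on the container side is hypothetical here, since where the registry file lives depends on the image and on the filebeat.registry_file setting, so check your image's documentation:

# filebeat.yml -> config file; the workspace mount -> logs to read;
# /registry -> hypothetical in-container registry path, persisted on the host
docker run -d --name filebeat \
  -v /home/filebeat/filebeat.yml:/filebeat.yml \
  -v /home/jenkins/workspace/:/home/jenkins/workspace/ \
  -v /home/filebeat/registry:/registry \
  --net=host prima/filebeat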


3. Switch to the ELK server and configure the Logstash beats input:

 
1) Run the following command to get a shell inside the ELK container:

docker exec -it elk /bin/bash   # elk is the name of my ELK Docker container


2) Edit Logstash's input configuration file (you could also handle this by mapping the file in from the host):

root@b15eb2b7fdbb:/# cd /etc/logstash/conf.d
root@b15eb2b7fdbb:/etc/logstash/conf.d# vim 02-beats-input.conf

Change the file contents to the following:

client_inactivity_timeout is the idle timeout for Beats client connections; the default is 60 seconds.

input {
  beats {
    port => 5044
    client_inactivity_timeout => 36000
  }
}
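Events arriving on this input still need an output section on the Logstash side. A minimal sketch, assuming the sebp/elk image's default output file (30-output.conf) and an index name chosen to match the filebeat-* pattern created in step 6:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"  # daily indices matching the filebeat-* Kibana pattern
  }
}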


4. Restart the ELK container:


docker restart elk




5. Switch to the Filebeat client machine and run the following command to start the Filebeat Docker container:

 
 
docker run -d --name filebeat -v /home/filebeat/filebeat.yml:/filebeat.yml -v /home/jenkins/workspace/:/home/jenkins/workspace/ --net=host prima/filebeat
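To verify that the container started and is shipping to Logstash, tail its logs; connection failures to 172.16.20.4:5044 would show up here:

docker logs -f filebeat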

6. Open the Kibana management page at http://ip:5601 and create a new index pattern named filebeat-*.

The following is an example:

Note: the project's app.log must be mounted into the Docker container.

version: "3"  # docker-compose file version; do not change
services:
  district:  # service name in the compose file, usually the microservice name; avoid clashing with other services
    image: "openjdk:8-jre-alpine"  # Docker image name
    hostname: district  # container hostname
    container_name: district  # container name
    volumes:  # mounted files
      - ./app.jar:/app.jar  # the jar to run
      - ./app.log:/app.log  # [mount app.log]
      - ./run.sh:/run.sh  # startup script
    ports:  # port mappings
      - "9108:9108"
    environment:  # environment variables
      - TZ=Asia/Shanghai  # set the time zone
    command: sh /run.sh  # startup command
    network_mode: bridge  # network mode: host, bridge, none, etc.; we use bridge
    restart: unless-stopped  # restart policy: unless-stopped, always, etc.; unless-stopped restarts the container unless it was stopped manually
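Save this as docker-compose.yml in the same directory as app.jar, app.log, and run.sh, then start the service:

docker-compose up -d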


Note: the project's log output must be written to app.log:

java -jar /app.jar >/app.log
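A minimal run.sh sketch to go with the compose file above; exec and 2>&1 are my additions, so that the JVM runs as PID 1 (docker stop signals reach it) and stderr output such as stack traces also lands in app.log:

#!/bin/sh
# start the service; send stdout and stderr to the mounted log file
exec java -jar /app.jar >/app.log 2>&1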

Add log statements in our project code:

log.info( "测试日志收集第一行" +depth);
log.error( "test" +depth);

Once this code path is invoked, you will see the entries come through.
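For context, a minimal sketch of where such calls might live; the class and method names are hypothetical, and plain SLF4J is assumed as the logging facade:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DistrictController {  // hypothetical class name
    private static final Logger log = LoggerFactory.getLogger(DistrictController.class);

    public void query(int depth) {  // hypothetical method; depth is whatever value you want to log
        log.info("test log collection, line one " + depth);
        log.error("test" + depth);
    }
}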


Reposted from blog.csdn.net/qq_34490951/article/details/81032821