ELK Stack Deployment

Official site: https://www.elastic.co

Environment:
| IP | Hostname | Services | User/Group |
| ------ | ------ | ------ | ------ |
| 192.168.20.3 | node2003 | kibana 6.5, filebeat | es |
| 192.168.20.4 | node2004 | elasticsearch 6.5, jdk8 | es |
| 192.168.20.5 | node2005 | elasticsearch 6.5, jdk8 | es |
| 192.168.20.6 | node2006 | elasticsearch 6.5, jdk8 | es |
| 192.168.20.7 | node2007 | logstash 6.5, jdk8 | es |

Installing the Elasticsearch cluster

node2004:

~]# pwd
/usr/local/pkg/
~]# ll
-rw-r--r-- 1 root root 113320120 Dec 21 05:10 elasticsearch-6.5.2.tar.gz
-rw-r--r-- 1 root root 191753373 Dec 21 05:10 jdk-8u191-linux-x64.tar.gz
~]# tar xf jdk-8u191-linux-x64.tar.gz 
~]# mv jdk1.8.0_191/ jdk8
~]# tar xf elasticsearch-6.5.2.tar.gz 
~]# mv elasticsearch-6.5.2 elasticsearch
~]# cd elasticsearch
~]# mkdir data      // data directory; a dedicated storage volume can be mounted here
~]# useradd es
~]# chown -R es.es /usr/local/pkg/elasticsearch  /usr/local/pkg/jdk8    // give both directories to the es user; the service is run only as es

Edit the configuration file

~]# vim config/elasticsearch.yml
  • cluster.name: myes
    Never reuse the same cluster name across different environments, or nodes may end up joining the wrong cluster; the cluster.name value is what distinguishes one cluster from another.
  • node.name: ${HOSTNAME}
    Give every node a meaningful, clear, descriptive name.
  • node.master: true
    Whether this node is eligible to run for master. Defaults to true.
  • node.data: false
    Whether this node stores data. Defaults to true.
  • node.ingest: true
    Enables ingest pre-processing, functionally similar to Logstash.
  • path.data: /usr/local/pkg/elasticsearch/data
    Data directory; a separately mounted volume is recommended.
  • path.logs: /usr/local/pkg/elasticsearch/logs
    Log directory.
  • bootstrap.memory_lock: true
    Disables swapping.
  • network.host: 192.168.20.4
    The address the node binds to and publishes to clients.
  • http.port: 9200
    HTTP port for client traffic.
  • discovery.zen.ping.unicast.hosts: ["192.168.20.4:9300", "192.168.20.5:9300","192.168.20.6:9300"]
    Seed list of cluster nodes; intra-cluster communication uses TCP port 9300.
  • discovery.zen.minimum_master_nodes: 2

    The discovery.zen.minimum_master_nodes setting is critical to cluster stability. It helps prevent split brain: the situation where two master nodes exist in one cluster at the same time.
    A cluster that splits is in danger of losing data, because the master is the supreme authority of the cluster: it decides when new indices may be created, how shards move, and so on. With two masters, data integrity cannot be guaranteed, because two nodes each believe they control the cluster.
    This setting tells Elasticsearch not to hold a master election until enough master-eligible candidates are present, and only then to elect one.
    It should be set to a quorum (a majority) of the master-eligible nodes: quorum = (master-eligible nodes / 2) + 1.

  • If you have 10 nodes (all holding data and all master-eligible), the quorum is 6.
  • If you have two nodes, the quorum is 2, but then the whole cluster becomes unavailable as soon as either node fails. Setting it to 1 keeps the cluster functional but gives no protection against split brain. So run at least an odd number of master-eligible nodes.

    Note: the election mechanics here can be compared with the classic Paxos protocol.

  • gateway.recover_after_nodes: 2
    How many nodes in the cluster must be up before recovery is allowed to proceed; it only applies during restarts. Suppose you have 3 nodes (all holding data), one of them the master, and you take the cluster offline for maintenance. Afterwards you reboot all three machines: the two non-master nodes come up, but the master fails to start. Once the waiting period expires, the two running nodes hold a new election; the new master then finds the data unevenly distributed (one machine's data is missing), so the nodes immediately start replicating shards among themselves. The value is a matter of preference, but if you set it equal to the total node count, the early-recovery safeguard becomes useless, and once the master is down the cluster may be unable to serve requests until it is repaired.

  • action.destructive_requires_name: true
    Requires an explicit index name when deleting indices (wildcard deletes are refused). The default in 6.x is false.
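The quorum rule given for discovery.zen.minimum_master_nodes above can be sketched as a one-line shell helper (the `quorum` function name is illustrative, not part of any tool):

```shell
# quorum ("majority") of master-eligible nodes: (N / 2) + 1, with integer division
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3    # 3 master-eligible nodes -> 2, matching this deployment
quorum 10   # 10 master-eligible nodes -> 6
```

With the three data nodes in this layout, the result 2 is exactly the value set in elasticsearch.yml.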

Only part of the configuration is covered here; for the full reference see the official documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/important-configuration-changes.html
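Pulling the settings above together, node2004's config/elasticsearch.yml would look like this (all values taken from this section):

```yaml
cluster.name: myes
node.name: ${HOSTNAME}
node.master: true
node.data: false
node.ingest: true
path.data: /usr/local/pkg/elasticsearch/data
path.logs: /usr/local/pkg/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 192.168.20.4
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.20.4:9300", "192.168.20.5:9300", "192.168.20.6:9300"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 2
action.destructive_requires_name: true
```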

Disabling swap:

  • bootstrap.memory_lock: true
    When true, the process tries to lock its address space into RAM, preventing any Elasticsearch memory from being swapped out. Officially: swapping is very bad for performance and node stability and should be avoided at all costs; it can cause garbage collections to last minutes instead of milliseconds, and can make nodes respond slowly or even disconnect from the cluster.

    Note: if the process tries to lock more memory than is available, the JVM or the shell session may exit!

Adjusting system limits

File descriptors: edit /etc/security/limits.conf to set persistent limits for a specific user. To raise the maximum number of open files to 65535 for the user Elasticsearch runs as (es), add the following before starting the service:

~]# echo "es    -   nofile  65535" >> /etc/security/limits.conf

Virtual memory: Elasticsearch uses an mmapfs directory by default to store its indices. The default operating-system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions.
Add the following configuration; it takes effect after a reboot:

~]# echo "vm.max_map_count = 262144" >> /usr/lib/sysctl.d/50-default.conf

Thread count: Elasticsearch uses a number of thread pools for different types of operations. It is important that it can create new threads whenever needed. Make sure the user running Elasticsearch can create at least 4096 threads. Add the following before starting the service:

~]# echo "es    -   nproc   4096" >> /etc/security/limits.conf
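A quick sanity check after logging in again as es (limits.conf is applied by PAM at login). The `check` helper below is illustrative, not part of any tool; it compares a live limit against the values this section configures:

```shell
# check NAME CURRENT REQUIRED -> reports whether CURRENT meets REQUIRED
check() {
  if [ "$2" -ge "$3" ]; then
    echo "$1 OK ($2)"
  else
    echo "$1 too low ($2 < $3)"
  fi
}

# Run in a fresh login shell as the es user:
check nofile "$(ulimit -n)" 65535
check nproc  "$(ulimit -u)" 4096
```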

Service startup script

[Unit]
Description=Elasticsearch
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
PrivateTmp=true
Environment=ES_HOME=/usr/local/pkg/elasticsearch
Environment=ES_PATH_CONF=/usr/local/pkg/elasticsearch/config
Environment=PID_DIR=/var/run/elasticsearch

WorkingDirectory=/usr/local/pkg/elasticsearch

User=es
Group=es

ExecStart=/usr/local/pkg/elasticsearch/bin/elasticsearch 
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of processes
LimitNPROC=4096

# Specifies the maximum size of virtual memory
LimitAS=infinity

# Specifies the maximum file size
LimitFSIZE=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process

# Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
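Note that with bootstrap.memory_lock: true in elasticsearch.yml, the unit must also be allowed to lock memory, or the node fails the memory-lock bootstrap check at startup. One extra line in the [Service] section covers it:

```ini
# Allow the JVM to lock its heap into RAM (pairs with bootstrap.memory_lock: true)
LimitMEMLOCK=infinity
```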

To dedicate this JDK 8 to Elasticsearch alone, add the environment variables to its startup script:

~]# vim /usr/local/pkg/elasticsearch/bin/elasticsearch
#!/bin/bash

# CONTROLLING STARTUP:
#
# This script relies on a few environment variables to determine startup
# behavior, those variables are:
#
#   ES_PATH_CONF -- Path to config directory
#   ES_JAVA_OPTS -- External Java Opts on top of the defaults set
#
# Optionally, exact memory values can be set using the `ES_JAVA_OPTS`. Note that
# the Xms and Xmx lines in the JVM options file must be commented out. Example
# values are "512m", and "10g".
#
#   ES_JAVA_OPTS="-Xms8g -Xmx8g" ./bin/elasticsearch
ES_PATH_CONF=/usr/local/pkg/elasticsearch/config
export JAVA_HOME=/usr/local/pkg/jdk8
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
# Adding the JDK environment variables near the top of the startup script pins this service to its dedicated JDK.

Start the service:

~]# systemctl daemon-reload 
~]# systemctl start elasticsearch.service
~]# ps -ef | grep ela      // check the processes
root     10806 10789  0 09:54 pts/1    00:00:00 vim bin/elasticsearch
es       11635     1  3 10:35 ?        00:00:26 /usr/local/pkg/jdk8/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.hTwfw2vY -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/pkg/elasticsearch -Des.path.conf=/usr/local/pkg/elasticsearch/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/pkg/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch
es       11688 11635  0 10:35 ?        00:00:00 /usr/local/pkg/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root     12281 12141  0 10:46 pts/4    00:00:00 grep --color=auto ela

Check the log output:
~]# tail -f /usr/local/pkg/elasticsearch/logs/myes.log
[2018-12-22T10:19:54,669][INFO ][o.e.e.NodeEnvironment    ] [node2004] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [43.1gb], net total_space [45gb], types [rootfs]
[2018-12-22T10:19:54,672][INFO ][o.e.e.NodeEnvironment    ] [node2004] heap size [1007.3mb], compressed ordinary object pointers [true]
[2018-12-22T10:19:54,673][INFO ][o.e.n.Node               ] [node2004] node name [node2004], node ID [TN_N06ovT8ufWPkOYR0Esg]
[2018-12-22T10:19:54,674][INFO ][o.e.n.Node               ] [node2004] version[6.5.2], pid[10935], build[default/tar/9434bed/2018-11-29T23:58:20.891072Z], OS[Linux/3.10.0-862.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_191/25.191-b12]
[2018-12-22T10:19:54,674][INFO ][o.e.n.Node               ] [node2004] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.z1Ja46qO, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/local/pkg/elasticsearch, -Des.path.conf=/usr/local/pkg/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2018-12-22T10:19:56,833][INFO ][o.e.p.PluginsService     ] [node2004] loaded module [aggs-matrix-stats]
...
[2018-12-22T10:20:03,620][INFO ][o.e.n.Node               ] [node2004] initialized
[2018-12-22T10:20:03,620][INFO ][o.e.n.Node               ] [node2004] starting ...
[2018-12-22T10:20:03,762][INFO ][o.e.t.TransportService   ] [node2004] publish_address {192.168.20.4:9300}, bound_addresses {192.168.20.4:9300}
[2018-12-22T10:20:03,783][INFO ][o.e.b.BootstrapChecks    ] [node2004] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-12-22T10:20:06,829][WARN ][o.e.d.z.ZenDiscovery     ] [node2004] not enough master nodes discovered during pinging (found [[Candidate{node={node2004}{TN_N06ovT8ufWPkOYR0Esg}{SCIJSvtyTQOp9XOfBQiTsw}{192.168.20.4}{192.168.20.4:9300}{ml.machine_memory=3974492160, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-12-22T10:20:09,830][WARN ][o.e.d.z.ZenDiscovery     ] [node2004] not enough master nodes discovered during pinging (found [[Candidate{node={node2004}{TN_N06ovT8ufWPkOYR0Esg}{SCIJSvtyTQOp9XOfBQiTsw}{192.168.20.4}{192.168.20.4:9300}{ml.machine_memory=3974492160, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
// The other two nodes are not running yet, so discovery finds no other master candidates and the cluster cannot serve requests.

node2005 and node2006:
Configuration:
Change network.host: to each machine's own IP and set both node.master and node.data to true. Keep the rest of the configuration identical, then start the service.
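For example, node2005's elasticsearch.yml differs only in these lines (node2006 is analogous with 192.168.20.6):

```yaml
node.name: ${HOSTNAME}
node.master: true
node.data: true
network.host: 192.168.20.5
```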

Check the log on node2005:

~]# tail -f /usr/local/pkg/elasticsearch/logs/myes.log
[2018-12-23T22:22:52,880][INFO ][o.e.p.PluginsService     ] [node2005] loaded module [x-pack-monitoring]
[2018-12-23T22:22:52,880][INFO ][o.e.p.PluginsService     ] [node2005] loaded module [x-pack-rollup]
[2018-12-23T22:22:52,880][INFO ][o.e.p.PluginsService     ] [node2005] loaded module [x-pack-security]
[2018-12-23T22:22:52,880][INFO ][o.e.p.PluginsService     ] [node2005] loaded module [x-pack-sql]
[2018-12-23T22:22:52,881][INFO ][o.e.p.PluginsService     ] [node2005] loaded module [x-pack-upgrade]
[2018-12-23T22:22:52,881][INFO ][o.e.p.PluginsService     ] [node2005] loaded module [x-pack-watcher]
[2018-12-23T22:22:52,881][INFO ][o.e.p.PluginsService     ] [node2005] no plugins loaded
[2018-12-23T22:22:57,405][INFO ][o.e.x.s.a.s.FileRolesStore] [node2005] parsed [0] roles from file [/usr/local/pkg/elasticsearch/config/roles.yml]
[2018-12-23T22:22:57,875][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [node2005] [controller/3587] [Main.cc@109] controller (64 bit): Version 6.5.2 (Build 767566e25172d6) Copyright (c) 2018 Elasticsearch BV
[2018-12-23T22:22:58,612][DEBUG][o.e.a.ActionModule       ] [node2005] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-12-23T22:22:58,894][INFO ][o.e.d.DiscoveryModule    ] [node2005] using discovery type [zen] and host providers [settings]
[2018-12-23T22:22:59,737][INFO ][o.e.n.Node               ] [node2005] initialized
[2018-12-23T22:22:59,738][INFO ][o.e.n.Node               ] [node2005] starting ...
[2018-12-23T22:22:59,879][INFO ][o.e.t.TransportService   ] [node2005] publish_address {192.168.20.5:9300}, bound_addresses {192.168.20.5:9300}
[2018-12-23T22:22:59,900][INFO ][o.e.b.BootstrapChecks    ] [node2005] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-12-23T22:23:03,259][INFO ][o.e.c.s.ClusterApplierService] [node2005] detected_master {node2006}{2iH5BLMbT3eTi6Tm8ysyNg}{zQkibzjOQk-jbe2cZNOiow}{192.168.20.6}{192.168.20.6:9300}{ml.machine_memory=3974492160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, added {{node2006}{2iH5BLMbT3eTi6Tm8ysyNg}{zQkibzjOQk-jbe2cZNOiow}{192.168.20.6}{192.168.20.6:9300}{ml.machine_memory=3974492160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},{node2004}{TN_N06ovT8ufWPkOYR0Esg}{4RjuzapkTs2Gy5q8bZGIkQ}{192.168.20.4}{192.168.20.4:9300}{ml.machine_memory=3974492160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {node2006}{2iH5BLMbT3eTi6Tm8ysyNg}{zQkibzjOQk-jbe2cZNOiow}{192.168.20.6}{192.168.20.6:9300}{ml.machine_memory=3974492160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} committed version [23]])
// The entry above shows that node2005 discovered node2006 as the master through pinging and established communication with node2004 as well.
[2018-12-23T22:23:03,533][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [node2005] Failed to clear cache for realms [[]]
[2018-12-23T22:23:03,537][INFO ][o.e.x.s.a.TokenService   ] [node2005] refresh keys
[2018-12-23T22:23:03,893][INFO ][o.e.x.s.a.TokenService   ] [node2005] refreshed keys
[2018-12-23T22:23:03,934][INFO ][o.e.l.LicenseService     ] [node2005] license [4c39dc4c-1abb-4b60-bcd8-eed218f217b5] mode [basic] - valid
[2018-12-23T22:23:03,966][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node2005] publish_address {192.168.20.5:9200}, bound_addresses {192.168.20.5:9200}
[2018-12-23T22:23:03,967][INFO ][o.e.n.Node               ] [node2005] started

At this point the Elasticsearch cluster is up. Next, install Kibana, Logstash, and Filebeat.

Installing Kibana

  1. Configure kibana.yml
// unpack
~]# cd /usr/local/pkg/
~]# tar xf kibana-6.5.2-linux-x86_64.tar.gz
~]# mv kibana-6.5.2-linux-x86_64 kibana
~]# cd kibana
~]# vim /etc/profile.d/kibana.sh         // add kibana to PATH
export PATH=/usr/local/pkg/kibana/bin:$PATH
~]# source /etc/profile.d/kibana.sh
// edit
~]# vim config/kibana.yml
...

Important options in the configuration file:

  • server.port: 5601
    The port Kibana listens on. Default: 5601.
  • server.host: "localhost"
    The address the backend server binds to. Default: "localhost".
  • elasticsearch.url: "http://192.168.20.4:9200"
    The URL of the Elasticsearch instance that handles all queries.

    elasticsearch.url accepts only a single URL, so how do you connect to an es cluster? The officially suggested approach is to run an Elasticsearch node on the Kibana host purely for coordination: set node.data: false, node.master: false, and node.ingest: false, keeping the rest of the configuration the same.

  • server.name: "node2003"
    A human-readable display name for this Kibana instance.
  • kibana.index: ".kibana"
    Kibana stores saved searches, visualizations, and dashboards in an Elasticsearch index. If the index does not exist, Kibana creates it.
  • tilemap.url:
    URL of the tile service used by map visualizations. You can substitute your own tile server, e.g. the AutoNavi (Gaode) map URL: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
  • elasticsearch.username: / elasticsearch.password:
    If Elasticsearch has basic authentication enabled, these supply the username and password Kibana uses at startup to maintain its index.
  • server.ssl.enabled: false / server.ssl.certificate: / server.ssl.key:
    Enables SSL for requests from the browser. When set to true, server.ssl.certificate and server.ssl.key must also be set.
  • elasticsearch.ssl.certificate: / elasticsearch.ssl.key:
    Optional settings giving the paths to a PEM-format SSL certificate and key file. Make sure the Elasticsearch backend uses the same key files.
  • elasticsearch.pingTimeout:
    Time to wait for Elasticsearch to respond to pings; used to judge Elasticsearch's health.
  • elasticsearch.requestTimeout:
    Time to wait for responses from Elasticsearch, in milliseconds.
  • elasticsearch.shardTimeout:
    Time to wait for responses from shards, in milliseconds. 0 disables the timeout.
  • elasticsearch.startupTimeout:
    How long Kibana waits for Elasticsearch at startup.
  • elasticsearch.logQueries: false
    Whether to log the queries sent to Elasticsearch.
  • pid.file: /var/run/kibana.pid
    Path of Kibana's PID file.
  • logging.dest: /usr/local/pkg/kibana/logs/kibana.log
    Log destination: stdout for standard output, or a /path/to/xxx.log file.
  • logging.silent: false
    When true, suppresses all log output.
  • logging.quiet: true
    Quiet mode: suppresses all log output except errors.
  • logging.verbose: false
    When true, logs every event, including system usage information and all requests.
  • ops.interval
    Interval, in milliseconds, at which system and process performance metrics are sampled. Minimum: 100.
  • i18n.locale: "zh_CN"
    Switches the interface output to Chinese (not a complete localization; details are on GitHub).
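The coordinating-only Elasticsearch node suggested under elasticsearch.url above would reuse the earlier cluster settings with every role disabled. A sketch of its elasticsearch.yml on the Kibana host (192.168.20.3 in this layout), assuming the same unicast host list:

```yaml
cluster.name: myes
node.name: ${HOSTNAME}
node.master: false
node.data: false
node.ingest: false
network.host: 192.168.20.3
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.20.4:9300", "192.168.20.5:9300", "192.168.20.6:9300"]
```

kibana.yml would then point elasticsearch.url at http://192.168.20.3:9200.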

  2. Service startup script
~]# cat /usr/lib/systemd/system/kibana.service
[Unit]
Description=Kibana
After=network.target remote-fs.target nss-lookup.target


[Service]
Type=simple
PIDFile=/var/run/kibana.pid
User=es
Group=es
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
#EnvironmentFile=-/usr/local/pkg/kibana/config/kibana
ExecStart=/usr/local/pkg/kibana/bin/kibana serve
Restart=always
WorkingDirectory=/usr/local/pkg/kibana

[Install]
WantedBy=multi-user.target
  3. Start the service
kibana]# systemctl daemon-reload 
kibana]# mkdir logs
kibana]# useradd es
kibana]# chown -R es.es /usr/local/pkg/kibana
kibana]# systemctl start kibana.service

// Check the log; no output means the service started normally, because the config suppresses everything but error logs. Consider enabling normal logging while testing.
kibana]# tail -f /usr/local/pkg/kibana/logs/kibana.log
....

Installing Logstash

https://www.elastic.co/guide/en/logstash/6.5/logstash-settings-file.html

~]# vim 

node.name: test
Default: the machine's hostname.

path.data: /usr/local/pkg/logstash/data
Directory where Logstash stores its internal data.

pipeline.id: main
ID of the pipeline.
pipeline.workers: 2
Number of workers that execute the filter and output stages in parallel; defaults to the number of host CPU cores.
pipeline.batch.size: 125
Maximum number of events an individual worker collects before executing filters and outputs.

# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here.  Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.url: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.url: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

input

  • file
file {
    path => ["/var/log/*.log","/xxx/xxx.log"]
    # Input file paths: an array of absolute paths; glob patterns are supported.
    start_position => "beginning"
    # Where to begin reading the file; the default is "end".
    id => "1224"
}

filter
output
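Since the filter and output sections are left empty above, here is a sketch of a complete pipeline file tying this layout together: a beats input receiving from Filebeat on port 5044, an optional grok filter, and an elasticsearch output to the cluster. The filename and the grok pattern are illustrative assumptions, not taken from the original setup:

```conf
# /usr/local/pkg/logstash/conf.d/nginx.conf (hypothetical name)
input {
  beats {
    port => 5044                      # Filebeat on node2003 ships here
  }
}

filter {
  grok {
    # Parse combined-format access log lines into fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.20.4:9200", "192.168.20.5:9200", "192.168.20.6:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}
```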

  1. Add the startup script
[Unit]
Description=logstash
After=network.target remote-fs.target nss-lookup.target


[Service]
Type=simple
PIDFile=/var/run/logstash.pid
User=es
Group=es
ExecStart=/usr/local/pkg/logstash/bin/logstash -f /usr/local/pkg/logstash/conf.d/*.conf
#Restart=always
WorkingDirectory=/usr/local/pkg/logstash

[Install]
WantedBy=multi-user.target

Installing Filebeat

A lightweight shipper that collects data and sends it to storage backends or directly to processing tools (Logstash, Elasticsearch), with a small resource footprint.

~]# cd /usr/local/pkg/
~]# tar xf filebeat-6.5.2-linux-x86_64.tar.gz && mv filebeat-6.5.2-linux-x86_64 filebeat
~]# vim filebeat/filebeat.yml   // only a brief look at the config here; see the official documentation for how to achieve specific setups
- type: log
// input type: log
  # Change to true to enable this input configuration.
  enabled: true
// enable this input
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/access.log
    #- c:\programdata\elasticsearch\logs\*
// paths of the logs to read; glob patterns are supported
....
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["node2007:5044"]
// ship output to Logstash for further processing
....
~]# mkdir filebeat/logs
~]# chown -R es.es /usr/local/pkg/filebeat
  1. Add the startup script
~]# vim /usr/lib/systemd/system/filebeat.service
[Unit]
Description=filebeat
After=network.target remote-fs.target nss-lookup.target


[Service]
Type=simple
PIDFile=/usr/local/pkg/filebeat/filebeat.pid
User=es
Group=es
ExecStart=/usr/local/pkg/filebeat/filebeat -c /usr/local/pkg/filebeat/filebeat.yml
#Restart=always
WorkingDirectory=/usr/local/pkg/filebeat

[Install]
WantedBy=multi-user.target
~]# systemctl daemon-reload
~]# systemctl start filebeat

QA

Inside Logstash, all configuration files in the directory are compiled into a single configuration and then evaluated; because multiple outputs are configured, every event is sent to all of the outputs.
On 6.x you can use multiple pipelines to keep flows separate.
On 5.x the recommendation is to run multiple instances instead.
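On 6.x, the multiple-pipelines feature is driven by config/pipelines.yml, one entry per isolated flow. A sketch, with hypothetical pipeline IDs and config paths:

```yaml
# /usr/local/pkg/logstash/config/pipelines.yml
- pipeline.id: nginx
  path.config: "/usr/local/pkg/logstash/conf.d/nginx.conf"
- pipeline.id: syslog
  path.config: "/usr/local/pkg/logstash/conf.d/syslog.conf"
```

Note that this replaces the `-f` flag: Logstash reads pipelines.yml only when started without `-f`/`-e`, so the ExecStart line above would drop its `-f` argument.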

Reposted from www.cnblogs.com/dance-walter/p/10159002.html