Logstash Plugins

Original article: http://www.cnblogs.com/linuxboke/p/5689666.html


Input plugins:

file: reads an event stream from the specified files;

It uses FileWatch (a Ruby gem) to watch files for changes.

.sincedb: records the inode, major number, minor number, and position (pos) of every watched file;
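For illustration only, an entry in .sincedb is a single line holding those four fields in that order (the numbers below are made up, not taken from a real host):

264563 0 2049 11437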

Below is a simple example that collects logs:

input {
  file {
    path => ["/var/log/messages"]
    type => "system"
    start_position => "beginning"
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

["/var/log/messages"]中可以包含多个文件[item1, item2,...] start_position => "beginning"表示从第一行开始读

udp: reads messages from the network over the UDP protocol. Its required parameter is port, which sets the port to listen on; host sets the address to listen on.
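As a minimal sketch of those two parameters (the wildcard address and port 514 below are illustrative choices, not from the original article):

input {
  udp {
    host => "0.0.0.0"   # address to listen on; this wildcard value listens on all interfaces
    port => 514         # required: the port to listen on (514 is only an example)
  }
}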

collectd: a performance-monitoring program written in C that runs as a daemon. It collects data on every aspect of system performance and stores the results, and through its network plugin it can send the data collected on the local host to other hosts.

The collectd package lives in the EPEL repository: yum -y install epel-release; yum -y install collectd. collectd's configuration file is /etc/collectd.conf.

In vim /etc/collectd.conf, set a hostname under "Global settings for the daemon", e.g. Hostname "node1".

In the LoadPlugin section, uncomment LoadPlugin df and enable LoadPlugin network.

在<Plugin network> </Plugin> 的下面再定义一段:

<Plugin network>

<Server "192.168.204.135" "25826">

</Server>

</Plugin>

This sends the collected data to host 192.168.204.135, which listens on port 25826.

service collectd start

Logstash is installed on 192.168.204.135; below is a UDP input configuration example:

input {
  udp {
    port => 25826
    codec => collectd {}
    type => "collectd"
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

codec => collectd {} decodes the data sent by collectd with the dedicated collectd codec.

type => "collectd" 类型可以随意取名

logstash -f /etc/logstash/conf.d/udp.conf --configtest
logstash -f /etc/logstash/conf.d/udp.conf

At this point you will start receiving data from collectd.

redis plugin:

Reads data from Redis; both Redis channels and lists are supported.
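A minimal sketch of a redis input (the host, the key name, and the choice of list mode below are assumptions, not from the original article):

input {
  redis {
    host      => "127.0.0.1"   # Redis server address (assumed)
    port      => 6379
    data_type => "list"        # "list" pops entries from a list; "channel" subscribes to a channel
    key       => "logstash"    # name of the list or channel (assumed)
  }
}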

Filter plugins:

Used to apply processing to events before they are sent out through an output.

grok: parses and structures text data; it is currently the tool of choice in Logstash for turning unstructured log data into structured, queryable data.

Typical targets: syslog, Apache, and nginx logs.

Pattern definitions live in: /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns

Syntax:

%{SYNTAX:SEMANTIC}

SYNTAX: the name of a predefined pattern;

SEMANTIC: a custom identifier for the matched text;

For example, for the line 1.1.1.1 GET /index.html 30 0.23:

{ "message" => "%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }

A configuration example, vim groksample.conf:

input {
  stdin {}
}

filter {
  grok {
    match => { "message" => "%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

logstash -f /etc/logstash/conf.d/groksample.conf --configtest

logstash -f /etc/logstash/conf.d/groksample.conf

Typing 1.1.1.1 GET /index.html 30 0.23 produces the following result:

1.1.1.1 GET /index.html 30 0.23
{
    "message" => "1.1.1.1 GET /index.html 30 0.23",
    "@version" => "1",
    "@timestamp" => "2016-07-20T11:55:31.944Z",
    "host" => "centos7",
    "clientip" => "1.1.1.1",
    "method" => "GET",
    "request" => "/index.html",
    "bytes" => "30",
    "duration" => "0.23"
}

Custom grok patterns: grok patterns are written as regular expressions, and their metacharacters differ little from those of other regex-based tools such as awk/sed/grep/pcre.

In practice there is rarely a need to define your own; a sketch of what that looks like follows.
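When no predefined pattern fits, grok also accepts an inline named capture; in this sketch the field name queue_id and the regex are purely illustrative, not from the original article:

filter {
  grok {
    # (?<queue_id>...) defines an ad-hoc pattern inline and stores the match in the field "queue_id"
    match => { "message" => "(?<queue_id>[0-9A-F]{10,11})" }
  }
}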

An example that matches Apache logs, vim apachesample.conf:

input {
  file {
    path => ["/var/log/httpd/access_log"]
    type => "apachelog"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

How to match nginx logs:

Append the following to the end of the /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns file:

#Nginx log
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}

yum -y install epel-release; yum -y install nginx; systemctl start nginx

A configuration example, vim nginxsample.conf:

input {
  file {
    path => ["/var/log/nginx/access.log"]
    type => "nginxlog"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{NGINXACCESS}" }
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

logstash -f /etc/logstash/conf.d/nginxsample.conf
