Installing and Deploying Logstash (ELK 6.6) on Linux

This continues my previous post, https://mp.csdn.net/postedit/89335965, in which a multi-node ElasticSearch cluster was already installed and deployed.

I. Lab Environment (RHEL 7.3)

1. SELinux and firewalld are disabled.

2. Host information:

Host                                               IP
server1 (es node 1), at least 2 GB of RAM          172.25.83.1
server2 (es node 2 and Logstash), at least 2 GB    172.25.83.2
server3 (es node 3), at least 2 GB of RAM          172.25.83.3

II. Installing and Deploying Logstash

1. Download the Logstash package (logstash-6.6.1.rpm) and install it

[root@server2 ~]# ls
logstash-6.6.1.rpm
[root@server2 ~]# rpm -ivh logstash-6.6.1.rpm

2. Tests

Test 1: interactive: stdin in, stdout out

[root@server2 logstash]# pwd
/usr/share/logstash
[root@server2 logstash]# bin/logstash -e 'input { stdin{ } } output { stdout {} }'   # stdin in, stdout out
[INFO ] 2019-04-16 18:17:52.419 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello   # typed input
{
    "@timestamp" => 2019-04-16T10:18:03.042Z,
          "host" => "server2",
       "message" => "hello",
      "@version" => "1"
}
world   # typed input
{
    "@timestamp" => 2019-04-16T10:18:05.763Z,
          "host" => "server2",
       "message" => "world",
      "@version" => "1"
}
# press Ctrl+C to exit

Test 2: interactive: stdin in, output to the elasticsearch hosts

[root@server2 conf.d]# pwd
/etc/logstash/conf.d
[root@server2 conf.d]# ls
[root@server2 conf.d]# vim logstash.conf   # the file name is arbitrary, but it must end in .conf
input {
        stdin {}
}

output {
        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "logstash-%{+YYYY.MM.dd}"
        }
}
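The date suffix in the index name (e.g. %{+YYYY.MM.dd}) is a Joda-time pattern that Logstash evaluates against each event's @timestamp, so events land in a daily index. A rough Python equivalent, using the @timestamp of the "hello" event from Test 1 (strftime codes are the Python counterparts of the Joda ones, not the same syntax):

```python
from datetime import datetime, timezone

# @timestamp of the "hello" event above, in UTC.
ts = datetime(2019, 4, 16, 10, 18, 3, tzinfo=timezone.utc)

# Joda "YYYY.MM.dd" roughly corresponds to strftime "%Y.%m.%d".
index = "logstash-" + ts.strftime("%Y.%m.%d")
print(index)   # -> logstash-2019.04.16
```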


[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf 
[INFO ] 2019-04-16 18:28:24.635 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello world   # typed input
您好,世界   # typed input ("hello, world" in Chinese)
# press Ctrl+C to exit

Refresh the web UI; a new logstash-... index has appeared.

Click "Data Browse -> logstash-..." to view the documents stored in the index (the "hello world" and "您好,世界" lines entered above).

Test 3: non-interactive: file input (server2's elasticsearch log), output to stdout and to the elasticsearch hosts

[root@server2 conf.d]# pwd
/etc/logstash/conf.d
[root@server2 conf.d]# vim es.conf 
input {
        file {
                path => "/var/log/elasticsearch/my-es.log"
                start_position => "beginning"
        }
}

output {
        stdout {}

        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "es-%{+YYYY.MM.dd}"
        }
}

[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf
[INFO ] 2019-04-16 19:54:41.495 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
{
    "@timestamp" => 2019-04-16T11:54:41.921Z,
      "@version" => "1",
          "host" => "server2",
          "path" => "/var/log/elasticsearch/my-es.log",
       "message" => "[2019-04-16T16:58:02,422][INFO ][o.e.e.NodeEnvironment    ] [server2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.6gb], net total_space [3.4gb], types [rootfs]"
}
... more output of the same kind
# press Ctrl+C to exit

Refresh the web UI; a new es-... index has appeared.

Click "Data Browse -> es-..." to view the documents stored in the index (the log of server2's my-es cluster).

Next, a test: if an index is accidentally deleted in the browser, how can it be restored?

1. First, delete the browser-generated es-... index:

Click "Delete".

As prompted, type the confirmation word "删除".

Click "OK".

Click "OK" again.

The es-... index has now been deleted successfully.

2. Run logstash again to regenerate the es-... index and see whether it is created.

[root@server2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf   # after logstash starts, no log output appears at all
[INFO ] 2019-04-17 16:06:28.264 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
# press Ctrl+C to exit
  • No corresponding es-... index appears in the browser either.

Why is that? Because the contents of /var/log/elasticsearch/my-es.log have not changed, logstash does not re-read the file, and no new es-... index is created. How, then, do we restore the original index?

On inspection: whenever the input section of a .conf file uses the file plugin, running logstash records its read position for that file in a hidden "sincedb" file under /usr/share/logstash/data/plugins/inputs/file.

3. Delete the hidden file recorded for /var/log/elasticsearch and rerun logstash to restore the original index.

[root@server2 file]# pwd
/usr/share/logstash/data/plugins/inputs/file
[root@server2 file]# ls
[root@server2 file]# l.
.  ..  .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5
[root@server2 file]# cat .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5 
920358 0 64768 31348 1555492268.0859401 /var/log/elasticsearch/my-es.log
[root@server2 file]# ls -i /var/log/elasticsearch/my-es.log
920358 /var/log/elasticsearch/my-es.log
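The sincedb record is worth decoding: its columns are the tracked file's inode (which is why it matches the `ls -i` output above), the device major and minor numbers, the byte offset already read, the timestamp of last activity, and the watched path. A small sketch that parses the record shown above (the field labels are my own, not an official schema):

```python
# Parse one sincedb record into labeled fields (informal labels).
def parse_sincedb(line):
    parts = line.split()
    return {
        "inode": int(parts[0]),          # matches `ls -i` on the tracked file
        "dev_major": int(parts[1]),      # device major number
        "dev_minor": int(parts[2]),      # device minor number
        "position": int(parts[3]),       # bytes of the file already read
        "last_active": float(parts[4]),  # unix timestamp of last activity
        "path": parts[5],                # the watched file
    }

record = parse_sincedb(
    "920358 0 64768 31348 1555492268.0859401 /var/log/elasticsearch/my-es.log"
)
print(record["inode"], record["position"])   # -> 920358 31348
```

Deleting this file makes logstash forget the stored offset, which is exactly why removing it forces a full re-read from the beginning.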


[root@server2 file]# rm -rf .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5 
[root@server2 file]# l.
.  ..



[root@server2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf
[INFO ] 2019-04-17 16:19:00.986 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
{
      "@version" => "1",
       "message" => "[2019-04-17T15:43:53,850][INFO ][o.e.e.NodeEnvironment    ] [server2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.2gb], net total_space [3.4gb], types [rootfs]",
          "host" => "server2",
          "path" => "/var/log/elasticsearch/my-es.log",
    "@timestamp" => 2019-04-17T08:19:01.800Z
}
... more output of the same kind
# press Ctrl+C to exit

Refresh the web UI; a new es-... index has appeared.

Click "Data Browse -> es-..." to view the documents stored in the index (the log of server2's my-es cluster).

The content matches what was stored in the deleted es-... index, so the recovery succeeded.

Test 4: non-interactive: syslog input (everything server1 logs after logstash starts on server2), output to stdout and to the elasticsearch hosts

# Configure server1
[root@server1 ~]# vim /etc/rsyslog.conf
 93 *.*     @@172.25.83.2:514   # send every facility and priority of server1's logs to port 514 on 172.25.83.2; @@ means TCP, a single @ means UDP
[root@server1 ~]# systemctl restart rsyslog.service   # restart rsyslog after editing the config



# Configure server2
[root@server2 conf.d]# pwd
/etc/logstash/conf.d
[root@server2 conf.d]# vim syslog.conf
input {
        syslog {
                port => 514
        }
}

output {
        stdout {}

        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "syslog-%{+YYYY.MM.dd}"
        }
}


[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf
[INFO ] 2019-04-16 19:54:41.495 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
# no output yet: server1 has not generated any log entries since logstash started on server2
## do not exit the logstash terminal on server2 with Ctrl+C
## open a second server2 terminal and check whether port 514 is now open
[root@server2 ~]# netstat -antulpe | grep :514
tcp6       0      0 :::514                  :::*                    LISTEN      0          26221      2868/java           
udp        0      0 0.0.0.0:514             0.0.0.0:*                           0          26220      2868/java 
## now generate log entries manually on server1
[root@server1 ~]# logger hello
[root@server1 ~]# logger world
## the server2 terminal now shows log output
{
           "message" => "hello\n",
        "@timestamp" => 2019-04-17T08:42:18.000Z,
          "priority" => 13,
          "@version" => "1",
    "severity_label" => "Notice",
          "facility" => 1,
         "logsource" => "server1",
    "facility_label" => "user-level",
           "program" => "root",
         "timestamp" => "Apr 17 16:42:18",
              "host" => "172.25.83.1",
          "severity" => 5
}
{
           "message" => "world\n",
        "@timestamp" => 2019-04-17T08:42:21.000Z,
          "priority" => 13,
          "@version" => "1",
    "severity_label" => "Notice",
          "facility" => 1,
         "logsource" => "server1",
    "facility_label" => "user-level",
           "program" => "root",
         "timestamp" => "Apr 17 16:42:21",
              "host" => "172.25.83.1",
          "severity" => 5
}
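The priority, facility, and severity fields in these events are related: syslog encodes priority as facility * 8 + severity. A quick check against the events above, where priority 13 decodes to facility 1 (user-level) and severity 5 (Notice):

```python
# Decode a syslog priority value (RFC 3164/5424: pri = facility * 8 + severity).
SEVERITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Informational", "Debug"]

def decode_priority(pri):
    facility, severity = divmod(pri, 8)
    return facility, severity, SEVERITIES[severity]

print(decode_priority(13))   # -> (1, 5, 'Notice')
```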

Refresh the web UI; a new syslog-... index has appeared.

Click "Data Browse -> syslog-..." to view the documents stored in the index (server1's log entries "hello" and "world").

Test 5: interactive: stdin in (multiple lines), stdout out (the input lines merged into a single event)

[root@server2 conf.d]# pwd
/etc/logstash/conf.d
[root@server2 conf.d]# vim test.conf   # this config merges lines upward until an "EOF" line is seen, i.e. everything before the EOF is emitted as one event
input {
        stdin {
                codec => multiline {
                        pattern => "EOF"
                        negate => true
                        what => "previous"
                }
        }
}

output {
        stdout {}
}


[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
[INFO ] 2019-04-17 17:00:43.729 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
1   # typed input
2   # typed input
3   # typed input
4   # typed input
5   # typed input
EOF
{
    "@timestamp" => 2019-04-17T09:01:01.114Z,
          "host" => "server2",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
       "message" => "1\n2\n3\n4\n5"
}
# press Ctrl+C to exit
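The codec's behaviour can be mimicked in a few lines: with negate => true and what => "previous", any line that does not match the pattern is appended to the previous event, and a matching line flushes the buffer. A rough sketch (my own simplification, not Logstash's actual implementation; Logstash would hold the final "EOF" buffered for the next event, while this sketch flushes it at end of input):

```python
import re

def multiline_merge(lines, pattern, negate=True):
    # what => "previous": a line whose (negated) match is true joins the
    # previous event; any other line flushes the buffer and starts a new one.
    events, buf = [], []
    for line in lines:
        joins_previous = (re.search(pattern, line) is None) == negate
        if joins_previous and buf:
            buf.append(line)
        else:
            if buf:
                events.append("\n".join(buf))
            buf = [line]
    if buf:
        events.append("\n".join(buf))
    return events

print(multiline_merge(["1", "2", "3", "4", "5", "EOF"], pattern="EOF"))
```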

Test 6: non-interactive: file input (server2's elasticsearch log), output to stdout and to the elasticsearch hosts, merging multi-line entries into one event according to a set condition

Looking at the es-... index generated in Test 3, some content that belongs to a single log entry was split across several events, which is clearly wrong; see the stack-trace ("at ...") lines in the screenshot below.

Next we delete the previously generated es-... index, rewrite es.conf so that each log entry stays in one event, and rerun logstash to generate a new es-... index.

1. Delete the previously generated es-... index

Deleting it in the web UI was shown earlier and is not repeated here; the screenshot shows the state after the es-... index has been deleted.

Then delete the generated hidden sincedb file:

[root@server2 file]# pwd
/usr/share/logstash/data/plugins/inputs/file
[root@server2 file]# l.
.  ..  .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5
[root@server2 file]# cat .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5 
920358 0 64768 31348 1555492268.0859401 /var/log/elasticsearch/my-es.log
[root@server2 file]# rm -rf .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5 
[root@server2 file]# l.
.  ..

2. Write the new es.conf

input {
        file {
                path => "/var/log/elasticsearch/my-es.log"
                start_position => "beginning"
                codec => multiline {
                codec => multiline {
                        pattern => "^\["   # lines that do not begin with "[" are merged into the previous event
                        negate => true
                        what => "previous"
                }

        }
}

output {
        stdout {}

        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "es-%{+YYYY.MM.dd}"
        }
}



[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf 
[INFO ] 2019-04-17 18:12:39.092 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
... more output of the same kind
# press Ctrl+C to exit

Refresh the web UI; a new es-... index has appeared.

Click "Data Browse -> es-..." to view the documents stored in the index.

Click the third row to see its details.

As the screenshot shows, the stack-trace ("at ...") content that previously spilled across several events is now emitted as a single line.

Test 7: interactive: stdin in, filtered, with the filtered result written to stdout

[root@server2 conf.d]# pwd
/etc/logstash/conf.d
[root@server2 conf.d]# vim separate.conf
input {
        stdin {}
}

filter {
        grok {
                match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
        }
}

output {
        stdout {}
}

[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/separate.conf 
[INFO ] 2019-04-17 18:33:44.548 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
55.3.244.1 GET /index.html 15824 0.043   # typed input
{
    "@timestamp" => 2019-04-17T10:35:14.998Z,
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
        "client" => "55.3.244.1",
      "@version" => "1",
         "bytes" => "15824",
      "duration" => "0.043",
          "host" => "server2",
       "request" => "/index.html",
        "method" => "GET"
}

From the output we can see that the input line has been parsed into clearly labeled fields, which is exactly the result we wanted.
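Under the hood, grok patterns such as %{IP:client} expand into named regular expressions. A rough Python equivalent of the pattern above (these regexes are simplified stand-ins for %{IP}, %{WORD}, %{URIPATHPARAM}, and %{NUMBER}, not the real grok definitions):

```python
import re

# Simplified stand-ins for the grok patterns used in separate.conf.
LOG_RE = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "   # ~ %{IP:client}
    r"(?P<method>\w+) "                       # ~ %{WORD:method}
    r"(?P<request>\S+) "                      # ~ %{URIPATHPARAM:request}
    r"(?P<bytes>\d+) "                        # ~ %{NUMBER:bytes}
    r"(?P<duration>\d+(?:\.\d+)?)"            # ~ %{NUMBER:duration}
)

m = LOG_RE.match("55.3.244.1 GET /index.html 15824 0.043")
print(m.groupdict())
```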

Test 8: non-interactive: file input (server2's httpd access log), filtered, with the filtered result written to the elasticsearch hosts

From the index contents above, we can see these log lines have not been processed, so they do not convey information at a glance; they need to be filtered.

[root@server2 ~]# yum install httpd -y
[root@server2 ~]# vim /var/www/html/index.html   # edit httpd's default page
www.xin.com
[root@server2 ~]# systemctl start httpd   # start httpd




[root@foundation83 images]# ab -c 1 -n 100 http://172.25.83.2/index.html   # from the host machine, request 172.25.83.2's default page (index.html): 100 requests, concurrency 1
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 172.25.83.2 (be patient).....done


Server Software:        Apache/2.4.6
Server Hostname:        172.25.83.2
Server Port:            80

Document Path:          /index.html
Document Length:        12 bytes

Concurrency Level:      1
Time taken for tests:   0.031 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      28900 bytes
HTML transferred:       1200 bytes
Requests per second:    3207.08 [#/sec] (mean)
Time per request:       0.312 [ms] (mean)
Time per request:       0.312 [ms] (mean, across all concurrent requests)
Transfer rate:          905.12 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     0    0   0.2      0       1
Waiting:        0    0   0.1      0       1
Total:          0    0   0.2      0       1

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      1
  98%      1
  99%      1
 100%      1 (longest request)
[root@server2 ~]# ll -d /var/log/httpd/access_log 
-rw-r--r-- 1 root root 10200 Apr 17 21:30 /var/log/httpd/access_log
[root@server2 ~]# chmod 755 /var/log/httpd/access_log   # make httpd's log file (/var/log/httpd/access_log) readable by other users (mainly the elasticsearch user here)
[root@server2 ~]# ll -d /var/log/httpd/access_log 
-rwxr-xr-x 1 root root 10200 Apr 17 21:30 /var/log/httpd/access_log
[root@server2 ~]# cat /var/log/httpd/access_log   # only an excerpt is shown
172.25.83.83 - - [17/Apr/2019:21:30:38 +0800] "GET /index.html HTTP/1.0" 200 12 "-" "ApacheBench/2.3"
172.25.83.83 - - [17/Apr/2019:21:30:38 +0800] "GET /index.html HTTP/1.0" 200 12 "-" "ApacheBench/2.3"
172.25.83.83 - - [17/Apr/2019:21:30:38 +0800] "GET /index.html HTTP/1.0" 200 12 "-" "ApacheBench/2.3"
172.25.83.83 - - [17/Apr/2019:21:30:38 +0800] "GET /index.html HTTP/1.0" 200 12 "-" "ApacheBench/2.3"
[root@server2 conf.d]# pwd
/etc/logstash/conf.d
[root@server2 conf.d]# vim http.conf
input {
        file {
                path => "/var/log/httpd/access_log"
                start_position => "beginning"
        }
}

filter {
        grok {
                match => { "message" => "%{HTTPD_COMBINEDLOG}" }   # a pattern defined in the bundled grok templates
        }
}

output {
        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "http-%{+YYYY.MM.dd}"
        }
}


[root@server2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http.conf
[INFO ] 2019-04-17 21:42:25.421 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
# press Ctrl+C to exit
  • The HTTPD_COMBINEDLOG pattern comes from the file below:
[root@server2 patterns]# pwd
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns
[root@server2 patterns]# ls
aws     bro        grok-patterns  java          maven                 mongodb     rails  squid
bacula  exim       haproxy        junos         mcollective           nagios      redis
bind    firewalls  httpd          linux-syslog  mcollective-patterns  postgresql  ruby


[root@server2 patterns]# vim httpd
HTTPDUSER %{EMAILADDRESS}|%{USER}
HTTPDERROR_DATE %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}

# Log formats
HTTPD_COMMONLOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
HTTPD_COMBINEDLOG %{HTTPD_COMMONLOG} %{QS:referrer} %{QS:agent}   # this is the line that defines HTTPD_COMBINEDLOG

# Error logs
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:message}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel}\] \[pid %{POSINT:pid}(:tid %{NUMBER:tid})?\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_message}:)?( \[client %{IPORHOST:clientip}:%{POSINT:clientport}\])?( %{DATA:errorcode}:)? %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}

# Deprecated
COMMONAPACHELOG %{HTTPD_COMMONLOG}
COMBINEDAPACHELOG %{HTTPD_COMBINEDLOG}

Refresh the web UI; a new http-... index has appeared.

Click "Data Browse -> http-..." to view the documents stored in the index.

Click any row to see its details.

From the output we can see that, after filtering, each log line has been broken into clearly labeled fields, which is exactly the result we wanted.
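The same idea applies to HTTPD_COMBINEDLOG: it is just a large named regex over the combined log format. A simplified Python version, run against one of the access_log lines shown earlier (the regex is an approximation of the real pattern, not a copy of it):

```python
import re

# Approximation of grok's HTTPD_COMBINEDLOG (combined log format).
COMBINED_RE = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('172.25.83.83 - - [17/Apr/2019:21:30:38 +0800] '
        '"GET /index.html HTTP/1.0" 200 12 "-" "ApacheBench/2.3"')
fields = COMBINED_RE.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])   # -> 172.25.83.83 GET 200
```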

Test 9: non-interactive: file input (server2's nginx access log), filtered, with the filtered result written to the elasticsearch hosts

This is written almost exactly like http.conf in the previous test; just change path to /usr/local/nginx/logs/access.log (where /usr/local/nginx is the directory nginx was installed into). This works because nginx's access log /usr/local/nginx/logs/access.log uses exactly the same format as httpd's /var/log/httpd/access_log. The config:

[root@server2 conf.d]# pwd  
/etc/logstash/conf.d
[root@server2 conf.d]# vim nginx.conf
input {
        file {
                path => "/usr/local/nginx/logs/access.log"
                start_position => "beginning"
        }
}

filter {
        grok {
                match => { "message" => "%{HTTPD_COMBINEDLOG}" }   # a pattern defined in the bundled grok templates
        }
}

output {
        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "http-%{+YYYY.MM.dd}"
        }
}

However, when nginx is used as a load balancer or reverse proxy, the logs it produces are no longer identical to httpd's: an extra quoted X-Forwarded-For field is appended to each line. What should we do then?

Option 1: add a little to the .conf file edited above.

[root@server2 conf.d]# pwd  
/etc/logstash/conf.d
[root@server2 conf.d]# vim nginx.conf
input {
        file {
                path => "/usr/local/nginx/logs/access.log"
                start_position => "beginning"
        }
}

filter {
        grok {
                match => { "message" => "%{HTTPD_COMBINEDLOG} %{QS:x_forwarded_for}" }   # %{QS:x_forwarded_for} is the newly added part
        }
}

output {
        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "http-%{+YYYY.MM.dd}"
        }
}

Option 2: add one line to the pattern file /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/httpd defining a new pattern, then replace HTTPD_COMBINEDLOG in the .conf file with the newly defined pattern.

[root@server2 patterns]# pwd
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns
[root@server2 patterns]# vim httpd
HTTPDUSER %{EMAILADDRESS}|%{USER}
HTTPDERROR_DATE %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}

# Log formats
HTTPD_COMMONLOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
HTTPD_COMBINEDLOG %{HTTPD_COMMONLOG} %{QS:referrer} %{QS:agent}

NGINXACCESSLOG %{HTTPD_COMBINEDLOG} %{QS:x_forwarded_for}   # the newly added line

# Error logs
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:message}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel}\] \[pid %{POSINT:pid}(:tid %{NUMBER:tid})?\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_message}:)?( \[client %{IPORHOST:clientip}:%{POSINT:clientport}\])?( %{DATA:errorcode}:)? %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}

# Deprecated
COMMONAPACHELOG %{HTTPD_COMMONLOG}
COMBINEDAPACHELOG %{HTTPD_COMBINEDLOG}




[root@server2 conf.d]# pwd  
/etc/logstash/conf.d
[root@server2 conf.d]# vim nginx.conf
input {
        file {
                path => "/usr/local/nginx/logs/access.log"
                start_position => "beginning"
        }
}

filter {
        grok {
                match => { "message" => "%{NGINXACCESSLOG}" }   # NGINXACCESSLOG is the modified part
        }
}

output {
        elasticsearch {
                hosts => "172.25.83.1:9200"
                index => "http-%{+YYYY.MM.dd}"
        }
}
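To see why the extra quoted field matters, here is the simplified combined-log parser from before, extended with one more quoted field at the end, run on a hypothetical proxied nginx log line (both the log line and the x_forwarded_for field name are made up for illustration):

```python
import re

# Approximation of %{HTTPD_COMBINEDLOG} %{QS:x_forwarded_for}:
# the combined log format plus one extra quoted field at the end.
PROXIED_RE = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" '
    r'(?P<response>\d+) (?:\d+|-) "[^"]*" "[^"]*" '
    r'"(?P<x_forwarded_for>[^"]*)"'   # the newly added quoted field
)

# Hypothetical access-log line from nginx running as a reverse proxy.
line = ('172.25.83.2 - - [17/Apr/2019:22:10:05 +0800] '
        '"GET /index.html HTTP/1.1" 200 12 "-" "curl/7.29.0" "172.25.83.83"')
m = PROXIED_RE.match(line)
print(m.group("x_forwarded_for"))   # -> 172.25.83.83
```

Note that grok field names may not contain spaces, which is why the field is written x_forwarded_for rather than "x forward for".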

Reposted from blog.csdn.net/qq_42303254/article/details/89338641