12. Using ELK (2)

12.8 Collecting logs:

Because logstash is installed on the slave node, the logs collected here are mainly the service logs on that node.

1. Collecting system logs:

(1) Configuration file:

vim /etc/logstash/conf.d/system-log.conf

input {
  file {
    path => "/var/log/messages"
    type => "system_log"
    start_position => "beginning"
    stat_interval => "2"
  }
  file {
    path => "/var/log/lastlog"
    type => "system_last_log"
    start_position => "beginning"
    stat_interval => "2"
  }
}

output {
  if [type] == "system_log" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "logstash-system_log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system_last_log" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "logstash-system_last_log-%{+YYYY.MM.dd}"
    }
  }
}

1) Parameter notes:

A. The input file plugin:

a. path: the path of the log file to read;

b. type: a user-defined log type label, used later to route events in the output section;

c. start_position: beginning or end. Chooses where logstash initially starts reading the file. The default treats the file as a live stream and therefore starts at the end; set it to beginning when importing old data. Once the read position of a file has been recorded, however, this setting no longer applies and reading resumes from the recorded position (see the sketch after this list for one way to reset it);

d. stat_interval: how often the file is polled for changes; the default is 1 second. A larger interval means fewer system calls, but a longer delay before new log lines are detected;

B. The output elasticsearch plugin:

a. hosts: the address of the elasticsearch cluster; any single node's address works, because the cluster's data is shared;

b. index: the name of the index to write to;

C. All parameters are described in the official documentation: https://www.elastic.co/guide/en/logstash/current/index.html
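A note on resetting start_position: the recorded read position lives in logstash's sincedb files, so removing them forces a re-read from the beginning. A minimal sketch, assuming the package default path.data of /var/lib/logstash (the sincedb location differs between logstash versions):

systemctl stop logstash
ls -a /var/lib/logstash/plugins/inputs/file/        # the hidden .sincedb_* files record per-file offsets
rm -f /var/lib/logstash/plugins/inputs/file/.sincedb_*
systemctl start logstash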

(2) Validate the configuration file:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system-log.conf -t

Configuration OK

(3) Adjust the log file permissions:

chmod 755 /var/log/

chmod 644 /var/log/messages

chmod 644 /var/log/lastlog
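These permissions matter because the logstash service runs as the unprivileged logstash user. A quick check that this user can actually read the files (a sketch):

sudo -u logstash head -1 /var/log/messages    # should print a log line, not "Permission denied"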

(4) Restart the logstash service:

systemctl restart logstash

tailf /var/log/logstash/logstash-plain.log

[2019-05-10T20:12:37,900][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

(5) Check that the index has been added to the cluster:

1) View it in the elasticsearch-head plugin:

2) View it in kibana:

3) Tip: if no content shows up, the log files themselves may be empty; manually append a few lines to the corresponding log file and check again.
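One easy way to append such test lines to /var/log/messages is the standard logger command, which writes through syslog:

logger "elk pipeline test $(date)"    # appends a line to /var/log/messages via rsyslog
tail -1 /var/log/messages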

(6) Filter the current index of the elasticsearch cluster in kibana:

1) through 3): screenshots from the original post, omitted.

4) In the same way, the logstash-system_last_log-2019.05.11 index can be filtered in kibana. Note: elasticsearch handles plain text well; special files such as binaries (for example, /var/log/lastlog is a binary database) can cause errors.
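The new indices can also be confirmed from the command line with elasticsearch's standard _cat API, for example:

curl -s 'http://172.16.1.90:9200/_cat/indices?v' | grep system    # shows index name, document count and size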

2. Collecting nginx logs:

(1) Convert the nginx log to JSON:

1) Install nginx:

yum install nginx

2) Configuration file:

vim /etc/nginx/nginx.conf    # edit the http block of the nginx configuration file as follows;

…………………………

http {
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log /var/log/nginx/access.log main;

    log_format access_json '{"@timestamp":"$time_iso8601",'
                           '"server_addr":"$server_addr",'
                           '"remote_addr":"$remote_addr",'
                           '"body_bytes_sent":"$body_bytes_sent",'
                           '"request_time":"$request_time",'
                           '"upstream_response_time":"$upstream_response_time",'
                           '"upstream_addr":"$upstream_addr",'
                           '"uri":"$uri",'
                           '"http_referer":"$http_referer",'
                           '"http_user_agent":"$http_user_agent",'
                           '"http_x_forwarded_for":"$http_x_forwarded_for",'
                           '"remote_user":"$remote_user",'
                           '"request":"$request",'
                           '"status":"$status"}';
    access_log /var/log/nginx/access.log access_json;
…………………………
}

3) Check the configuration file:

nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

4) Start nginx:

systemctl start nginx.service

5) Access nginx:

yum install httpd-tools -y

ab -n10000 -c100 http://172.16.1.91/index.html
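(-n sets the total number of requests and -c the concurrency, so this issues 10000 requests, 100 at a time; ab ships with the httpd-tools package installed above.)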

6) Verify the log:

tail -1 /var/log/nginx/access.log

{"@timestamp":"2019-05-11T01:21:06+08:00","server_addr":"172.16.1.91","remote_addr":"172.16.1.254",
"body_bytes_sent":0,"request_time":0.000,"upstream_response_time":"-","upstream_addr":"-","uri":"/index.html",
"http_referer":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36",
"http_x_forwarded_for": "-","remote_user": "-","request": "GET / HTTP/1.1","status":"304"}

(2) Configure logstash:

1) Create the log collection configuration:

vim /etc/logstash/conf.d/nginx-access-log.conf

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx_access_log"
    start_position => "beginning"
    stat_interval => "2"
  }
}

output {
  if [type] == "nginx_access_log" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "logstash-nginx_access_log-%{+YYYY.MM.dd}"
    }
  }
}
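Note: with this configuration each JSON line lands in elasticsearch as one string in the event's message field. If the JSON keys should become individual event fields, the file input can additionally be given the standard json codec; a sketch of that optional variant:

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx_access_log"
    start_position => "beginning"
    stat_interval => "2"
    codec => "json"    # parse each line as JSON so its keys become event fields
  }
}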

2) Validate the configuration file:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-access-log.conf -t

Configuration OK

3) Change the log permissions:

chmod 755 /var/log/nginx/

chmod 644 /var/log/nginx/access.log

4) Restart the logstash service:

systemctl restart logstash

tailf /var/log/logstash/logstash-plain.log

[2019-05-11T09:32:01,033][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

5) Access nginx:

ab -n10000 -c100 http://172.16.1.91/index.html

6) Check that the log index was added successfully:

(3) Filter the current index of the elasticsearch cluster in kibana:

1) through 3): screenshots from the original post, omitted.

3. Collecting tomcat logs:

(1) Convert the tomcat log to JSON format:

1) Install tomcat:

yum install tomcat

2) Edit the main configuration file:

vim /etc/tomcat/server.xml    # modify the access-log configuration as follows;

…………………

<!--
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />
-->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
</Host>

…………………
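(Each &quot; in the pattern is the XML entity for a double quote, so at runtime every request is logged as one JSON object beginning {"clientip":"%h","ClientUser":"%l",...}, as the sample output below confirms.)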

3) Add an index.html test file to the site:

mkdir -p /usr/share/tomcat/webapps/test/

echo "tomcat" >/usr/share/tomcat/webapps/test/index.html

4) Start tomcat:

systemctl start tomcat

5) Access the tomcat service:

ab -n1000 -c100 http://172.16.1.91:8080/test/

tail -1 /var/log/tomcat/tomcat_access_log2019-05-11.log

{"clientip":"172.16.1.91","ClientUser":"-","authenticated":"-","AccessTime":"[11/May/2019:21:37:46 +0800]","method":"GET /test/ HTTP/1.0","status":"200","SendBytes":"7","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}

6) Validate the JSON format:
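A quick command-line check (a sketch; python's built-in json.tool module pretty-prints valid JSON and raises an error otherwise):

tail -1 /var/log/tomcat/tomcat_access_log2019-05-11.log | python -m json.tool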

(2) Configure logstash:

1) Create the log collection configuration:

vim /etc/logstash/conf.d/tomcat-access-log.conf

input {
  file {
    path => "/var/log/tomcat/tomcat_access_log*.log"
    type => "tomcat_access_log"
    start_position => "beginning"
    stat_interval => "2"
  }
}

output {
  if [type] == "tomcat_access_log" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "logstash-tomcat_access_log-%{+YYYY.MM.dd}"
    }
  }
}

2) Validate the configuration syntax:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat-access-log.conf -t

Configuration OK

3) Adjust permissions:

chmod 755 /var/log/tomcat/

The tomcat log files are created with mode 644 by default, so they do not need to be changed.

4) Restart logstash:

systemctl restart logstash.service

tail -1 /var/log/logstash/logstash-plain.log

[2019-05-11T23:18:24,701][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

5) Access the tomcat site:

ab -n10000 -c1000 http://172.16.1.91:8080/test/index.html

6) Check that the index was added to the elasticsearch cluster:

(3) Filter the current index of the elasticsearch cluster in kibana:

1) through 3): screenshots from the original post, omitted.

4. Collecting java logs:

(0) Log content and the syntax used:

1) Sample log lines:

tail -3 /data/logs/elk-cluster.log

[2019-05-11T20:07:17,215][INFO ][o.e.n.Node ] [elk-node2] started

[2019-05-11T20:08:09,702][INFO ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][young][62][20] duration [763ms], collections [1]/[1.6s], total [763ms]/[2.9s], memory [225.8mb]->[147.8mb]/[1.9gb], all_pools {[young] [84.5mb]->[4.1mb]/[133.1mb]}{[survivor] [5.2mb]->[6.8mb]/[16.6mb]}{[old] [136mb]->[136.8mb]/[1.8gb]}

[2019-05-11T20:08:09,704][INFO ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][62] overhead, spent [763ms] collecting in the last [1.6s]

Tip: by default one log event is one line; when several physical lines belong to one event (java stack traces are the typical case), they need to be merged with the following syntax.

2) Log-merging syntax:

input {
  stdin {
    codec => multiline {                 # use the multiline codec plugin
      pattern => "pattern, a regexp"     # the regular expression to match against each line
      negate => "true" or "false"        # whether to negate the match result
      what => "previous" or "next"       # merge the line with the previous or the next content
    }
  }
}
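Applied to the java logs above: pattern => "^\[" matches lines that begin with "[" (the timestamp of a new event), negate => "true" inverts the match so that lines not beginning with "[" count as continuations, and what => "previous" appends those continuations to the previous event. The behavior can be tried interactively before editing any config file (a sketch; -e passes the configuration as a string and logstash runs in the foreground):

/usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => "true" what => "previous" } } } output { stdout { codec => rubydebug } }'

Type a line starting with "[", then an indented continuation line, then another "[" line: the first two are emitted together as one merged event.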

(1) Configure the logstash log collection file:

vim /etc/logstash/conf.d/elasticsearch-access-log.conf

input {
  file {
    path => "/data/logs/elk-cluster.log"
    type => "logstash-elasticsearch_access_log"
    start_position => "beginning"
    stat_interval => "2"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
}

output {
  if [type] == "logstash-elasticsearch_access_log" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "logstash-elasticsearch_access_log-%{+YYYY.MM.dd}"
    }
  }
}

(2) Check the configuration file:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elasticsearch-access-log.conf -t

Configuration OK

(3) Adjust the log permissions:

chmod 755 /data/logs/

chmod 644 /data/logs/elk-cluster.log

(4) Restart the logstash service:

systemctl restart logstash

tail -1 /var/log/logstash/logstash-plain.log

[2019-05-12T00:25:43,814][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

(5) Check that the index was added to the elasticsearch cluster:

(6) Filter the current index of the elasticsearch cluster in kibana:

1) through 3): screenshots from the original post, omitted.

5. Collecting logs over TCP:

The tcp input is useful in this scenario: a server A has only a single log to collect, so instead of installing logstash on it, we enable the tcp input on an existing logstash instance, have it listen on a port, and send the log from server A to logstash with nc.

(1) Configure the logstash log collection file:

vim /etc/logstash/conf.d/tcp-log.conf

input {
  tcp {
    port => "5600"
    mode => "server"
    type => "tcp_5600"
  }
}

output {
  if [type] == "tcp_5600" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "logstash-tcp_5600-%{+YYYY.MM.dd}"
    }
  }
}

(2) Check the configuration file:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp-log.conf -t

Configuration OK

(3) Restart the logstash service:

systemctl restart logstash

tail -1 /var/log/logstash/logstash-plain.log

[2019-05-12T12:05:25,367][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

netstat -tunlp | grep 5600

tcp6 0 0 :::5600 :::* LISTEN 4120/java

(4) Send test data over TCP:

[root@controller-node1 ~]# chmod 755 /var/log/

[root@controller-node1 ~]# chmod 644 /var/log/boot.log

[root@controller-node1 ~]# nc 172.16.1.91 5600 </var/log/boot.log
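Single test messages work the same way (a sketch; the second form uses bash's built-in /dev/tcp device, so it needs no nc at all):

echo "tcp test message" | nc 172.16.1.91 5600
echo "tcp test message" > /dev/tcp/172.16.1.91/5600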

(5) Check that the index was added to the elasticsearch cluster:

1) through 2): screenshots from the original post, omitted.

(6) Filter the current index of the elasticsearch cluster in kibana:

1) through 3): screenshots from the original post, omitted.

12.9 Summary:

1. Data in the cluster is stored as shards with replicas (primary/replica copies shared across nodes). kibana and elasticsearch-head are both web front ends for viewing and filtering the cluster's data; they can be installed on any server in the cluster and can access the data of any node, but elasticsearch's cross-origin access feature must be enabled.
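For reference, cross-origin access is enabled in elasticsearch.yml on the cluster nodes with the two standard settings below (restart elasticsearch afterwards):

http.cors.enabled: true
http.cors.allow-origin: "*"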

2. Testing the log collection pipeline without restarting the logstash service:

vim /etc/logstash/conf.d/nginx-access-log.conf

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx_access_log"
    start_position => "beginning"
    stat_interval => "2"
  }
}

output {
  if [type] == "nginx_access_log" {
    stdout {
      codec => rubydebug
    }
    file {
      path => "/tmp/logstash-nginx_access_log-%{+YYYY.MM.dd}.log"
    }
  }
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-access-log.conf
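Logstash then runs in the foreground, printing each new event to the terminal in rubydebug format while the file output appends it under /tmp; stop it with Ctrl-C. From another terminal, one request is enough to see an event come through (a sketch):

curl -s http://172.16.1.91/index.html >/dev/null    # generate one access-log line
cat /tmp/logstash-nginx_access_log-*.log            # the same event, as written by the file output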


Reprinted from www.cnblogs.com/LiuChang-blog/p/12321294.html