nginx + ELK + redis + grafana

The architecture is simple to explain: nginx + Logstash + a single redis + Logstash + Elasticsearch cluster + grafana + kibana (kibana is not actually used; grafana does the display instead).

1. nginx log format

 

log_format main '$remote_addr [$time_local] "$request" '
                '$request_body $status $body_bytes_sent "$http_referer" "$http_user_agent" '
                '$request_time $upstream_response_time';
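
The format only takes effect after an access_log directive references it and nginx is reloaded. A minimal sketch, assuming nginx lives under /usr/local/nginx (the same prefix the Logstash config below points at):

/usr/local/nginx/sbin/nginx -t          # validate the edited nginx.conf
/usr/local/nginx/sbin/nginx -s reload   # reload workers so the new log_format is used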

A sample log line:

172.16.16.132 [21/Jul/2019:09:56:12 -0400] "GET /favicon.ico HTTP/1.1" - 404 555 "http://172.16.16.74/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36" 0.000 -

2. Elasticsearch cluster setup

Be careful not to run it as root, and some system parameters also need adjusting; if startup fails, check the specific error in the log and search for it.

Only the following parameters need to be modified. Because one of my machines runs two Elasticsearch instances, the seed hosts include both "172.16.16.80:9300" and "172.16.16.80:9301"; note that the third instance's HTTP port is 9201.

Each of the three Elasticsearch instances must set a distinct node.name.

cluster.name: my-application
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["172.16.16.74:9300", "172.16.16.80:9300", "172.16.16.80:9301"]
cluster.initial_master_nodes: ["node-1"]

If a node fails to start or join, delete its data directory and restart; beyond that, investigate based on the specific error. The errors reported are generally that it was run as the root user, or that certain system parameters still need modifying.
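
The parameters in question are usually Elasticsearch's standard bootstrap checks on mmap areas and open files. A sketch of the typical fixes, run as root on each host before starting Elasticsearch as a normal user:

sysctl -w vm.max_map_count=262144                    # mmap limit the bootstrap check requires
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf   # persist it across reboots
ulimit -n 65535                                      # open-file limit for the launching shell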


How to check on the cluster:

Chrome has a browser plugin for this, elasticsearch-head.

Originally the master was node-1; after that I tested the cluster, so the screenshot now looks like this.
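
The same information is available without the plugin via the standard REST endpoints:

curl -s 'http://172.16.16.74:9200/_cluster/health?pretty'   # overall status and node count
curl -s 'http://172.16.16.74:9200/_cat/nodes?v'             # per-node view; * marks the current master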


3. Logstash collects logs into redis

1. Unzip Logstash and create logstash_in.conf in the conf directory:

input {
  file {
    type => "nginx"
    path => "/usr/local/nginx/logs/host.access.log"  # the nginx log file
    start_position => "beginning"
  }
}

 

output {
  stdout { codec => rubydebug }  # also print events to stdout for debugging
  if [type] == "nginx" {
    redis {
      host => "172.16.16.74"
      data_type => "list"
      key => "logstash-service_name"
    }
  }
}

2. Start it in the background: nohup ./logstash -f ../conf/logstash_in.conf &

3. tail -f nohup.out to check for errors.
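
You can also confirm events are actually landing in redis; LLEN is a standard redis command, and the key matches the output block above:

redis-cli -h 172.16.16.74 LLEN logstash-service_name   # the list should grow as nginx logs arrive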

4. Logstash pulls the logs from redis into Elasticsearch

1. A grok pattern is used here to parse the nginx log, plus urldecode to handle URLs containing Chinese; the pattern is a bit fiddly.

This site helps with debugging grok patterns: http://grokdebug.herokuapp.com/


input {
  redis {
    host => "172.16.16.74"
    type => "nginx"
    data_type => "list"
    key => "logstash-service_name"
  }
}


filter {
  grok {
    match => { "message" => "%{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" \- %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\S+)\" \"(?<http_user_agent>(\S+\s+)*\S+)\".* %{BASE16FLOAT:request_time}" }
  }
  urldecode {
    all_fields => true  # decode percent-encoded (e.g. Chinese) characters in every field
  }
  date {
    # the field name must match the grok capture above ("timestamp", not "time")
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
  }
}

 


output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["172.16.16.74:9200", "172.16.16.80:9200", "172.16.16.80:9201"]
    index => "logstash-%{+YYYY.MM.dd}"  # one index per day
  }
}


You can check the nohup.out file and see that grok has parsed the entries.
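
To confirm documents are reaching the cluster, the standard _cat/indices endpoint shows the daily index and its document count:

curl -s 'http://172.16.16.74:9200/_cat/indices/logstash-*?v'   # index name, docs.count, store size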

 

5. Grafana display

1. First, configure the data source.


The error shown here, "No date field named @timestamp found", appears because the data source must be set to the matching Elasticsearch version; a 2.x version was selected, hence the error.
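
If the version is set correctly and the error persists, it is worth confirming that the index mapping really has a @timestamp date field (it will, provided the date filter above matched):

curl -s 'http://172.16.16.74:9200/logstash-*/_mapping?pretty' | grep -A 3 '"@timestamp"'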


Origin: www.cnblogs.com/lc226/p/11223099.html