ELK Log System - Installation and Deployment

ELK + filebeat + redis architecture

es+kibana:192.168.0.56

logstash:192.168.0.57

redis:192.168.0.34              buffer

filebeat:192.168.0.32/33     collects nginx logs

nginx:192.168.0.32             nginx reverse proxy

1. Installation

Environment: CentOS 6/7. Java must be installed in advance for Logstash and Elasticsearch. Everything is installed via yum/rpm. Elasticsearch needs a large disk for its data; the data path is /var/lib/elasticsearch by default and can be changed.

ELK + filebeat: all components use 6.6.1, the latest version at the time of writing. Download the rpm packages from the official site and install them with yum/rpm. Elasticsearch, Kibana, and filebeat are added to boot auto-start with service/systemctl; Logstash is started at boot through rc.local, as sketched below.
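A minimal sketch of that auto-start setup on CentOS 7 (run each enable command on whichever host runs that service; the Logstash config directory is the rpm default and the rc.local log path is an assumption):

# systemctl enable elasticsearch kibana filebeat

# echo '/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ >/var/log/logstash/rc.log 2>&1 &' >> /etc/rc.d/rc.local

# chmod +x /etc/rc.d/rc.local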

Elasticsearch installation:

Download the rpm package from the official site and install it directly; this is the fastest way. For example:
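A sketch of the download and install, assuming the standard 6.6.1 artifact URL and the default rpm config path:

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.rpm

# rpm -ivh elasticsearch-6.6.1.rpm

# vim /etc/elasticsearch/elasticsearch.yml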

Single-node configuration

cluster.name: tx-elk

node.name: node-1

path.data: /var/lib/elasticsearch

path.logs: /var/log/elasticsearch

network.host: 192.168.0.56

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.0.56"]

discovery.zen.minimum_master_nodes: 1

http.cors.enabled: true

http.cors.allow-origin: "*"
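After saving the single-node config, start the service and confirm the node answers on port 9200:

# systemctl start elasticsearch

# curl http://192.168.0.56:9200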

Cluster configuration: change the following three settings on each node (a multi-node sketch follows them)

node.name: node-1

discovery.zen.ping.unicast.hosts: ["192.168.0.56"]

discovery.zen.minimum_master_nodes: 1
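For example, on a hypothetical three-node cluster (192.168.0.57/58 are assumed addresses, not part of this deployment), each node gets a unique node.name, all master-eligible nodes are listed, and the quorum is raised to masters/2 + 1:

node.name: node-2

discovery.zen.ping.unicast.hosts: ["192.168.0.56", "192.168.0.57", "192.168.0.58"]

discovery.zen.minimum_master_nodes: 2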

Kibana installation and configuration:

Install directly with yum install kibana.xxx.rpm or rpm -ivh kibana.xxx.rpm.

Edit the configuration:

# vim /etc/kibana/kibana.yml

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.hosts: ["http://192.168.0.56:9200"] # any ES node in the cluster will do

# systemctl start kibana

# systemctl enable kibana
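A quick check that Kibana is listening (host and port as configured above):

# curl -I http://192.168.0.56:5601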

Logstash installation

Install directly from the rpm package. The executable path is shown below; add it to the PATH environment variable:

/usr/share/logstash/bin/logstash
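One way to put it on the PATH, assuming a profile.d drop-in (the file name is arbitrary):

# echo 'export PATH=$PATH:/usr/share/logstash/bin' > /etc/profile.d/logstash.sh

# source /etc/profile.d/logstash.sh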

Reference links:

remove_field => ["message"]:

https://blog.csdn.net/zhaoyangjian724/article/details/54343178

geoip:

https://blog.csdn.net/wanglei_storage/article/details/82663184

grok references:

https://www.cnblogs.com/lize3379/p/6118788.html

https://blog.csdn.net/mergerly/article/details/53310806

http://grokdebug.herokuapp.com/

http://grok.ctnrs.com/

The final configuration is shown below. For now it only collects nginx logs; grok match rules for Tomcat logs will be added later.

The nginx log format:

log_format    main  '$remote_addr $remote_user [$time_local] $request '
                        '$status $body_bytes_sent $http_referer $upstream_response_time $request_time '
                        '[$http_user_agent] $http_x_forwarded_for $http_host $upstream_addr';
Logstash pipeline configuration:

input {
    redis {
        host => "192.168.0.34"
        port => 6379
        password => "Tx.123456"
        db => "0"
        data_type => "list"
        key => "logstash"
    }
}

filter {
  grok {
    match => {
      "message" => "%{IP:client} %{USER:auth} \[%{HTTPDATE:timestamp}\] %{WORD:request_method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion} %{NUMBER:status} %{NUMBER:bytes} (?:%{URI:referrer}|-) %{NUMBER:response_time} %{NUMBER:request_time} \[%{GREEDYDATA:agent}\] (?:%{IP:x_forward}|-) %{HOSTNAME:domain} (%{URIHOST:upstream_host}|-)"
    }
  }
  mutate {
    remove_field =>["message", "port"]
    #remove_field =>["port"]
  }
  geoip {
    source => "client"
    database => "/etc/logstash/GeoLite2-City_20190326/GeoLite2-City.mmdb"
    fields => ["country_name","region_name", "city_name"]
  }
}

output {
    if "tx" in [tags] {
       elasticsearch {
           hosts => ["192.168.0.56:9200"]
           index => "tx-%{+YYYY.MM.dd}"
       }
    }
    if "api" in [tags] {
       elasticsearch {
           hosts => ["192.168.0.56:9200"]
           index => "api-%{+YYYY.MM.dd}"
       }
    }
}
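Before starting Logstash, the pipeline file can be syntax-checked; the file name below is an assumption, adjust it to wherever the config is actually saved:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit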

Redis installation and configuration:

Version 3.2, installed directly with yum; add it to boot auto-start with service/systemctl. Change the following two settings so that machines other than the local host can connect, and set a connection password.

#vim /etc/redis.conf

bind 0.0.0.0

requirepass Tx.123456
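Restart the service so both changes take effect (unit name as provided by the yum redis package):

# systemctl restart redis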

Usage and verification:

# redis-cli -h 192.168.0.34 -a Tx.123456

192.168.0.34:6379> keys *

1) "logstash"

If the key logstash (the key name configured in filebeat's redis output) shows up, the events collected by filebeat are being written into Redis successfully.

filebeat installation and configuration:

Install directly from the rpm package. To test filebeat -> Logstash, simply uncomment the Logstash output section and comment out the redis output section; for filebeat -> redis -> Logstash, leave this configuration file as shown below.

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /etc/nginx/logs/tx.xxx.com.access.log
  tags: ["tx"]

- type: log
  enabled: true
  paths:
    - /etc/nginx/logs/api.xxx.com.access.log
  tags: ["api"]


#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.kibana:
#================================ Outputs =====================================
#-------------------------- redis output ------------------------------------
output.redis:
    hosts: ["192.168.0.34"]
    port: 6379
    password: "Tx.123456"
    key: "logstash"
    db: 0
    timeout: 5
    datatype: "list"


#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["192.168.0.57:5044"]
  #index: log

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
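A quick way to validate the file and start the service once it is in place:

# filebeat test config

# filebeat test output

# systemctl start filebeat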

nginx reverse proxy for Kibana

server {
        listen       80;
        server_name  log.xxx.com;
        access_log   /etc/nginx/logs/log.xxx.com.access.log main;

        location / {
            proxy_pass http://192.168.0.56:5601;
        }
}
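Check the syntax and reload nginx; Kibana is then reachable through http://log.xxx.com:

# nginx -t

# nginx -s reload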


Reposted from blog.csdn.net/weixin_41988331/article/details/88947636