Optimize the configuration of logstash to read logs from redis (below)



This article continues from "Configure logstash to read the logs collected by filebeat from redis (above)" and further optimizes that setup.

1. Optimize configuration ideas

The previous configuration for logstash to read the log data collected into redis required many steps, and adding a new log each time was particularly cumbersome.

Steps needed to add a new log before optimization:

1. Configure which logs filebeat collects and add tags

2. Configure where filebeat stores the logs

3. Configure where logstash reads the data from

4. Configure where logstash stores the data

If you think about it, where logstash reads the data from is always the same; the key question is where the data is stored. If we configure filebeat to store all logs in a single redis key and tag each log, then logstash only has to read that one key and, based on the tag, classify and store the events into different indices. This removes two configuration steps.

Steps needed to add a new log after optimization:

1. Configure which logs to collect (with a tag) in filebeat

2. Configure in logstash which es index that tag is stored in

2. Optimize filebeat configuration

Each log is marked with a different tag, and all logs are stored in the single redis key nginx-all-key.

[root@nginx /etc/filebeat]# vim filebeat.yml 
#Define which logs to collect
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/www_access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["nginx-www"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/bbs_access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["nginx-bbs"]


#Define the redis address and the key name
output.redis:
  hosts: ["192.168.81.220:6379"]
  key: "nginx-all-key"
  db: 0
  timeout: 5

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

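Before restarting filebeat, you can sanity-check the edited file. A quick check, assuming filebeat is on the PATH (the output test additionally requires the redis host defined above to be reachable):

```shell
# Validate the filebeat.yml syntax
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify filebeat can reach the redis output defined above
filebeat test output -c /etc/filebeat/filebeat.yml
```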

3. Optimize logstash configuration

Only one key is left to read from. After obtaining data from this key, different indices are created according to the different tags.

# /etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "192.168.81.220"
    port => "6379"
    db => "0"
    key => "nginx-all-key"
    data_type => "list"
  }
}

output {
  if "nginx-www" in [tags] {
    stdout{}
    elasticsearch {
      hosts => "http://192.168.81.210:9200"
      manage_template => false
      index => "nginx-www-access-%{+yyyy.MM.dd}"
    }
  }

  if "nginx-bbs" in [tags] {
    stdout{}
    elasticsearch {
      hosts => "http://192.168.81.210:9200"
      manage_template => false
      index => "nginx-bbs-access-%{+yyyy.MM.dd}"
    }
  }

}

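Logstash can validate this file before you run the pipeline for real; a quick syntax check, assuming the standard package install path:

```shell
# Check the pipeline configuration for syntax errors, then exit
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf --config.test_and_exit
```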

4. Collect a new blog log

4.1. Configure filebeat to specify the blog log path

You only need to add a new input with the log path and a tag; no other filebeat configuration is required.

[root@nginx ~]# vim /etc/filebeat/filebeat.yml 
- type: log
  enabled: true
  paths:
    - /var/log/nginx/blog_access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["nginx-blog"]

[root@nginx ~]# systemctl restart filebeat


4.2. Configure logstash custom index library

You only need to add a conditional specifying which index to create for the new tag.

[root@elasticsearch ~]# vim /etc/logstash/conf.d/redis.conf 
  if "nginx-blog" in [tags] {
    stdout{}
    elasticsearch {
      hosts => "http://192.168.81.210:9200"
      manage_template => false
      index => "nginx-blog-access-%{+yyyy.MM.dd}"
    }   
  }



4.3. Start logstash

[root@elasticsearch ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf 

4.4. Generate logs and view the redis key

1.Generate logs
ab -c 100 -n 1000 http://bbs.jiangxl.com/
ab -c 100 -n 1000 http://www.jiangxl.com/
ab -c 100 -n 1000 http://blog.jiangxl.com/


2.Check the keys on redis
[root@node-2 ~]# redis-cli --raw
127.0.0.1:6379> KEYS *
filebeat
nginx-all-key
127.0.0.1:6379> LLEN nginx-all-key
3000

nginx-all-key contains 3000 entries in total, exactly the 3000 requests we generated with ab.
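To confirm that the tags actually made it into the events, you can peek at one entry in the list without consuming it; each element is the JSON document filebeat pushed, including a `tags` array:

```shell
# Inspect the first event in the list (non-destructive, unlike the pop logstash performs)
redis-cli --raw LRANGE nginx-all-key 0 0
```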

4.5. Check whether the es indices were created

The indices for all three logs were created.
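The same check can also be done from the command line, assuming es is listening on 192.168.81.210:9200 as configured above:

```shell
# List the daily indices created for the three tagged logs
curl -s 'http://192.168.81.210:9200/_cat/indices?v' | grep nginx
```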



Origin juejin.im/post/7136856884874575886