ELK Part 10 -- logstash combined with filebeat: logs are stored in redis, then forwarded to elasticsearch by logstash

Scenario 1: filebeat collects logs into redis, and logstash then forwards them to the elasticsearch host

Architecture:

Preparing the environment:

Host A: elasticsearch / kibana, IP address: 192.168.7.100

Host B: logstash, IP address: 192.168.7.102

Host C: filebeat / nginx, IP address: 192.168.7.103

Host D: redis, IP address: 192.168.7.104

1. Collect the system and nginx logs into the redis host with filebeat

1.1 Install the redis service and modify its configuration

1. Install the redis service

# yum install redis  -y

2. Modify the redis configuration file: change the listen address and set a password

[root@web1 ~]# vim /etc/redis.conf 
bind 0.0.0.0
requirepass 123456

3. Start the redis service

# systemctl start redis
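
Optionally, confirm that redis is listening and that the password works before pointing filebeat at it (the address is host D from the environment above):

# ss -tnlp | grep 6379                       # redis should be bound to 0.0.0.0:6379
# redis-cli -h 192.168.7.104 -a 123456 ping  # expect PONG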

1.2 Modify the configuration on the filebeat host so that the logs are stored on the redis server

1. Edit the filebeat configuration file so the logs are written to the redis server

[root@filebate tmp]# vim /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    host: "192.168.7.103"
    type: "filebeat-syslog-7-103"
    app: "syslog"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    host: "192.168.7.103"
    type: "filebeat-nginx-accesslog-7-103"
    app: "nginx"

output.redis:
  hosts: ["192.168.7.104"]     # IP address of the redis server to write to
  port: 6379                   # redis listening port
  password: "123456"           # redis password
  key: "filebeat-log-7-103"    # custom key name
  db: 0                        # use the default database
  timeout: 5                   # timeout in seconds; raise it if needed
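
Before restarting the service, the file can be checked with filebeat's own test subcommand (assuming the default configuration path):

# filebeat test config -c /etc/filebeat/filebeat.yml   # prints "Config OK" when the file parses cleanly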

2. View the key parts of the filebeat configuration (comments filtered out)

[root@filebate tmp]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$" 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    host: "192.168.7.103"
    type: "filebeat-syslog-7-103"
    app: "syslog"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  fields:
    host: "192.168.7.103"
    type: "filebeat-nginx-accesslog-7-103"
    app: "nginx"
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
output.redis: 
  hosts: ["192.168.7.104"]
  port: 6379
  password: "123456"
  key: "filebeat-log-7-103"
  db: 0
  timeout: 5

3. Restart the filebeat service

# systemctl restart filebeat

2. Verify the data on the redis host

1. Log in with the redis client and list the keys. The key defined in the filebeat configuration is present, which shows that data is reaching the redis server.

[root@web1 ~]# redis-cli -h 192.168.7.104
192.168.7.104:6379> auth 123456
OK
192.168.7.104:6379> KEYS *
1) "filebeat-log-7-103"
192.168.7.104:6379> 
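
filebeat pushes each event onto a redis list, so while still in the redis client you can check the queue depth and peek at a sample record without consuming it:

192.168.7.104:6379> LLEN filebeat-log-7-103         # number of queued events
192.168.7.104:6379> LRANGE filebeat-log-7-103 0 0   # show the first event without removing it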

3. Collect the logs from the redis server with logstash

1. Modify the logstash configuration file to collect the logs from redis

[root@logstash conf.d]# vim logstash-to-es.conf 
input {
   redis {
     host => "192.168.7.104"       # redis host IP address
     port => "6379"                # port
     db => "0"                     # database matching the filebeat configuration
     password => "123456"          # password
     data_type => "list"           # log data type
     key => "filebeat-log-7-103"   # key matching the filebeat configuration
     codec => "json"
   }
}


output {
  if [fields][app] == "syslog" {   # matches the app type set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"]   # elasticsearch host the logs are written to
      index => "logstash-syslog-7-103-%{+YYYY.MM.dd}"
    }}

  if [fields][app] == "nginx" {    # matches the app type set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"]
      index => "logstash-nginx-accesslog-7-103-%{+YYYY.MM.dd}"
    }}
}

Check the syntax; if there are no problems, start the service:

[root@logstash conf.d]# logstash -f  logstash-to-es.conf  -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 10:05:05.487 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK  # the syntax check passed
[INFO ] 2020-03-16 10:05:16.597 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

2. Restart the logstash service

# systemctl restart logstash

3. View the collected indices in the elasticsearch head plugin; the extracted log entries should now be visible
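
If the head plugin is not installed, the same check can be made directly against the elasticsearch REST API:

# curl "http://192.168.7.100:9200/_cat/indices?v"   # the logstash-syslog-7-103-* and logstash-nginx-accesslog-7-103-* indices should be listed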

4. Create index patterns on the kibana page

1. Create an index pattern for the nginx logs on the kibana page; the system-log index pattern is created the same way

2. View the extracted nginx log data on the kibana Discover page

3. View the collected system logs

Scenario 2: filebeat sends logs through logstash into redis, and a second logstash forwards them to the elasticsearch host

Architecture:

Preparing the environment:

There are not many hosts available for testing, so the roles are tested here in stand-alone form; a production environment can be deployed following the architecture above.

Host A: elasticsearch / kibana, IP address: 192.168.7.100

Host B: logstash-A, IP address: 192.168.7.102

Host C: filebeat / nginx, IP address: 192.168.7.103

Host D: redis, IP address: 192.168.7.104

Host E: logstash-B, IP address: 192.168.7.101

1. Install and configure filebeat on the filebeat host

1. Install the filebeat package; it can be downloaded from the official website

[root@filebeat-1 ~]# yum install filebeat-6.8.1-x86_64.rpm -y

2. Modify the filebeat configuration file so that the logs are transmitted from the filebeat host to the first logstash. If several filebeat hosts forward their logs through several logstash hosts, the output.logstash section on each filebeat host can point at a different logstash IP address (a load-balancing variant is sketched after the listing below).

[root@filebate ~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$" 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    host: "192.168.7.103"
    type: "filebeat-syslog-7-103"
    app: "syslog"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    host: "192.168.7.103"       # the local IP address
    type: "filebeat-nginx-accesslog-7-103"
    app: "nginx"
output.logstash:
  hosts: ["192.168.7.101:5044"]   # write to the specified logstash server; another filebeat host could point at a different logstash IP here
  enabled: true                   # whether the logstash output is enabled; true by default
  worker: 1                       # number of worker threads
  compression_level: 3            # compression level
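
As a sketch of the multi-logstash variant mentioned above: output.logstash accepts several addresses in hosts, and filebeat can balance events across them. The address 192.168.7.105 below is a hypothetical second logstash host, not part of this environment:

output.logstash:
  hosts: ["192.168.7.101:5044", "192.168.7.105:5044"]  # 192.168.7.105 is hypothetical
  loadbalance: true   # spread events across all listed hosts instead of treating the extras as standby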

3. Restart the filebeat service

# systemctl restart filebeat
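
The connection from filebeat to the logstash host can also be probed with filebeat's test subcommand (assuming the default configuration path):

# filebeat test output -c /etc/filebeat/filebeat.yml   # reports whether the connection to 192.168.7.101:5044 succeeds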

2. Configure the logstash-B host so that the logs are stored on the redis server

1. Create a configuration file for storing logs in redis under the /etc/logstash/conf.d/ directory. If there are multiple filebeat, logstash, and redis hosts, the logs can be stored on separate redis hosts, reducing the pressure on each logstash.

[root@logstash-1 conf.d]# cat filebeat-to-logstash.conf 
input {
  beats {
    host => "192.168.7.101"   # this logstash host's IP address; another logstash host forwarding to redis would use its own IP here, spreading the load across logstash hosts
    port => 5044              # port number
    codec => "json"
  }
}


output {
  if [fields][app] == "syslog" {
  redis {
       host => "192.168.7.104"   # redis server the logs are stored on
       port => "6379"
       db => "0"
       data_type => "list"
       password => "123456"
       key => "filebeat-syslog-7-103"   # a distinct key per log type makes them easy to tell apart
       codec => "json"
  }}

  if [fields][app] == "nginx" {
  redis {
       host => "192.168.7.104"
       port => "6379"
       db => "0"
       data_type => "list"
       password => "123456"
       key => "filebeat-nginx-log-7-103"   # a different key, to simplify analysis
       codec => "json"
  }
}
}

2. Test the logstash configuration

[root@logstash-1 conf.d]# logstash -f filebeat-to-logstash.conf  -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 11:23:31.687 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK  # the configuration file test passed

Restart the logstash service

# systemctl  restart logstash

3. At this point the two keys can be seen on the redis host, which shows that the logstash host has saved the logs to the redis host.

[root@web1 ~]# redis-cli -h 192.168.7.104
192.168.7.104:6379> auth 123456
OK
192.168.7.104:6379> KEYS *
1) "filebeat-nginx-log-7-103"
2) "filebeat-syslog-7-103"

3. Configure the logstash-A host to extract the logs from redis and forward them to elasticsearch

1. On the logstash-A host, create a configuration file in the /etc/logstash/conf.d directory that extracts the logs from redis

[root@logstash conf.d]# cat logstash-to-es.conf 
input {
   redis {
     host => "192.168.7.104"            # redis host IP address
     port => "6379"
     db => "0"
     password => "123456"
     data_type => "list"
     key => "filebeat-syslog-7-103"     # the key the syslog events were written under
     codec => "json"
   }
   redis {
     host => "192.168.7.104"            # redis host IP address
     port => "6379"
     db => "0"
     password => "123456"
     data_type => "list"
     key => "filebeat-nginx-log-7-103"  # the key the nginx events were written under
     codec => "json"
   }
}


output {
  if [fields][app] == "syslog" {   # matches the app type set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"]   # elasticsearch host IP address
      index => "logstash-syslog-7-103-%{+YYYY.MM.dd}"
    }}

  if [fields][app] == "nginx" {    # matches the app type set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"]
      index => "logstash-nginx-accesslog-7-103-%{+YYYY.MM.dd}"
    }}
}

2. Test the logstash configuration file

[root@logstash conf.d]# logstash -f logstash-to-es.conf  -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 11:31:30.943 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK

3. Restart the logstash service

# systemctl restart logstash

4. View the collected system and nginx logs in the head plugin

4. Create index patterns in kibana and view the collected log information

1. Create an index pattern for nginx; the system-log index pattern is created in the same way

2. View the information for the newly created index patterns
