EFK+redis cache to collect Apache logs

Principle

(architecture diagram omitted)

Prepare three machines

Remember to turn off the firewall on all three machines
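On CentOS 7 this means stopping firewalld and relaxing SELinux (a sketch, assuming the stock firewalld/SELinux setup):

```shell
# Run on each of the three machines
systemctl stop firewalld
systemctl disable firewalld
setenforce 0   # temporary; edit /etc/selinux/config to make it persistent
```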

Install Elasticsearch on all three machines
Synchronize the time before starting:
1) yum -y install ntpdate
2) ntpdate pool.ntp.org
Then install Elasticsearch for storage:
1) rpm -ivh jdk-8u131-linux-x64_.rpm
2) yum -y install elasticsearch-6.6.2.rpm
3) vim /etc/elasticsearch/elasticsearch.yml
	1. cluster.name: efk (line 17, the cluster name)
	2. node.name: node-1 (line 23)
	3. network.host: 192.168.182.210 (line 55, set to this machine's own IP)
	4. http.port: 9200 (line 59, the exposed port)
	5. discovery.zen.ping.unicast.hosts: ["192.168.182.210", "192.168.182.211"] (line 68, list every node IP in the cluster)
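After editing the config on each node, start Elasticsearch and confirm the cluster formed (a sketch; the IP matches the example above):

```shell
# Start Elasticsearch on every node and enable it at boot
systemctl start elasticsearch
systemctl enable elasticsearch

# Query cluster health from any node; status should be "green" or "yellow"
curl http://192.168.182.210:9200/_cluster/health?pretty
```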
Install Redis on the server
1)yum -y install gcc gcc-c++
2)tar xzf redis-5.0.0.tar.gz
3)cp -r redis-5.0.0 /usr/local/redis
4)cd /usr/local/redis
5)make
6)ln -s /usr/local/redis/src/redis-server /usr/bin/redis-server
7) ln -s /usr/local/redis/src/redis-cli /usr/bin/redis-cli
8) vim /usr/local/redis/redis.conf
	1. bind 192.168.182.210 (line 69, set to this machine's own IP)
	2. requirepass 123321 (line 508, add a new line with the password)
9)redis-server /usr/local/redis/redis.conf
10) echo 511 > /proc/sys/net/core/somaxconn
11) echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
12) echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
13)vim /usr/local/redis/redis.conf
	1. daemonize yes (line 136, change to yes so Redis runs in the background)
14)redis-server /usr/local/redis/redis.conf
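With the daemonized server running, a quick connectivity check (a sketch; host and password come from the config above):

```shell
# Authenticate and ping the Redis server; a healthy server answers PONG
redis-cli -h 192.168.182.210 -a 123321 ping
```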
Install Filebeat on the client
# Install httpd
1) yum -y install httpd
2) systemctl start httpd
Open the test page in a browser and refresh it several times to generate access logs.
# Install Filebeat to collect the httpd logs
1) yum -y install filebeat-6.8.1-x86_64.rpm
2) vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.ilm.enabled: false
setup.template.name: "filebeat-httpd"
setup.template.pattern: "filebeat-httpd-*"
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.182.210:6379"] # Redis server and port
  key: "filebeat-httpd" # custom key name, used later by Logstash
  db: 1 # which Redis database to use
  timeout: 5 # timeout in seconds
  password: 123321 # Redis password

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
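The config alone does nothing until Filebeat is started; a sketch, assuming the systemd unit installed by the RPM:

```shell
# Validate the configuration, then start Filebeat
filebeat test config
systemctl start filebeat
systemctl enable filebeat
```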
Test whether Redis has cached the logs from Filebeat:
1) redis-cli -h 192.168.182.210
192.168.182.210:6379> auth 123321 # log in with the password
OK
192.168.182.210:6379> get * # note: GET takes a literal key name, * is not a wildcard here
(nil)
192.168.182.210:6379> KEYS * # list all keys in db 0
(empty list or set) # nothing here because Filebeat writes to db 1; if no key ever appears, refresh the httpd page a few more times
192.168.182.210:6379> SELECT 1
OK
192.168.182.210:6379[1]> KEYS *
1) "filebeat-httpd" # this key appearing means the httpd logs have been collected
Install Logstash on the server
1) yum -y install logstash-6.6.0.rpm 
2) vim /etc/logstash/conf.d/httpd.conf
input {
        redis {
                data_type => "list"
                host => "192.168.182.210"
                password => "123321"
                port => "6379"
                db => "1"
                key => "filebeat-httpd"
        }
}
output {
        elasticsearch {
                hosts => ["192.168.182.210:9200"]
                index => "redis-httpdlog-%{+YYYY.MM.dd}"
        }
}
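Start Logstash and confirm the index appears in Elasticsearch (a sketch; the index name matches the output block above):

```shell
# Start Logstash so it drains the Redis list into Elasticsearch
systemctl start logstash
systemctl enable logstash

# After a short wait, a redis-httpdlog-YYYY.MM.dd index should be listed
curl http://192.168.182.210:9200/_cat/indices?v
```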
Display with Kibana
1) yum -y install kibana-6.6.2-x86_64.rpm
2) vim /etc/kibana/kibana.yml
	1. server.port: 5601 (line 2)
	2. server.host: "192.168.182.210" (line 7, set to this machine's own IP)
	3. elasticsearch.hosts: ["http://192.168.182.210:9200"] (line 28, set to your own IP)
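Finally, start Kibana and create an index pattern for redis-httpdlog-* in the web UI (a sketch; host and port come from kibana.yml above):

```shell
# Start Kibana and enable it at boot
systemctl start kibana
systemctl enable kibana

# Kibana listens on the host/port set in kibana.yml:
# open http://192.168.182.210:5601 in a browser
```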

(Kibana screenshots omitted)

Source: blog.csdn.net/m0_50019871/article/details/109229502