ELK Stack log platform - study notes

Reference website: http://kibana.logstash.es/content/

 

1. Elasticsearch installation

1. First download the installation packages of elasticsearch, kibana, logstash, and redis:

   wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.3.tar.gz

   wget https://download.elastic.co/kibana/kibana/kibana-4.1.8-linux-x64.tar.gz

   wget http://download.redis.io/releases/redis-3.0.7.tar.gz

   wget https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz

 

2. Install elasticsearch:

   tar -zxvf elasticsearch-1.7.3.tar.gz

   Modify the configuration file vim config/elasticsearch.yml

   cluster.name: any-test

   node.name: "elk-node1" 

   path.logs: /usr/local/elasticsearch/logs

   Modify the kernel parameter: sysctl -w vm.max_map_count=262144 (must be set in production)

3. Start:

    /usr/local/elasticsearch/bin/elasticsearch &

   curl 127.0.0.1:9200 to view the status information of es

4. ES service management plugin:

   wget https://github.com/elastic/elasticsearch-servicewrapper/archive/master.zip

   mv elasticsearch-servicewrapper-master/service /usr/local/elasticsearch/bin/    place it under the ES bin directory

   /usr/local/elasticsearch/bin/service/elasticsearch install    after installation completes, es can be started as an init.d service

 

2. Use of elasticsearch


1. ES management plugin (Marvel) installation:

   /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest    install the Marvel management plugin

    http://172.16.1.210:9200/_plugin/marvel/    access the management plugin; Marvel is a paid product, so start with the trial

2. Insert data:

   Click Dashboards / Sense in the upper right corner of the Marvel page

   Create an index and record the ID:

POST index-demo/test
{
  "user":"wmj",
  "msg":"hello world!"
}
   Retrieve the document with GET (using the ID returned above):
GET index-demo/test/AVVx0dpGfWPOVuhqIoN7
GET index-demo/test/AVVx0dpGfWPOVuhqIoN7/_source

    Do a full text search:

GET index-demo/test/_search?q=hello

 

 3. Install the cluster management plugin for ES:

   /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head    install the head cluster-management plugin

   http://172.16.1.210:9200/_plugin/head/             access the cluster-management plugin

 4. Monitor cluster health via URL:

   curl -XGET 172.16.1.210:9200/_cluster/health?pretty  

5. If not using multicast discovery, change the following settings:

  discovery.zen.ping.multicast.enabled: false

  discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]

6. Chinese-language online guide to ES:

  http://es.xiaoleilu.com/

      

3. Logstash:

1. Two installation methods:

   wget https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz    untar to install

   https://www.elastic.co/guide/en/logstash/1.5/package-repositories.html    instructions for yum installation

2. Start with standard output:

   ./bin/logstash -e 'input { stdin{} } output{ stdout{} }'          whatever you type is echoed to the screen

3. Start with output to ES:

    ./bin/logstash -e 'input { stdin{} } output{ elasticsearch{ host =>"172.16.1.210" protocol =>"http"} }'  whatever you type is written to ES

4. Start logstash with a config file:

   vim /etc/logstash.conf :

input{
        file{
                path => "/var/log/messages"
        }
}

output{
        file{
                path => "/tmp/%{+YYYY-MM-dd}-messages.gz"
                gzip => true
        }
        elasticsearch{
                host => "172.16.1.211"
                protocol => "http"
                index => "system-messages-%{+YYYY.MM.dd}"
        }
}
Read input from the messages file and write two copies: one gzip-compressed under /tmp/, one to ES.

   ./logstash -f /etc/logstash.conf                start logstash with the config file

    https://www.elastic.co/guide/en/logstash/1.5/output-plugins.html         official reference for writing the config file
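The %{+YYYY-MM-dd} and %{+YYYY.MM.dd} tokens in the paths above are logstash sprintf date references, expanded from each event's @timestamp. A minimal Python sketch of the equivalent expansion (the joda-style patterns are mapped to strftime by hand here, only for the two tokens used in these notes):

```python
from datetime import date

def expand_index_name(pattern: str, day: date) -> str:
    """Expand the logstash-style %{+YYYY.MM.dd} / %{+YYYY-MM-dd}
    date references in an index or file name."""
    pattern = pattern.replace("%{+YYYY.MM.dd}", day.strftime("%Y.%m.%d"))
    pattern = pattern.replace("%{+YYYY-MM-dd}", day.strftime("%Y-%m-%d"))
    return pattern

print(expand_index_name("system-messages-%{+YYYY.MM.dd}", date(2016, 6, 1)))
# → system-messages-2016.06.01
print(expand_index_name("/tmp/%{+YYYY-MM-dd}-messages.gz", date(2016, 6, 1)))
# → /tmp/2016-06-01-messages.gz
```

One index per day keeps indices small and makes it cheap to drop old data by deleting whole indices.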

 

5. Production logstash configuration for ELK:

     (1) Write logs to redis:

    

input{
        file{
                path => "/var/log/messages"
        }
}
output{
        redis{
                data_type => "list"            # write as a redis list
                key => "system-messages"       # name of the key
                host => "172.16.1.211"
                port => "6379"
                db => "1"                      # in production, write each log type to its own db
        }
}

   PS: connect to redis and run select 1, keys *, and LLEN system-messages to check that events are being written.
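With data_type => "list", the shipping logstash pushes each event onto the named key and the indexing logstash pops from the other end, so the key behaves as a FIFO buffer between the two. A stand-in sketch with a plain Python list (no real redis involved; the event strings are invented):

```python
# Stand-in for the redis key "system-messages" in db 1.
queue = []

def rpush(event: str) -> None:
    """Shipper side (output { redis {} }): append to the tail."""
    queue.append(event)

def lpop() -> str:
    """Indexer side (input { redis {} }): take from the head."""
    return queue.pop(0)

rpush("Jun  1 10:00:01 host CRON[123]: job started")
rpush("Jun  1 10:00:02 host CRON[123]: job finished")
print(len(queue))   # what LLEN system-messages would report
print(lpop())       # oldest event comes out first
```

Because redis buffers the events, ES or the indexing logstash can go down briefly without losing log lines.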

 

    (2) On the redis server, use logstash to pull the data from redis into ES.

   

input{
        redis{
                data_type => "list"
                key => "system-messages"
                host => "172.16.1.211"
                port => "6379"
                db => "1"
        }
}
output{
        elasticsearch{
                host => "172.16.1.210"
                protocol => "http"
                index => "system-redis-messages-%{+YYYY.MM.dd}"
        }
}

 

6. In production, have nginx write its logs as JSON and collect them with logstash.

   (1) Configure nginx.conf to output JSON logs:

 

In the http block:

    log_format logstash_json '{ "@timestamp": "$time_iso8601", '
                '"host": "$server_addr", '
                '"client": "$remote_addr", '
                '"size": $body_bytes_sent, '
                '"response_time": $request_time, '
                '"domain": "$host", '
                '"url": "$uri", '
                '"referer": "$http_referer", '
                '"agent": "$http_user_agent", '
                '"status":"$status"}';

    access_log  /var/log/nginx/access_json.log  logstash_json;
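With this log_format, every access-log line is a single JSON object, so logstash's json codec can parse it without any grok patterns. A quick Python check on a hypothetical sample line matching the format above (the concrete values are invented for illustration):

```python
import json

# Hypothetical access_json.log line matching the log_format above;
# $size and $response_time are unquoted (numbers), $status is quoted (string).
line = ('{ "@timestamp": "2016-06-01T10:00:00+08:00", '
        '"host": "172.16.1.210", "client": "172.16.1.1", '
        '"size": 612, "response_time": 0.004, '
        '"domain": "172.16.1.210", "url": "/index.html", '
        '"referer": "-", "agent": "ApacheBench/2.3", '
        '"status":"200"}')

event = json.loads(line)
print(event["status"], event["size"])
# status arrives as a string, size as a number
```

Note that because $status is quoted in the log_format, it reaches ES as a string; the mutate/convert filter shown later turns it into an integer.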

    (2) Generate test data with the ab command:

       ab -n1000 -c10 http://172.16.1.210:81/

    (3) Configure logstash to collect the nginx data and write it to redis:

   

input{
        file{
                path => "/var/log/nginx/access_json.log"
                codec => "json"
        }
}
output{
        redis{
                data_type => "list"
                key => "nginx-access-log"
                host => "172.16.1.211"
                port => "6379"
                db => "2"
        }
}

 (4) Use logstash to move the data from redis into ES.

  

input{
        redis{
                data_type => "list"
                key => "nginx-access-log"
                host => "172.16.1.211"
                port => "6379"
                db => "2"
        }
}
output{
        elasticsearch{
                host => "172.16.1.210"
                protocol => "http"
                index => "logstash-nginx-redis-messages-%{+YYYY.MM.dd}"
        }
}

       ps: prefix the index name with logstash- when writing to ES; otherwise the default logstash index template is not applied and field types can be wrong.
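The reason the logstash- prefix matters is that logstash ships a default index template that only matches indices named logstash-*; indices outside that pattern miss its mappings. A small Python sketch of the pattern matching involved:

```python
from fnmatch import fnmatch

# The default logstash index template applies to indices matching "logstash-*".
template_pattern = "logstash-*"

print(fnmatch("logstash-nginx-redis-messages-2016.06.01", template_pattern))  # True
print(fnmatch("nginx-redis-messages-2016.06.01", template_pattern))           # False
```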

(5) Process the nginx logs with geoip to add geographic information.

  

filter {
    if [type] == "gigold-nginx-access-log"{
        geoip {
            source => "clientip"
            database => "/etc/logstash/GeoLiteCity.dat"
            fields => ["city_name", "country_name", "real_region_name", "ip"]
        }
    }

    if [type] == "lehome-nginx-access-log"{
        geoip {
            source => "xff"
            database => "/etc/logstash/GeoLiteCity.dat"
            fields => ["city_name", "country_name", "real_region_name", "ip"]
        }

    }

    mutate {
        convert => ["status", "integer"]
    }
}
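The mutate/convert step matters because status is emitted as a quoted string by the nginx log_format above, while numeric range queries in ES/Kibana need it mapped as an integer. A rough Python equivalent of what convert => ["status", "integer"] does to an event (a simplification, not logstash's actual implementation):

```python
def mutate_convert_integer(event: dict, field: str) -> dict:
    """Rough sketch of mutate { convert => [field, "integer"] }:
    coerce the field to int in place."""
    try:
        event[field] = int(event[field])
    except (KeyError, ValueError, TypeError):
        pass  # this sketch simply skips missing/unconvertible values
    return event

print(mutate_convert_integer({"status": "200"}, "status"))
# → {'status': 200}
```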

 

4. Kibana:

   1. Install kibana and point it at ES:

       tar -zxvf kibana-4.1.8-linux-x64.tar.gz

       vim config/kibana.yml:  

       elasticsearch_url: "http://172.16.1.210:9200"    the only setting that must be changed

  2. Start and access kibana:

    nohup ./bin/kibana &

    http://172.16.1.210:5601       access URL

  3. Initial setup:

    Index name or pattern:  [logstash-nginx-redis-messages-]YYYY.MM.DD

   4. Kibana search syntax:

      status:200 OR status:404          match documents whose status is 200 or 404

      status:[400 TO 499]          match documents whose status is between 400 and 499 (inclusive)
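The two Lucene-style queries above are a boolean OR and an inclusive range. A small Python sketch of what they match, with documents as plain dicts (sample values invented):

```python
docs = [{"status": 200}, {"status": 301}, {"status": 404}, {"status": 499}]

# status:200 OR status:404
hits_or = [d for d in docs if d["status"] in (200, 404)]

# status:[400 TO 499]  -- square brackets make the range inclusive on both ends
hits_range = [d for d in docs if 400 <= d["status"] <= 499]

print(len(hits_or), len(hits_range))
```

This is also why the mutate/convert filter earlier maps status to an integer: range queries against a string-mapped field compare lexicographically instead of numerically.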


 

 
