Deploying and Using the Elasticsearch + Logstash + Kibana (ELK) Log Analysis and Monitoring Stack

Introduction to ELK

ELK is currently perhaps the most popular end-to-end solution for collecting, storing, and analyzing logs.

Elasticsearch, Logstash, and Kibana can be used to gather and visualize the syslogs of your systems in a centralized location. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 4 is a web interface that can be used to search and view the logs that Logstash has indexed. Both tools work on top of Elasticsearch, which stores and indexes the log data.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

Logstash 1.4.0 grok filter docs: http://logstash.net/docs/1.4.0/filters/grok
Grok pattern test page: http://grokdebug.herokuapp.com/
ELK installation tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-centos-7

Official downloads: https://www.elastic.co/downloads

ELK Deployment (on CentOS 6.4)

1. Set the server hostname, which client machines will use to reach it

vi /etc/hosts

Add elkserver at the start of the first line:

127.0.0.1   elkserver localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

 

2. Install Java 7

sudo yum -y install java-1.7.0-openjdk

3. Install Elasticsearch 1.4.4

(1) Import the Elasticsearch public GPG key

sudo rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

 

(2) Create and edit a new yum repository file for Elasticsearch

sudo vi /etc/yum.repos.d/elasticsearch.repo

Add the following:

[elasticsearch-1.4]
name=Elasticsearch repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Save and exit.

(3) Install Elasticsearch 1.4.4

sudo yum -y install elasticsearch-1.4.4

(4) Configure Elasticsearch

sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line containing network.host and change it to:

network.host: localhost

Set http.port:

http.port: 9200

Save and exit elasticsearch.yml.

(5) Start the service

sudo service elasticsearch start

Add it to system startup:

sudo /sbin/chkconfig --add elasticsearch

4. Install Kibana 4.0.1

(1) Download and extract

cd ~; wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.1-linux-x64.tar.gz

 

tar xvf kibana-*.tar.gz
 

(2) Open and edit the Kibana configuration file

vi ~/kibana-4*/config/kibana.yml

Set host as follows:

host: "localhost"

Save and exit.

(3) Copy the Kibana files to a suitable location

sudo mkdir -p /opt/kibana

 

sudo cp -R ~/kibana-4*/* /opt/kibana/
 

(4) Create an init script

sudo vi /etc/init.d/kibana

Write the following:

#!/bin/bash
#
# Kibana    Init script for Kibana
#
# chkconfig: 345 99 76
# processname: kibana
#
KIBANA_EXEC="/opt/kibana/bin/kibana"
LOG_DIR="/opt/kibana/bin/log"
now=$(date +"%Y-%m-%d-%S")
LOG_FILE="$LOG_DIR/kibana.$now.log"
PID_FILE="$LOG_DIR/kibana.$now.pid"
RETVAL=0

start() {
    echo "Starting Kibana..."
    # Make sure the log directory exists before redirecting into it
    mkdir -p "$LOG_DIR"
    $KIBANA_EXEC 1>"$LOG_FILE" 2>&1 &
    echo $! > "$PID_FILE"
    echo "Kibana started with pid $!"
}

case "$1" in
    start)
    start
    ;;
    *)
        echo "Usage: $0 {start}"
        exit 0
    ;;
esac
exit $RETVAL

Run the following to make the init script effective:

sudo chmod 0755 /etc/init.d/kibana

 

sudo chkconfig kibana on
 

5. Install Nginx 1.8.0

(1) Install EPEL

sudo yum -y install epel-release

Edit the repo configuration:

sudo vi /etc/yum.repos.d/epel.repo 

Uncomment the baseurl lines and comment out the mirrorlist lines.

(2) Install httpd-tools

sudo yum -y install httpd-tools

(3) Install Nginx from the pre-downloaded nginx-1.8.0-1.el6.ngx.x86_64.rpm

yum -y install /home/hailiang/nginx-1.8.0-1.el6.ngx.x86_64.rpm

(4) Use htpasswd to create the user kibanaadmin, with password hylanda

sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

(5) Configure kibana.conf

sudo vi /etc/nginx/conf.d/kibana.conf

Set server_name to the hostname:

server_name elkserver;
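For reference, the rest of kibana.conf is roughly a password-protected reverse proxy in front of Kibana. The sketch below is an assumption based on the DigitalOcean tutorial linked above, taking Kibana's default port 5601; adapt it to your environment:

```nginx
server {
    listen 80;
    server_name elkserver;

    # Basic auth against the file created with htpasswd in step (4)
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        # Forward all requests to the local Kibana instance
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```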

(6) Start Nginx and add it to system startup

sudo service nginx restart

 

sudo chkconfig --levels 235 nginx on
 

6. Install Logstash

(1) Create a yum repository file for Logstash

sudo vi /etc/yum.repos.d/logstash.repo

Write the following:

[logstash-1.5]
name=logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Save and exit.

(2) Install Logstash 1.5

sudo yum -y install logstash

(3) Configure OpenSSL for certificate generation

sudo vi /etc/pki/tls/openssl.cnf

Find the [ v3_ca ] section and set subjectAltName to this machine's IP:

subjectAltName = IP:192.168.11.201

Save and exit.

(4) Generate the SSL certificate and private key, with CN set to the hostname

cd /etc/pki/tls

 

openssl req -subj '/CN=elkserver/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
 

logstash-forwarder.crt will later be copied to every client machine.
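Since the certificate fields are easy to get wrong, one hypothetical way to sanity-check the command is a dry run in a throwaway directory before writing into /etc/pki/tls:

```shell
# Dry run in a temporary directory (hypothetical); the real command targets
# /etc/pki/tls/private and /etc/pki/tls/certs as shown above.
tmp=$(mktemp -d)
openssl req -subj '/CN=elkserver/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$tmp/logstash-forwarder.key" \
  -out "$tmp/logstash-forwarder.crt"
# Inspect the generated certificate's subject; it should name CN=elkserver.
openssl x509 -in "$tmp/logstash-forwarder.crt" -noout -subject
```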

(5) Configure Logstash

sudo vi /etc/logstash/conf.d/01-lumberjack-input.conf

Write the following:

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
  stdout { codec => rubydebug }
}

7. Install Logstash Forwarder

(1) On the server, copy the certificate to the client machine's (192.168.11.213) /tmp directory

scp /etc/pki/tls/certs/logstash-forwarder.crt [email protected]:/tmp

Open port 5000 in the firewall:

/sbin/iptables -I INPUT -p tcp --dport 5000 -j ACCEPT

 

/etc/rc.d/init.d/iptables save
 

(2) Run the following on the client machine

sudo rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

 

sudo vi /etc/yum.repos.d/logstash-forwarder.repo
 

Write the following:

[logstash-forwarder]
name=logstash-forwarder repository
baseurl=http://packages.elasticsearch.org/logstashforwarder/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Install Logstash Forwarder:

sudo yum -y install logstash-forwarder

Copy the certificate into place:

sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure logstash-forwarder:

sudo vi /etc/logstash-forwarder.conf

Write the following:

{
  "network": {
    "servers": [ "elkserver:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}

Save and exit.
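Since logstash-forwarder.conf must be strictly valid JSON, a quick syntax check can catch typos before restarting. A hypothetical check using a throwaway copy in /tmp:

```shell
# Write a candidate config to /tmp (hypothetical path) so the real file
# stays untouched, then validate it with Python's built-in JSON parser.
cat > /tmp/lsf-test.conf <<'EOF'
{
  "network": {
    "servers": [ "elkserver:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" } }
  ]
}
EOF
# Use whichever Python interpreter is installed.
PY=$(command -v python || command -v python3)
"$PY" -m json.tool < /tmp/lsf-test.conf > /dev/null && echo "config is valid JSON"
```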

sudo service logstash-forwarder restart

Add the server's hostname to the client's hosts file:

192.168.11.201 elkserver

Test that the server is reachable.

 

8. Connect to Kibana

Open elkserver in a browser to access Kibana, and enter the username kibanaadmin and password hylanda at the authentication prompt.
On first access you will see the index-setup page (1-select-index.gif).
After creating the index you will land on the default Discover page (2-discover.png).
At the bottom of that page you can see the system logs submitted by the client machines.

 

9. Install curator (used to delete old indices)

yum -y install python-pip

 

pip install elasticsearch-curator

 

which curator    # should print /usr/bin/curator

 

Schedule deletion of logs older than 19 days:

crontab -e

Add the following entry:

20 0 * * *  /usr/bin/curator delete indices --time-unit days --older-than 19 --timestring \%Y.\%m.\%d
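(The backslashes before % are required because cron treats a bare % specially.) To see which indices the job would target, here is a hypothetical sketch of the cutoff it computes, assuming the default daily logstash-YYYY.MM.DD index naming:

```shell
# Compute the index name for 19 days ago with GNU date; curator compares
# each daily index's date against this cutoff and deletes the older ones.
cutoff=$(date -d "19 days ago" +"logstash-%Y.%m.%d")
echo "indices dated before $cutoff are candidates for deletion"
```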

 

10. If port 5000 is not listening, start Logstash manually

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-lumberjack-input.conf

 

Note: the above was verified on CentOS 6. On CentOS 7, services are started with systemctl, and the Kibana startup entry is configured as a systemd unit in /etc/systemd/system/kibana4.service:

[Unit]
Description=Kibana 4

[Service]

ExecStart=/opt/kibana/bin/kibana
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=kibana4
User=root
Group=root
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

 

Using ELK (for reference only)

1. In ELK, log collection is the job of logstash-forwarder; configuration mainly means listing the directories whose logs should be collected, in the key configuration file logstash-forwarder.conf.

When logs under a watched path are updated, or new files appear, logstash-forwarder collects them automatically and ships them to Logstash.

Each entry also carries a log type name, which the Logstash parsing stage uses later: every log type gets its own parsing rules.

2. Logstash receives the logs and parses each type according to the user-defined format, in the key configuration files under /etc/logstash/conf.d.

The log types here correspond to the types collected by logstash-forwarder. Grok has many built-in patterns (listed at http://grokdebug.herokuapp.com/patterns#); if they do not cover your needs, you can define a custom pattern file and configure its path, after which your patterns work just like the built-in ones.
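For example, a custom pattern file might look like this (the path and pattern name below are hypothetical):

```
# /opt/logstash/patterns/custom -- one NAME REGEX pair per line
KEYWORDS_CODE \d{14}
```

It would then be registered in the grok filter with patterns_dir => ["/opt/logstash/patterns"] and used like a built-in, e.g. %{KEYWORDS_CODE:keywords_code}.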

For example, %{NUMBER:total_source_count:int} parses the numeric value at the current position, names it total_source_count, and sets its data type to int.

The matching pattern parses strictly according to the character order of each log line.

For example, given the input:

2015-07-20 17:52:41,052 INFO com.hylanda.statistic.mr.bean.TagStatisticResultHandler keywords_code 10000085412323 nagetive_source_count 1,neutral_source_count 0,positive_source_count 0,releaseDate 2015-06-08 05:42:31 release_date_day 2015-06-08 source_type 0 statisticTag 节目/负面/节目不好看 taskId 6572 total_feedback_count 0,total_repeat_count 0,total_source_count 1,total_weibo_original 0}

 

the parse result is:

{
                  "message" => "2015-07-20 17:52:41,052 INFO com.hylanda.statistic.mr.bean.TagStatisticResultHandler keywords_code 10000085412323 nagetive_source_count 1,neutral_source_count 0,positive_source_count 0,releaseDate 2015-06-08 05:42:31 release_date_day 2015-06-08 source_type 0 statisticTag 节目/负面/节目不好看 taskId 6572 total_feedback_count 0,total_repeat_count 0,total_source_count 1,total_weibo_original 0}",
                 "@version" => "1",
               "@timestamp" => "2015-07-21T03:32:42.260Z",
                     "host" => "localhost",
                  "package" => "com.hylanda.statistic.mr.bean.TagStatisticResultHandler",
            "keywords_code" => 10000085412323,
    "nagetive_source_count" => 1,
     "neutral_source_count" => 0,
    "positive_source_count" => 0,
         "release_date_day" => "2015-06-08",
              "source_type" => 0,
                   "taskId" => "6572",
     "total_feedback_count" => 0,
       "total_repeat_count" => 0,
       "total_source_count" => 1,
     "total_weibo_original" => 0
}

 

You can see that the fields have been parsed out with the specified names and types.
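As a rough illustration of this anchored, left-to-right extraction (a shell analogue, not Logstash itself):

```shell
# Like grok, the regex anchors on the literal text around the field name,
# then captures the digits that follow it (\K discards the anchor text).
line="total_repeat_count 0,total_source_count 1,total_weibo_original 0"
value=$(echo "$line" | grep -oP 'total_source_count \K[0-9]+')
echo "total_source_count = $value"   # -> total_source_count = 1
```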

3. With the data in place, the next step is visualization.

(1) On first access, Kibana asks you to set up an index.

(2) Once that is done, you can view the data on the Discover page.

(3) Note that every chart can be driven by Elasticsearch queries, which I have not yet fully mastered. There is also a lot of built-in support for JSON that needs further exploration.

 

Reposted from belinda407.iteye.com/blog/2229113