Part 1: ELK Setup
Official website: https://www.elastic.co/cn/
The Definitive Guide on the official site: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
Installation guide: https://www.elastic.co/guide/en/elasticsearch/reference/2.x/rpm.html
ELK is short for Elasticsearch, Logstash, and Kibana. These three tools are the core of the suite, but not all of it.
Elasticsearch is a real-time full-text search and analytics engine that provides three functions: collecting, analyzing, and storing data. It is an open, RESTful system with a Java API (among others), providing efficient search capabilities in a scalable, distributed architecture. It is built on top of the Apache Lucene search engine library.
Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (e.g. RabbitMQ), and JMX, and it can output data in a variety of ways, including e-mail, WebSockets, and Elasticsearch.
Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses the Elasticsearch REST interface to retrieve the data, and not only lets users create customized dashboard views of their data, but also lets them query and filter the data in ad hoc ways.
Environment
Two CentOS 6.5 hosts:
192.168.1.224 installs: Elasticsearch, Logstash, Kibana, Nginx, Redis
192.168.1.157 installs: Elasticsearch
(iptables and SELinux are disabled on both hosts)
Installation
Import the GPG key for the Elasticsearch yum repository (this needs to be configured on all servers):
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the Elasticsearch yum source by adding the following to an elasticsearch.repo file:
# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install the Elasticsearch environment
Install Elasticsearch:
# yum install -y elasticsearch
Install the Java environment (Java must be at least version 1.8):
1. tar zxvf ./jdk-8u151-linux-x64.tar.gz -C /usr/lib/jvm
2. vim /etc/bashrc and append the following at the end of the file:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_151
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
3. Reload the environment variables:
source /etc/bashrc
4. Verify that the installation succeeded:
java -version
echo $PATH
Create the Elasticsearch data directory and change its owner and group
# mkdir -p /data/es-data    (custom directory for storing the data)
# chown -R elasticsearch:elasticsearch /data/es-data
Change the owner and group of the Elasticsearch log directory
# chown -R elasticsearch:elasticsearch /var/log/elasticsearch/
Modify the Elasticsearch configuration file
# vim /etc/elasticsearch/elasticsearch.yml
# Find cluster.name in the configuration file, uncomment it, and set the cluster name
cluster.name: demon
# Find node.name, uncomment it, and set the node name
node.name: ywxi-1
# Change the data storage path
path.data: /data/es-data
# Change the log path
path.logs: /var/log/elasticsearch/
# Lock the memory so Elasticsearch does not use swap
bootstrap.memory_lock: true
# Network address to listen on
network.host: 0.0.0.0
# HTTP port to listen on
http.port: 9200
# Disable multicast discovery
discovery.zen.ping.multicast.enabled: false
# Specify the LAN IPs so the nodes can find each other and form a cluster
discovery.zen.ping.unicast.hosts: ["192.168.1.224", "192.168.1.157"]
Start the service
[root@al7 elasticsearch]# /etc/init.d/elasticsearch start
Starting elasticsearch: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid2616.log
[FAILED]
This error occurs because the default heap size is 2 GB and the virtual machine does not have that much memory. Change the parameters:
vim /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
Start again:
/etc/init.d/elasticsearch start
To check the service status, if there is an error you can look at the error log:
less /var/log/elasticsearch/demon.log    (the log file is named after the cluster)
Enable the service at boot:
# chkconfig elasticsearch on
Precautions
vim /etc/security/limits.conf
# Let the elasticsearch user lock memory
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[root@227 elasticsearch]# /etc/init.d/elasticsearch restart
Stopping elasticsearch: [FAILED]
Starting elasticsearch: Exception in thread "main" java.lang.IllegalStateException: marvel plugin requires the license plugin to be installed
    at org.elasticsearch.marvel.license.LicenseModule.verifyLicensePlugin(LicenseModule.java:37)
    at org.elasticsearch.marvel.license.LicenseModule.<init>(LicenseModule.java:25)
    at org.elasticsearch.marvel.MarvelPlugin.nodeModules(MarvelPlugin.java:89)
    at org.elasticsearch.plugins.PluginsService.nodeModules(PluginsService.java:263)
    at org.elasticsearch.node.Node.<init>(Node.java:179)
    at org.elasticsearch.node.Node.<init>(Node.java:140)
    at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:45)
Refer to the log for complete error details.
[FAILED]
To fix this error, install the license plugin:
/usr/share/elasticsearch/bin/plugin install license
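To confirm that the memory lock actually took effect after the restart, one hedged check is the nodes process API (standard in ES 2.x; the host is the one used throughout this setup):
# Should report "mlockall" : true in the process section
curl "http://192.168.1.224:9200/_nodes/process?pretty"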
Request port 9200 through a browser to check whether the service came up successfully
Check whether port 9200 is listening:
[root@al7 elasticsearch]# netstat -tnlp | grep 9200
tcp        0      0 :::9200                     :::*                        LISTEN      4760/java
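Beyond the port check, a quick way to verify that both nodes actually joined the cluster is the cluster health API (a minimal sketch using the hosts from this setup):
# curl 'http://192.168.1.224:9200/_cluster/health?pretty'
With both nodes up, the response should show "number_of_nodes" : 2 and a green or yellow status.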
How to interact with Elasticsearch
Java API
RESTful API (JavaScript, .NET, PHP, Perl, Python)
Use the API to check the status:
# curl -i -XGET 'localhost:9200/_count?pretty'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 95

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
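As a further illustration of the RESTful API, here is a quick round trip (a sketch; the index name demo-index is made up for this example, and the calls work on ES 2.x):
# Index a document
curl -XPUT 'localhost:9200/demo-index/test/1' -d '{"name": "elk"}'
# Retrieve it
curl -XGET 'localhost:9200/demo-index/test/1?pretty'
# Clean up the test index
curl -XDELETE 'localhost:9200/demo-index'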
Install plugins
1. Install the elasticsearch-head plugin:
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
# access http://192.168.1.224:9200/_plugin/head
2. Install the kopf ES monitoring plugin:
/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
# access http://192.168.1.224:9200/_plugin/kopf
Visit in a browser: http://192.168.1.224:9200/_plugin/head/
# ywxi-1 and ywxi-2 are my nodes; the other two nodes belong to colleagues
Using Logstash
Install the Logstash environment:
Official installation manual: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Import the GPG key for the yum source:
# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
# Configure the Logstash yum source by adding the following to logstash.repo
vim logstash.repo
[logstash-2.4]
name=Logstash repository for 2.4.x packages
baseurl=https://packages.elastic.co/logstash/2.4/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
# yum install -y logstash
Check the Logstash installation directory:
# rpm -ql logstash
# Create a symlink so you do not have to type the install path every time (it installs under /opt/logstash by default)
ln -s /opt/logstash/bin/logstash /bin/
Run a Logstash command:
# logstash -e 'input { stdin { } } output { stdout {} }'
Once it is running, type: nihao
The result returned on stdout:
Note:
-e                  execute the given configuration string
input { stdin }     standard-input plugin
output { stdout }   standard-output plugin
Output more detailed information via rubydebug:
# logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
Once it is running, type: nihao
The result printed on stdout:
What if you need to keep the output both on stdout and in Elasticsearch? See below:
# logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["192.168.1.224:9200"] } stdout { codec => rubydebug } }'
Once it is running, type: I am elk
The result returned (on stdout):
Using a configuration file with Logstash
Official guide: https://www.elastic.co/guide/en/logstash/current/configuration.html
Create the configuration file:
# vim /etc/logstash/conf.d/elk.conf
Add the following to the file:
input { stdin { } }
output {
    elasticsearch { hosts => ["192.168.1.224:9200"] }
    stdout { codec => rubydebug }
}
Run Logstash with the configuration file:
# logstash -f ./elk.conf
Once it is running, type some input; the stdout result follows.
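Before running a configuration file for real, it can be checked for syntax errors first; Logstash 2.x ships a --configtest flag for exactly this:
# logstash -f ./elk.conf --configtest
If the syntax is valid, it prints "Configuration OK".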
Logstash plugin types
1. Input plugins
Reference: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Using the file plugin:
# vim /etc/logstash/conf.d/elk.conf
# Add the following configuration
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.224:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
Run Logstash with the elk.conf configuration file to collect and match the logs:
# logstash -f /etc/logstash/conf.d/elk.conf
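One caveat with the file plugin: start_position => "beginning" only applies the first time Logstash sees a file; on later runs it resumes from the offset recorded in its sincedb. For repeated testing from the top of the file, a hedged variant is to point the sincedb at /dev/null:
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
        sincedb_path => "/dev/null"    # discard read offsets so every run starts at the beginning (testing only)
    }
}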
Now configure collection of the secure log as well, and store the log indices by type. Continue editing the elk.conf file.
# vim /etc/logstash/conf.d/elk.conf
Add the path of the secure log:
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.224:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.224:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}
Run Logstash with the elk.conf configuration file to collect and match the logs:
# logstash -f ./elk.conf
Once all these settings are working, the next step is to install Kibana so the data can be displayed in a front end.
Installing and using Kibana
Install the Kibana environment
Official installation manual: https://www.elastic.co/guide/en/kibana/current/install.html
Download the Kibana tar.gz package:
[root@al7 conf.d]# cd /usr/local/src/
[root@al7 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
Unpack the Kibana tarball:
[root@al7 src]# tar zxvf kibana-4.3.1-linux-x64.tar.gz
Move the unpacked Kibana into place:
[root@al7 src]# mv kibana-4.3.1-linux-x64 /usr/local/
# Create a symlink for Kibana in /usr/local
[root@al7 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
Edit the Kibana configuration file:
[root@al7 src]# vim /usr/local/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.224:9200"
kibana.index: ".kibana"
# Install screen so Kibana can run in the background (of course you can skip this and background it some other way)
[root@al7 src]# yum -y install screen
[root@al7 src]# screen /usr/local/kibana/bin/kibana
Ctrl-a d to detach
[root@al224 ~]# screen -ls
There is a screen on:
        2257.pts-0.al224   (Detached)
1 Socket in /var/run/screen/S-root.
Open a browser and set the corresponding index: http://IP:5601
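If the page does not load, first confirm Kibana is actually listening on port 5601 (the same netstat check used for ES above; PID and process name will differ on your machine):
[root@al7 src]# netstat -tnlp | grep 5601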
Add the ES indices to Kibana
Part 2: ELK in Practice
Good. Now that indices can be created, we can output the nginx, messages, and secure logs to the front end for display. (If Nginx is already installed, just modify its config; otherwise install it yourself.)
Edit the nginx configuration file and add the following inside the http block:
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
Change access_log to output in the JSON format just defined:
access_log logs/elk.access.log json;
Edit the Logstash configuration file to collect all three logs:
vim /etc/logstash/conf.d/full.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.224:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.224:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.224:9200"]
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
}
See how it runs:
logstash -f /etc/logstash/conf.d/full.conf
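After reloading nginx and sending a request or two, it is worth confirming the access log really is valid JSON before trusting the nginx entries; a minimal check with the Python that ships with CentOS 6:
# tail -1 /usr/local/nginx/logs/elk.access.log | python -m json.tool
If the line parses, pretty-printed JSON comes back; a parse error means the log_format needs fixing.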
You will find that the indices for all the collected logs already exist. Next, go to Kibana and create the corresponding log indices for display (create them the same way as above), and look at the result.
Load-test it with ab: ab -n 20000 -c 1000 http://192.168.1.224/
Specific log-output requirements call for specific analysis.
Part 3: The Ultimate ELK Setup
Install Redis:
# yum install -y redis
Modify the Redis configuration file:
# vim /etc/redis.conf
Change the following:
daemonize yes
bind 192.168.1.224
Start the Redis service:
# /etc/init.d/redis restart
Test whether Redis started successfully:
# redis-cli -h 192.168.1.224
Type info; if no error is reported, it is working:
redis 192.168.1.224:6379> info
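A shorter liveness check than reading through info is PING (standard redis-cli; host as configured above):
# redis-cli -h 192.168.1.224 ping
PONG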
Edit the redis-out.conf configuration file to store standard input into Redis:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input { stdin {} }
output {
    redis {
        host => "192.168.1.224"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
Run Logstash with the redis-out.conf configuration file:
# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Edit the redis-in.conf configuration file to output the data stored in Redis to Elasticsearch:
# vim /etc/logstash/conf.d/redis-in.conf
Add the following:
input {
    redis {
        host => "192.168.1.224"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.224:9200"]
        index => "redis-test-%{+YYYY.MM.dd}"
    }
}
Run Logstash with the redis-in.conf configuration file:
# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis-in.conf
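To confirm the stdin test data actually landed in Redis, check the length of the demo list (the -n 6 flag selects db 6, matching the config above):
# redis-cli -h 192.168.1.224 -n 6 llen demo
The reply should be (integer) N, where N is the number of lines you typed into stdin.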
Now modify the earlier configuration file so that all the monitored log sources are stored in Redis first, and then output from Redis to Elasticsearch. Edit full.conf as follows:
input {
    file {
        path => "/var/log/nginx/access_json.log"
        type => "nginx"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    if [type] == "nginx" {
        redis {
            host => "192.168.1.224"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "nginx"
        }
    }
    if [type] == "secure" {
        redis {
            host => "192.168.1.224"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "secure"
        }
    }
    if [type] == "system" {
        redis {
            host => "192.168.1.224"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system"
        }
    }
}
Run Logstash with the full.conf (shipper) configuration file:
# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/full.conf
Then check in Redis whether the data has been written. (Sometimes the monitored log files produce no new lines, in which case nothing gets written to Redis either; the sketch below shows how to check.)
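A sketch of that Redis check (db 6 and the key names follow the config above; keys * is fine here, but avoid it on large production instances):
# redis-cli -h 192.168.1.224 -n 6 keys '*'
# redis-cli -h 192.168.1.224 -n 6 llen system
Once each monitored file has produced at least one new line, the keys nginx, secure, and system should appear and their list lengths should grow.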
Note: the input side collects on the client; the output here likewise saves to Elasticsearch on 192.168.1.224. If you want to save to the current host instead, change hosts in the output to localhost; and if you also need to display the data in Kibana, deploy Kibana on this machine as well. Why do it this way? To achieve loose coupling: collect the logs on the client and write them to the server's Redis (or a local Redis), and on the output side simply connect to the ES server.
Run the command and see the effect:
# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
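The matching indexer configuration is not shown above; the following is a reconstruction sketched from the keys and index names already used (an assumption, not the original file). It reads the three Redis lists and writes each type to its own ES index; the type field set by the shipper travels with each event, so the conditionals still work:
input {
    # pull events back out of the three Redis lists written by full.conf
    redis { host => "192.168.1.224" port => "6379" db => "6" data_type => "list" key => "nginx" }
    redis { host => "192.168.1.224" port => "6379" db => "6" data_type => "list" key => "secure" }
    redis { host => "192.168.1.224" port => "6379" db => "6" data_type => "list" key => "system" }
}
output {
    if [type] == "nginx" {
        elasticsearch { hosts => ["192.168.1.224:9200"] index => "nginx-log-%{+YYYY.MM.dd}" }
    }
    if [type] == "secure" {
        elasticsearch { hosts => ["192.168.1.224:9200"] index => "nagios-secure-%{+YYYY.MM.dd}" }
    }
    if [type] == "system" {
        elasticsearch { hosts => ["192.168.1.224:9200"] index => "nagios-system-%{+YYYY.MM.dd}" }
    }
}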
Taking ELK into production
1. Log classification:
System logs     rsyslog    logstash syslog plugin
Access logs     nginx      logstash codec json
Error logs      file       logstash multiline (see the sketch after this list)
Runtime logs    file       logstash codec json
Device logs     syslog     logstash syslog plugin
Debug logs      file       logstash json or multiline
2. Log normalization: fixed paths, and the JSON format wherever possible
3. Roll out in order: system logs -> error logs -> runtime logs -> access logs
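For the error logs mentioned above, a hedged multiline sketch (the path is hypothetical; pattern and what are standard options of the multiline codec in Logstash 2.x) that folds indented continuation lines, such as Java stack traces, into the preceding event:
input {
    file {
        path => "/var/log/app/error.log"   # hypothetical application error log
        type => "error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\s"               # a line starting with whitespace...
            what => "previous"             # ...is appended to the previous event
        }
    }
}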
Because ES keeps logs permanently, you need to delete old logs periodically. The following command deletes the log index from the specified number of days ago:
curl -X DELETE http://xx.xx.com:9200/logstash-*-`date +%Y.%m.%d -d "-$n days"`
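To run the cleanup automatically, a minimal cron sketch (the 30-day retention, the script path, and the use of this setup's index prefixes are all assumptions; run it daily so each index is deleted once as it ages out):
#!/bin/bash
# /etc/cron.daily/es-purge.sh (hypothetical) -- delete the indices that are exactly $n days old
n=30
for prefix in nagios-system nagios-secure nginx-log; do
    curl -s -X DELETE "http://192.168.1.224:9200/${prefix}-$(date +%Y.%m.%d -d "-$n days")"
done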