Detailed ELK Installation and Deployment

I. Introduction

Logs include system logs and application logs. By reviewing logs, operations staff and developers can learn about a server's hardware and software state, check the application or system for failures, and understand the causes of a failure in order to fix it. Analyzing logs gives a clearer picture of a server's health and security posture, which helps keep it running stably.

Logs, however, are usually stored on the servers that produce them. If you manage dozens of servers, you have to log in to each one just to review its logs, which is tedious and inefficient. You can aggregate logs with the rsyslog service, but searching the aggregated data or computing statistics over it is still cumbersome: it is generally done with Linux commands such as grep, awk, wc, and sort, and when the volume of logs is huge this manual approach stays slow.
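
For example, a typical hand-rolled statistic, the top ten client IPs in an nginx access log, looks like this (the log path varies by distribution):

awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head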

Complete log data is therefore very valuable to us. By collecting and aggregating logs we can:

  1. Find information. By searching the log data for the relevant errors, bugs can be located and fixed quickly.
  2. Analyze data. Once log entries are cleaned up and formatted consistently, the data can be analyzed and aggregated further, for example to pick out top headlines, hot spots, or best-selling models.
  3. Maintain systems. Log analysis reveals the load and operating state of each server, so servers can be tuned in a targeted way.

II. ELK Overview

The ELK real-time log analysis system is a complete solution to the problems above. ELK is free, open-source software, and it is backed by a strong team and community that keep it continuously updated.

ELK consists mainly of three open-source tools, Elasticsearch, Logstash, and Kibana, plus Beats, a family of lightweight, purpose-built data shippers used to collect data.

Elasticsearch: a distributed search engine. It is highly scalable, highly reliable, and easy to manage. It can be used for full-text search, structured search, and analytics, or any combination of the three. Elasticsearch is built on Lucene and developed in Java, and is now one of the most widely used open-source search engines; Wikipedia, Stack Overflow, GitHub, and others build their search on it.

In an Elasticsearch cluster, all data nodes are equal peers.
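
As a small illustration of the search API (a sketch only: it assumes a node listening on localhost:9200 and a hypothetical index named test; _doc is the conventional type name in the 6.x line):

# index a document
curl -X PUT 'http://localhost:9200/test/_doc/1' \
     -H 'Content-Type: application/json' \
     -d '{"message": "hello elasticsearch"}'

# full-text search for it
curl 'http://localhost:9200/test/_search?q=message:hello&pretty'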

Logstash: a data collection and processing engine. It dynamically ingests data from a variety of sources, filters, parses, enriches, and reformats it into a unified structure, and then ships it off for storage and later use.

Kibana: a visualization platform. It searches and displays the data indexed in Elasticsearch, making it easy to present and analyze data with charts, tables, and maps.

Filebeat: a lightweight log shipper. Compared with Logstash, Filebeat's footprint on system resources is almost negligible. It is derived from the source code of the original logstash-forwarder; in other words, Filebeat is the successor to logstash-forwarder and is the first choice for the agent role in the ELK Stack.

Version note:

The installed versions of Elasticsearch, Logstash, Kibana, and Filebeat must all match, otherwise Kibana may fail to display its web page.

ELK workflow:

  1. Filebeat collects logs on the APP server side.
  2. Logstash processes and filters the logs that Filebeat has collected.
  3. Elasticsearch stores the processed logs provided by Logstash, for retrieval and statistics.
  4. Kibana provides a web page that visualizes the data in Elasticsearch.


III. ELK Installation and Deployment

1. Configure the JDK environment

 # download the JDK yourself
 rpm -ivh  jdk-8u144-linux-x64.rpm
 # or: yum install java-1.8.0-openjdk*
 vim /etc/profile.d/java.sh
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64
    export PATH=$JAVA_HOME/jre/bin:$PATH
 
 source /etc/profile.d/java.sh
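
A quick sanity check that the JDK is active (the exact output depends on the build installed):

 java -version
 echo $JAVA_HOME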

2. Configure the ELK yum repository

vim /etc/yum.repos.d/ELK.repo
[ELK]
name=ELK-Elasticstack
baseurl=https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-6.x/
gpgcheck=0
enabled=1

# disable SELinux
setenforce 0
sed -i.bak 's@^SELINUX=\(.*\)@SELINUX=disabled@' /etc/selinux/config

# stop the firewall
# CentOS 7
systemctl disable firewalld
systemctl stop firewalld
# CentOS 6
service iptables stop
chkconfig iptables off
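
Before installing, you can confirm the repository is usable (the versions listed depend on the mirror's current contents):

yum clean all && yum makecache
yum list elasticsearch kibana logstash filebeat --showduplicates | tail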

3. Deploy Elasticsearch

1. Install Elasticsearch

yum install elasticsearch

# modify the system limits for the elasticsearch user
#   vim  /etc/security/limits.conf 
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nofile 65536
elasticsearch hard nofile 131072
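
One way to spot-check that the new limits apply to the elasticsearch user (whether su picks up pam_limits depends on your PAM configuration, so treat this as a rough check):

su -s /bin/bash elasticsearch -c 'ulimit -l -n'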


2. Modify the configuration file

#  vim /etc/elasticsearch/elasticsearch.yml
# cluster name
cluster.name: els
# node name
node.name: els-1
# data path
path.data: /data/els_data
# log path
path.logs: /data/log/els
# lock the heap specified in jvm.options in memory; never swap it out
bootstrap.memory_lock: true
# bind IP address
network.host: 172.16.1.49
# port
http.port: 9200


# cluster setup: list the cluster nodes here and they will be discovered automatically
#  discovery.zen.ping.unicast.hosts: ["host1", "host2"]
# on the other cluster nodes, only node.name and the bind address need to change

#  vim /etc/elasticsearch/jvm.options
-Xms1g  # heap size; the two values must be identical, here both 1g
-Xmx1g

3. Create the data directories

# create the Elasticsearch data and log directories and change their owner to elasticsearch
mkdir -p /data/els_data
mkdir -p /data/log/els
chown -R elasticsearch:elasticsearch /data/els_data
chown -R elasticsearch:elasticsearch /data/log/els

4. Start Elasticsearch

systemctl start elasticsearch
# the service shuts down right after starting
# error in the log:
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked

# commenting out bootstrap.memory_lock: true in the configuration file disables memory locking; the service then starts successfully
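
If you would rather keep memory locking enabled instead of commenting it out, the memlock limit can be raised for the systemd unit (a sketch, assuming the stock elasticsearch unit shipped with the RPM):

systemctl edit elasticsearch
# in the editor that opens, add:
#   [Service]
#   LimitMEMLOCK=infinity
systemctl daemon-reload
systemctl restart elasticsearch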


After a successful start, visit 172.16.1.49:9200. Elasticsearch answers with a JSON banner showing the node name and version.

Elasticsearch API

  • Cluster health: http://172.16.1.100:9200/_cluster/health?pretty
  • Node status: http://172.16.1.100:9200/_nodes/process?pretty
  • Shard status: http://172.16.1.100:9200/_cat/shards
  • Index shard storage info: http://172.16.1.100:9200/index/_shard_stores?pretty
  • Index stats: http://172.16.1.100:9200/index/_stats?pretty
  • Index metadata: http://172.16.1.100:9200/index?pretty
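
For example, querying cluster health from the shell (substitute your node's address; a status of green or yellow means the cluster is answering requests):

curl 'http://172.16.1.49:9200/_cluster/health?pretty'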

4. Deploy Kibana

Kibana is written in Node.js and does not need a Java environment; it can be installed directly.

1. Install Kibana

yum install kibana
# the version must match Elasticsearch


2. Configure Kibana

vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.16.1.50"
elasticsearch.url: "http://172.16.1.49:9200"
kibana.index: ".kibana"
logging.dest: /data/log/kibana/kibana.log # where Kibana writes its log

# create the log directory and file
mkdir -p /data/log/kibana/
touch /data/log/kibana/kibana.log
chmod o+rw  /data/log/kibana/kibana.log
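
Start the service and confirm it is listening (assuming the stock kibana systemd unit):

systemctl start kibana
ss -tnlp | grep 5601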

Visit 172.16.1.50:5601.

If the page shows "Kibana server is not ready yet",

the Kibana and Elasticsearch version numbers do not match.

# check the installed versions
rpm -qa elasticsearch kibana

If the versions are exactly the same, just refresh the page a few more times.

Once Kibana has started successfully, visiting the page shows the Kibana welcome screen.

5. Deploy Logstash

Logstash needs the same Java environment as Elasticsearch: Java 8 or later.

1. Install Logstash

yum install logstash-"Version"   # e.g. logstash-6.6.0, matching the rest of the stack

2. Modify the configuration file

# vim /etc/logstash/logstash.yml
http.host: "172.16.1.229"
http.port: 9600-9700

3. Configure a pipeline to collect nginx logs

  1. Modify the nginx log format
log_format main '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"request":"$request",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"url":"$uri",'
'"referer":"$http_referer",'
'"agent":"$http_user_agent",'
'"status":"$status"}';

access_log /var/log/nginx/access.log  main;  # this path must match the Logstash input below
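
After editing nginx.conf, validate the syntax and reload nginx so the new format takes effect:

nginx -t && nginx -s reload
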
  2. Configure the Logstash pipeline that collects the logs (saved as /etc/logstash/conf.d/nginx.conf, used below)
input {
        file {
                type => "nginx-log"
                path => ["/var/log/nginx/access.log"]
                start_position => "beginning"
                sincedb_path => "/dev/null"
        }
}

output {
        elasticsearch {
                hosts => ["172.16.1.49:9200"]
                index => "nginx-log-%{+YYYY.MM}"
        }
}

4. Test the configuration file

cd /usr/share/logstash/bin
./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf  --config.test_and_exit

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-02-20T17:34:29,949][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK # the configuration file is valid and ready to use
[2019-02-20T17:34:39,048][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

The WARN messages do not affect operation.

Running the same command without --config.test_and_exit starts Logstash with this pipeline:

./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf
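
For normal operation, the systemd unit shipped with the RPM can be used instead of running the binary by hand:

systemctl start logstash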

6. Deploy Filebeat

Logstash now gets all of its data from Beats; it no longer has to fetch data from the sources itself.

Previously, Logstash itself was used as the log collector, but it consumes far more resources than Beats, which are much lighter, so the official recommendation is to use Beats for log collection. Beats are also extensible and support custom builds.

yum install filebeat-6.6.0

1. Modify the Filebeat configuration file

vim /etc/filebeat/filebeat.yml

- type: log
  paths:
    - /Log_File # e.g. /var/log/messages

#output.elasticsearch: # comment out the elasticsearch output
  # hosts: ["localhost:9200"]

output.console: # add console output so events print to the current terminal
  enabled: true

2. Test filebeat

/usr/share/filebeat/bin/filebeat -c  /etc/filebeat/filebeat.yml # run filebeat in the foreground; the log events appear on the current terminal

3. Modify the configuration to send logs to Elasticsearch

- type: log
  paths:
    - /Log_File # e.g. /var/log/messages

output.elasticsearch: # re-enable the elasticsearch output (and remove the console output added above)
  hosts: ["172.16.1.49:9200"]

Start Filebeat: systemctl start filebeat

Run curl '172.16.1.49:9200/_cat/indices?v' to view the log indices.


4. Configure the index pattern in Kibana

In the Kibana web UI, create an index pattern that matches the new index; the collected logs can then be browsed on the Discover page.

5. Output the logs to Logstash

#------------------- Logstash output ----------------------
output.logstash:  # enable this and comment out the elasticsearch output above
  # The Logstash hosts
  hosts: ["172.16.1.229:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

6. Configure Logstash

vim /etc/logstash/conf.d/test.conf
input {
        beats {
                port => 5044
        }
}
#filter{}  # optional filter section for parsing or transforming log fields
output {
        stdout {
                codec => rubydebug
        }
        elasticsearch {
                hosts => "172.16.1.49:9200"
                index => "test-log-%{+YYYY.MM.dd}"
        }
}


The log index now appears in Elasticsearch, and it can be viewed in Kibana as well.

A yellow status means no replica shards are available, because this test setup has only a single Elasticsearch node rather than two.

You can also point Filebeat at a custom file and append data to it by hand to watch the whole pipeline work, as sketched below.
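
A minimal end-to-end check (assuming the /Log_File path and test-log index configured above):

echo "pipeline test $(date)" >> /Log_File
# give Filebeat and Logstash a few seconds, then search for the entry
curl 'http://172.16.1.49:9200/test-log-*/_search?q=pipeline&pretty'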
