ELK (Elasticsearch, Logstash, Kibana): building a real-time log analysis platform for microservices

        Logs mainly include system logs, application logs, and security logs. Operations staff and developers can use logs to learn about server software and hardware, and to track down configuration errors and their causes. Analyzing logs regularly helps you understand server load, performance, and security, so that problems can be corrected promptly.
        Logs are usually distributed across different devices. If you manage tens or hundreds of servers and still inspect logs the traditional way, by logging into each machine in turn, the process is cumbersome and inefficient. Centralized log management is essential: for example, the open source syslog can collect and aggregate logs from all servers. Once logs are centralized, however, searching and computing statistics over them becomes the next problem. Linux commands such as grep, awk, and wc can handle simple retrieval and counting, but for more demanding querying, sorting, and statistics across a large number of machines, this approach quickly falls short.
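        To illustrate the traditional approach, a typical ad-hoc pipeline looks like the following (the log path, and the assumption that the first field of each line is a date, are hypothetical):

# count ERROR lines per day, assuming the date is the first field of each line
grep "ERROR" /var/log/myapp/app.log | awk '{print $1}' | sort | uniq -c

# count ERROR lines in each rotated log file
grep -c "ERROR" /var/log/myapp/app.log*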
        The open source real-time log analysis platform ELK solves all of these problems. ELK consists of three open source tools: Elasticsearch, Logstash, and Kibana.

          Official website: https://www.elastic.co/cn/

        

        · A brief introduction to the technical architecture of the service:

  • Elasticsearch is an open source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, automatic search load balancing, and more.

  • Logstash is a completely open source tool that can collect, filter, and store your logs for later use (e.g., searching).

  • Kibana is also an open source, free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

  • Kafka: a message queue for receiving user logs.

       The ELK workflow is illustrated below:

       Logstash collects the logs generated by the AppServer and stores them in the Elasticsearch cluster, while Kibana queries data from the ES cluster, generates charts, and returns them to the browser.
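       In outline, the data flow just described is:

AppServer --(logs)--> Logstash --(parsed events)--> Elasticsearch cluster
Browser  <--(charts)-- Kibana  <--(query results)-- Elasticsearch cluster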

1. Preparing the system environment for the ELK build

 

1. Software environment

    System: CentOS 7

    ElasticSearch: 6.2.3

    Logstash: 6.2.3

    Kibana: 6.2.3

    Java: JDK8

2. ELK environment parameter configuration

        1. Set the hostname: open the file /etc/hostname and change its content to elk. Then stop the firewall (if the firewall cannot be stopped for other reasons, at least do not block port 80):

systemctl stop firewalld.service

        2. Disable the firewall from automatically starting:

systemctl disable firewalld.service 
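        To confirm the firewall is stopped and will stay off after a reboot, a quick sanity check:

systemctl status firewalld.service        # should report inactive (dead)
systemctl is-enabled firewalld.service    # should print "disabled"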

        3. Open the file /etc/security/limits.conf and add the following four lines:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
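        These values raise the open-file and process limits that Elasticsearch requires. They only apply to new login sessions, so after logging in again they can be verified with:

ulimit -Sn    # soft open-file limit, should now be 65536
ulimit -Hn    # hard open-file limit, should now be 131072
ulimit -Su    # soft max user processes, should now be 2048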

        4. Open the file /etc/sysctl.conf and add the following line:

vm.max_map_count=655360

        5. Apply the sysctl configuration by executing:

sysctl -p

 

# Fix for the "unknown key" errors that sysctl -p may report
[root@Docker ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 2576980377
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 1048576
net.core.wmem_max = 1048576

Solution:

Note: these errors can simply be ignored, or fixed by loading the bridge kernel module with the following commands (the net.bridge.* keys are unknown because the module is not loaded):

[root@Docker ~]# modprobe bridge
[root@Docker ~]# lsmod | grep bridge
bridge                 48077  0
stp                     2067  1 bridge
llc                     5352  2 bridge,stp
[root@oracle11gr2 Packages]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 2576980377
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 1048576
net.core.wmem_max = 1048576
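        Whichever way the warning is handled, confirm that the setting Elasticsearch actually depends on is in effect:

sysctl vm.max_map_count    # should print vm.max_map_count = 655360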

 

        6. Restart the machine (the changes above only take effect after a restart; skipping this step can lead to unexpected errors):

reboot

 

2. Building Elasticsearch

    

        Elasticsearch is a real-time distributed search and analytics engine that lets you explore your data at a speed and scale never before possible. It is used for full-text search, structured search, analytics, and combinations of all three.

        The following are some corporate use cases:

        · Wikipedia uses Elasticsearch to provide full-text search with highlighted snippets, as well as search-as-you-type and did-you-mean suggestions.

        · The Guardian uses Elasticsearch to incorporate online social data into visitor logs, giving its editors real-time public feedback on new articles.

        · Stack Overflow integrates geolocation queries into full-text search, and uses a more-like-this interface to find related questions and answers.

        · GitHub uses Elasticsearch to query 130 billion lines of code.

        Elasticsearch isn't just for giant corporations, however. It has also helped many startups, like Datadog and Klout, to prototype their ideas and turn them into scalable solutions. Elasticsearch can run on your laptop or scale to hundreds of servers to process petabytes of data.

        No single component in Elasticsearch is new or revolutionary. Full-text search has been possible for a long time, just like analytical systems and distributed databases have been around for a long time. The revolutionary result is the fusion of these separate, useful components into a single, consistent, real-time application. It has a low barrier to entry for beginners and is always there as your skills improve or your needs increase.

        2.1 Extract Elasticsearch


tar -zxvf elasticsearch-6.2.3.tar.gz

cd elasticsearch-6.2.3/
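        The archive is self-contained, but Elasticsearch 6 refuses to run as root, so it needs an unprivileged user. Below is a minimal sketch of starting the node; the user name elk and the /home/redhat/ELK install path (borrowed from the Kibana section later) are assumptions:

# create a dedicated user and give it the installation directory (user name is an assumption)
useradd elk
chown -R elk:elk /home/redhat/ELK/elasticsearch-6.2.3

# optionally set the listen address in config/elasticsearch.yml, e.g.:
#   network.host: 192.168.1.81

# start as the elk user; -d daemonizes the process
su - elk -c "/home/redhat/ELK/elasticsearch-6.2.3/bin/elasticsearch -d"

# verify the node answers
curl http://127.0.0.1:9200/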

        2.2 Install the Head plugin (optional):

# Installation method before elasticsearch 5
1. Download the zip from https://github.com/mobz/elasticsearch-head and extract it
2. Create the folder ${ES_HOME}/plugins/head/
3. Copy everything under the extracted elasticsearch-head-master folder into head
4. Start es
5. Open http://localhost:9200/_plugin/head/

        Head is a web front-end plugin for managing Elasticsearch. Since ES 5, the plugin has to be installed and run as an independent service (earlier versions could be installed directly into the es installation directory), which is why Node.js and npm need to be installed first.

# Installation method since elasticsearch 5
yum -y install nodejs npm
# If git is not installed yet, install it first:

yum -y install git
# Then install the elasticsearch-head plugin:

git clone https://github.com/mobz/elasticsearch-head.git
# After the clone finishes, enter the directory and install the dependencies:

cd elasticsearch-head/
npm install
# The plugin install is relatively slow; please be patient...

# Before starting the plugin, some configuration is needed.
# - Edit elasticsearch.yml and add the cross-origin (CORS) settings
#   (es must be restarted for them to take effect):
http.cors.enabled: true
http.cors.allow-origin: "*"

# Edit head/Gruntfile.js and change the address the server listens on
# (add a hostname property and set its value to *).
# Either of the following two configurations works.
# Option 1:
connect: {
        hostname: '*',
        server: {
                options: {
                        port: 9100,
                        base: '.',
                        keepalive: true
                }
        }
}

# Option 2:
connect: {
        server: {
                options: {
                        hostname: '*',
                        port: 9100,
                        base: '.',
                        keepalive: true
                }
        }
}


Edit head/_site/app.js and change the address head uses to connect to es, replacing localhost with the ES server's IP address:

# Original setting
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";

# Replace localhost with the ES server's IP address
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://IP:9200";

# Start elasticsearch-head
cd elasticsearch-head/ && ./node_modules/grunt/bin/grunt server

# To keep it running in the background, use nohup, &, screen, or similar. For starting
# at boot and keeping it running permanently, consider rc.local, supervisord, etc.
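        Once grunt reports the server is running, head listens on port 9100. As a quick sanity check (or open http://<server-ip>:9100 in a browser):

# head should now answer on port 9100
curl -I http://localhost:9100/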

 

        

3. Building Logstash

        1. Extract Logstash

tar -zxvf logstash-6.2.3.tar.gz

        2. Modify the configuration

        In the config folder of the logstash-6.2.3 directory, create a new configuration file named logstash.conf:

 

        

# Input option 1: listen on port 8080 for input (via beats)
#input {
#    beats {
#        port => "8080"
#    }
#}

# Input option 2: read input events from the specified files
input {
	file {
		codec => json
		path => "/home/redhat/ZYC_PAYMENT/PaymentWebApplication/build/*.json"
		# change this to the json log files your project writes
	}
}

# Data filtering
filter {
	grok {
		match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
	}
}

# Output to port 9200 on this machine, where the Elasticsearch service listens
output {
	elasticsearch {
		hosts => "127.0.0.1:9200"	# change this to your Elasticsearch address
	}
}
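        Before starting the service, the file can be checked for syntax errors using Logstash's built-in test flag:

# from the logstash-6.2.3 directory: parse the configuration and exit
bin/logstash -f config/logstash.conf --config.test_and_exit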

 

        Start the Logstash service in the background from the bin directory:

nohup ./logstash -f ../config/logstash.conf --config.reload.automatic &

        Check the startup log with tail -f ../logs/logstash-plain.log; if the log output looks normal and no errors appear, the startup succeeded.
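        Once Logstash has picked up some log lines, Elasticsearch will contain a dated logstash-* index holding them, which can be confirmed through the _cat API:

# list indices; a logstash-YYYY.MM.DD entry should appear once events flow in
curl 'http://127.0.0.1:9200/_cat/indices?v'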

4. Building Kibana

        1. Extract Kibana

tar -zxvf kibana-6.2.3-linux-x86_64.tar.gz

        2. Modify the configuration and start

        Open the Kibana configuration file kibana-6.2.3-linux-x86_64/config/kibana.yml and find the following line:

#server.host: "localhost"

        Change it to the following:

server.host: "192.168.1.81"

        This way, other computers can access the Kibana service from a browser.

        Enter the bin directory of Kibana: /home/redhat/ELK/kibana-6.2.3-linux-x86_64/bin

        Execute the startup command: nohup ./kibana & 

        View the startup log: tail -f nohup.out
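        Kibana listens on port 5601 by default, so a quick check from the shell before switching to the browser:

# Kibana should respond on its default port 5601
curl -I http://192.168.1.81:5601/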

      3. Visit http://192.168.1.81:5601 in a browser, and you should see the Kibana welcome page:

        

        At this point, all of the ELK services have been started successfully.

        4. Kibana Chinese localization (optional)


# download and apply the Chinese translation patch for Kibana
git clone https://github.com/anbai-inc/Kibana_Hanization.git
cd Kibana_Hanization/
python main.py /home/redhat/ELK/kibana-6.2.3-linux-x86_64/

# restart Kibana so the translation takes effect; systemctl restart only applies if
# Kibana runs as a systemd service. With the nohup start used above, kill the kibana
# process and run nohup ./kibana & again instead.
systemctl restart kibana

        After restarting, the Kibana interface is displayed as follows:

        

 

 
