The first part of the ELK learning process

The problem we ran into was too many logs. Alerting on keywords such as error, trace, and warning was unmanageable: we were getting more than 2,000 log alerts a day. Looking for a way to solve this, we found that ELK handles log collection well, can analyze the logs, and makes further processing of the collected logs easy. So we decided to try ELK.
Having chosen the tooling, the next steps were deploying it and trying it out.

Preparation

Versions used (download address: https://www.elastic.co/downloads)
project        version  format
elasticsearch  6.2.4    rpm
kibana         6.2.4    rpm
filebeat       6.2.4    rpm

system preparation

hostname  system   installed services
host1     centos7  elasticsearch + kibana
host2     centos7  elasticsearch
host3     centos7  filebeat

Install elasticsearch

1. Download the elasticsearch rpm package and install (host1+host2)

yum install elasticsearch

2. Configure elasticsearch (host1+host2)
vim /etc/elasticsearch/elasticsearch.yml

cluster.name: es
node.name: host1    # or host2
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
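With two master-eligible nodes, the 6.x Zen discovery documentation also suggests pinning the minimum number of masters needed to elect, to guard against split-brain. A sketch of the extra line for the same file, using the usual (masters / 2) + 1 formula, which is 2 here:

```yaml
discovery.zen.minimum_master_nodes: 2
```

Note the trade-off: with only two nodes this also means the cluster cannot elect a master if either node drops out.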

3. Install x-pack for security authentication and permission management (host1+host2)

/usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack    # install x-pack
systemctl start elasticsearch    # make sure the service is running
/usr/share/elasticsearch/bin/x-pack/setup-passwords interactive    # accept the defaults with Enter and type an authentication password, e.g. elk

4. Check the cluster status

curl -u elastic:elk http://host1:9200/_cluster/health    # authenticate with the account and password set in the previous step

Return result:

{
  "cluster_name": "es",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 17,
  "active_shards": 34,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100.0
}
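The `status` field is the quickest thing to script against, e.g. for a cron health check. A minimal standalone sketch: it inlines a trimmed copy of the response above instead of calling curl, so it runs anywhere with python3 installed.

```shell
# In practice the response would come from something like:
#   curl -u elastic:elk http://host1:9200/_cluster/health > health.json
# Here a trimmed sample is inlined so the snippet runs standalone.
echo '{"cluster_name":"es","status":"green","number_of_nodes":2}' > health.json

# Pull out the status field; "yellow" or "red" would need attention
python3 -c 'import json; print(json.load(open("health.json"))["status"])'
# prints: green
```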

Install kibana

1. Download the kibana rpm package and install (host1)

yum install kibana

2. Install x-pack

/usr/share/kibana/bin/kibana-plugin install x-pack

3. Configure kibana (host1)
vim /etc/kibana/kibana.yml

elasticsearch.username: "elk"    # the account and password set when installing elasticsearch
elasticsearch.password: "elk"
elasticsearch.url: "http://localhost:9200"    # the elasticsearch address

4. Start kibana

systemctl start kibana

5. Set up a proxy
Because kibana listens on 127.0.0.1 by default, we can put nginx in front of it as a proxy. Skipping the nginx installation itself, here is the new nginx configuration file /etc/nginx/conf.d/es.conf:

server {
    listen *:1234;
    server_name host1;
    access_log /var/log/nginx/es_access.log;
    error_log /var/log/nginx/es_error.log;

    location / {
        proxy_pass http://127.0.0.1:5601;
    }
}
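An alternative to the proxy, if the nginx access logs are not needed, is to make kibana itself listen on all interfaces. A config sketch for /etc/kibana/kibana.yml (`server.host` and `server.port` are standard kibana settings):

```yaml
server.host: "0.0.0.0"    # listen on all interfaces instead of 127.0.0.1
server.port: 5601         # the default port, shown for clarity
```

The proxy approach above was kept because it centralizes access logging in nginx.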

6. Verify that the service works
Open http://host1:1234 in a browser. It prompts for credentials; enter the account and password set when installing x-pack for elasticsearch, and the kibana page should load normally.

Install filebeat

1. Download the filebeat rpm package and install (host3)

yum install filebeat

2. Configure filebeat (host3)
vim /etc/filebeat/filebeat.yml

filebeat.prospectors:    # log collection settings
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  tags: ["nova"]
output.elasticsearch:    # ship the data to elasticsearch
  hosts: ["host1:9200"]
  protocol: "http"
  username: "elk"
  password: "elk"
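More log sources can be collected side by side by adding further prospector entries to the list, each with its own tags for filtering in kibana later. A sketch under that assumption (the neutron path is a hypothetical second source, not part of the original setup):

```yaml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  tags: ["nova"]
- type: log                        # hypothetical second source
  enabled: true
  paths:
    - /var/log/neutron/*.log
  tags: ["neutron"]
```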

3. Start the service

systemctl start filebeat

4. Verify the service
Open the kibana interface and go to Management > Index Patterns > Create Index Pattern. Indices named filebeat-<version>-<date> should be listed, indicating that the data has been collected into the es cluster.

The next article will continue with the simple use of kibana.
