Shenzhen Xinshi ELK Installation and Configuration

I. Experimental Environment

[Figures omitted: experimental topology]
node1: 192.168.180.101 (Elasticsearch, Kibana)
node2: 192.168.180.102 (Elasticsearch)
Apache server: httpd + Logstash
The required software packages are in the Baidu Cloud disk share linux-ELK.
II. Environment Preparation
1. Configure hostname resolution on both ELK nodes via the local hosts file.
Configuration on node1:
setenforce 0
systemctl stop firewalld
vim /etc/hostname
node1
hostname node1
bash
vim /etc/hosts
Add the following two lines:
192.168.180.101 node1
192.168.180.102 node2
Configuration on node2:
setenforce 0
systemctl stop firewalld
vim /etc/hostname
node2
hostname node2
bash
vim /etc/hosts
Add the following two lines:
192.168.180.101 node1
192.168.180.102 node2
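To confirm that resolution works, a quick ping between the nodes is enough (a minimal check; assumes ICMP is not blocked):
ping -c 3 node1
ping -c 3 node2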
2. Check the Java environment:
java -version
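If the command reports that Java is missing: Elasticsearch 5.5 requires Java 8, and on CentOS 7 it can be installed from the base repositories (a sketch, assuming OpenJDK is acceptable in your environment):
yum install java-1.8.0-openjdk -y
java -version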
3. Configure a static IP, gateway, and DNS, and make sure each node can access the Internet.

III. Installing Elasticsearch
1. Install the Elasticsearch package on both nodes:
rpm -ivh elasticsearch-5.5.0.rpm
systemctl daemon-reload
systemctl enable elasticsearch
2. Modify the main configuration file:
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
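After saving, the active (non-comment) settings can be double-checked at a glance with a simple filter:
grep -v "^#" /etc/elasticsearch/elasticsearch.yml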
3. Create the data storage path and assign ownership:
mkdir -p /data/elk_data
chown elasticsearch:elasticsearch /data/elk_data
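Quickly verify that the directory exists with the right owner:
ls -ld /data/elk_data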
4. Start Elasticsearch and check whether port 9200 is open:
systemctl start elasticsearch
netstat -antp | grep 9200
5. Check the node information in a browser:
http://node1:9200
http://node2:9200
http://node1:9200/_cluster/health?pretty
http://node1:9200/_cluster/state?pretty
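The same health check can be run from the shell with curl; with both nodes up, the cluster status should be green (assuming node2 has joined the cluster):
curl http://node1:9200/_cluster/health?pretty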
6. The configuration on node2 is essentially the same as in step 2; just replace node.name: node1 with node.name: node2.

IV. Installing the elasticsearch-head plugin on node1
1. Compile and install Node.js (the compilation takes a long time):
tar zvxf node-v8.2.1.tar.gz -C /usr/src
cd /usr/src/node-v8.2.1/
./configure && make && make install
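When the build finishes, confirm that node and npm landed on the PATH:
node -v
npm -v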
2. Install PhantomJS:
tar xvjf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src
cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin
3. Install elasticsearch-head:
tar zvxf elasticsearch-head.tar.gz -C /usr/local/src
cd /usr/local/src/elasticsearch-head/
npm install
4. Modify the Elasticsearch main configuration file:
vim /etc/elasticsearch/elasticsearch.yml
Append at the end:
http.cors.enabled: true
http.cors.allow-origin: "*"
Restart the Elasticsearch service:
systemctl restart elasticsearch
5. Start the elasticsearch-head plugin and test:
cd /usr/local/src/elasticsearch-head/
npm run start &
netstat -luntp | grep 9100
netstat -luntp | grep 9200
6. Browse to http://192.168.180.101:9100
7. Insert a test index:
curl -XPUT 'localhost:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
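The document can be read back with the standard GET document API to confirm it was indexed:
curl -XGET 'localhost:9200/index-demo/test/1?pretty'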
8. Open http://192.168.180.101:9100 in the browser again to see the index information.

V. Deploying Kibana on the master node node1
1. Install Kibana and enable it at boot:
rpm -ivh kibana-5.5.1-x86_64.rpm
systemctl enable kibana
2. Modify the Kibana main configuration file:
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.180.101:9200"
kibana.index: ".kibana"
3. Start the Kibana service:
systemctl start kibana
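Kibana listens on port 5601 by default; verify it the same way as Elasticsearch, then browse to http://192.168.180.101:5601:
netstat -antp | grep 5601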

VI. Deploying Logstash on the Apache server
1. Install and start Apache:
yum install httpd -y
systemctl start httpd
systemctl enable httpd
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
2. Install Logstash:
java -version
rpm -ivh logstash-5.5.1.rpm
systemctl daemon-reload
systemctl enable logstash
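Before writing the Apache pipeline, Logstash itself can be smoke-tested with an inline stdin-to-stdout configuration (the -e flag takes a config string; type a line and it is echoed back as an event, then exit with Ctrl+C):
/usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {} }'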
3. Create the Logstash configuration file apache_log.conf:
vim /etc/logstash/conf.d/apache_log.conf
input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.180.101:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.180.101:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
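The file's syntax can be validated without starting the pipeline (Logstash 5.x supports a test-and-exit flag):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apache_log.conf --config.test_and_exit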
4. Write a startup script:
vim /elk.sh
#!/bin/bash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apache_log.conf

chmod a+x /elk.sh
/elk.sh &
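With the pipeline running, generate a few requests so access_log actually has entries for Logstash to ship (a quick local test):
for i in $(seq 1 5); do curl -s http://localhost/ > /dev/null; done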
5. Log in to Kibana and click Create index pattern.


Extensions
Centralized log analysis platform - ELK Stack - X-Pack security solution:
http://www.jianshu.com/p/a49d93212eca
https://www.elastic.co/subscriptions
Elastic Stack Evolution:
http://70data.net/1505.html
How LinkedIn built a real-time log analysis system based on Kafka and Elasticsearch:
http://t.cn/RYffDoE
Using Redis as a log buffer in the Elastic Stack:
http://blog.lishiming.net/?p=463
Building a massive log analysis platform with ELK + Filebeat + Kafka + ZooKeeper:
https://www.cnblogs.com/delgyd/p/elk.html
Centralized log management for operations with ELK + ZooKeeper + Kafka:
https://www.jianshu.com/p/d65aed756587

Source: blog.csdn.net/drrui520/article/details/105261670