ELK log analysis system
----------------------------------------------------------
elasticsearch service port: 9200
test environment:
     IP address         hostname     notes
     192.168.200.67     elk-node1    at least 2G memory
     192.168.200.68     elk-node2    at least 2G memory
     192.168.200.69     apache       1G memory
----------------------------------------------------------
Building the ELK environment:
The purpose of creating multiple Elasticsearch nodes is to store multiple copies of the data. In a real production environment the number of nodes can be larger. In this experiment, elasticsearch and kibana are deployed together on node1; the deployment could also be distributed, i.e. logstash, elasticsearch and kibana each running on a different server.
Inside companies, for example some broadcasting and mapping companies, this setup is commonly used.
----------------------------------------------------------
Environment preparation
Turn off the firewall and SELinux on all machines:
iptables -F
setenforce 0
systemctl stop firewalld
[1] Configure name resolution on the two ELK nodes via the local /etc/hosts file (a quick verification sketch follows the two host configurations)
--------
Configuration on host 67 (elk-node1)
hostname elk-node1
bash
vim /etc/hostname
elk-node1
vim /etc/hosts
192.168.200.67 elk-node1
192.168.200.68 elk-node2
Save and exit
---------
Configuration on host 68 (elk-node2)
hostname elk-node2
bash
vim /etc/hostname
elk-node2
vim /etc/hosts
192.168.200.67 elk-node1
192.168.200.68 elk-node2
Save and exit
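A quick way to verify that name resolution works, run on both nodes (not part of the original steps; the hostnames are the ones configured above):
ping -c 2 elk-node1
ping -c 2 elk-node2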
---------
[2] Configure the java environment on both elk-node hosts
java -version    // the JDK that ships with the system is also usable
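If java is missing, a minimal sketch of installing it with yum (the package name java-1.8.0-openjdk assumes a CentOS/RHEL 7 base repository):
yum install -y java-1.8.0-openjdk    # assumption: CentOS/RHEL 7 base repo
java -version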
[3] Install the elasticsearch software
      [3.1] elasticsearch can be installed via yum, from source, or from an rpm package; here the rpm package is used, and both elk-node nodes need to install it
------- operate on host 67 -------
rpm -ivh elasticsearch-5.5.0.rpm
------- operate on host 68 -------
rpm -ivh elasticsearch-5.5.0.rpm
----------------
       [3.2] Reload systemd, set the service to start on boot, and edit the main configuration file; both nodes must do this
------- operate on host 67 -------
systemctl daemon-reload
systemctl enable elasticsearch.service
vim /etc/elasticsearch/elasticsearch.yml
 17 cluster.name: my-elk-cluster          # cluster name; nodes with the same cluster name join the same cluster
 23 node.name: elk-node1                  # node name
 33 path.data: /data/elk_data             # data storage path
 37 path.logs: /var/log/elasticsearch     # log storage path
 43 bootstrap.memory_lock: false          # do not lock memory at startup
 55 network.host: 0.0.0.0                 # IP address the service binds to; 0.0.0.0 means all interfaces
 59 http.port: 9200                       # listening port
 68 discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2"]   # names of the cluster nodes
## note (add the content below at the bottom of the file; it is part of what differs between node1 and node2)
 http.cors.enabled: true                  # enable cross-origin requests
 http.cors.allow-origin: "*"              # allow cross-origin access from any domain
Save and exit
------- operate on host 68 -------
systemctl daemon-reload
systemctl enable elasticsearch.service
vim /etc/elasticsearch/elasticsearch.yml
 17 cluster.name: my-elk-cluster
 23 node.name: elk-node2
 33 path.data: /data/elk_data
 37 path.logs: /var/log/elasticsearch
 43 bootstrap.memory_lock: false
 55 network.host: 0.0.0.0
 59 http.port: 9200
 68 discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2"]
Save and exit
[4] Create the data storage path on both nodes and set its ownership
------- operate on host 67 -------
mkdir -p /data/elk_data
chown elasticsearch:elasticsearch /data/elk_data/
------- operate on host 68 -------
mkdir -p /data/elk_data
chown elasticsearch:elasticsearch /data/elk_data/
[5] Start elasticsearch on both nodes and check whether it started successfully.
------- operate on host 67 -------
systemctl start elasticsearch.service
netstat -antp | grep 9200
# on the first start, if the preceding steps were done correctly but the port does not show up yet, wait a while and then filter again
netstat -antp | grep 9200
------- operate on host 68 -------
systemctl start elasticsearch.service
netstat -antp | grep 9200
By default elasticsearch serves HTTP on port 9200; communication between nodes uses TCP port 9300.
[6] Access the nodes through a browser to see the node information
192.168.200.67:9200
192.168.200.68:9200
[7] View the cluster health status; a green status means the cluster is healthy
http://192.168.200.67:9200/_cluster/health?pretty
http://192.168.200.68:9200/_cluster/state?pretty
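The same health check can be done from the command line with curl; a quick sketch (the fields shown are standard elasticsearch cluster-health fields, actual values depend on your cluster):
curl -XGET 'http://192.168.200.67:9200/_cluster/health?pretty'
# the output should contain fields such as:
#   "cluster_name" : "my-elk-cluster",
#   "status" : "green",
#   "number_of_nodes" : 2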
[8] The cluster status above is not very intuitive, so install the elasticsearch-head plugin to make cluster management easier. Installing it from source would be very slow, so the binary package is used here.
   [8.1] elasticsearch-head runs as a standalone service and needs node (a JavaScript runtime based on Chrome's V8 engine) and the npm command; both nodes must do this
------- operate on host 67 -------
tar xf node-v8.2.1-linux-x64.tar.gz -C /usr/local/
ln -s /usr/local/node-v8.2.1-linux-x64/bin/node /usr/bin/node
ln -s /usr/local/node-v8.2.1-linux-x64/bin/npm /usr/local/bin
node -v
v8.2.1     # check the version
npm -v
5.3.0      # check the version
------- operate on host 68 -------
tar xf node-v8.2.1-linux-x64.tar.gz -C /usr/local/
ln -s /usr/local/node-v8.2.1-linux-x64/bin/node /usr/bin/node
ln -s /usr/local/node-v8.2.1-linux-x64/bin/npm /usr/local/bin
node -v
v8.2.1     # check the version
npm -v
5.3.0      # check the version
------------------
   [8.2] Install elasticsearch-head as a standalone service and run it in the background;
it starts a local web server on port 9100 that serves elasticsearch-head
------- operate on host 67 -------
tar xf elasticsearch-head.tar.gz -C /data/elk_data/    # extract into the elasticsearch data directory configured earlier
cd /data/elk_data/
chown -R elasticsearch:elasticsearch elasticsearch-head/
cd /data/elk_data/elasticsearch-head/
npm install
# errors here are normal and can be ignored
cd _site/
pwd
cp app.js{,.bak}
vim app.js
4329 this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.200.67:9200";   ## change to your own IP
Save and exit
npm run start &
systemctl start elasticsearch.service
netstat -anpt | grep 9100
------- operate on host 68 -------
tar xf elasticsearch-head.tar.gz -C /data/elk_data/    # extract into the elasticsearch data directory configured earlier
cd /data/elk_data/
chown -R elasticsearch:elasticsearch elasticsearch-head/
cd /data/elk_data/elasticsearch-head/
npm install
# errors here are normal and can be ignored
cd _site/
pwd
cp app.js{,.bak}
vim app.js
4329 this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.200.68:9200";   ## change to your own IP
Save and exit
npm run start &
systemctl start elasticsearch.service
netstat -anpt | grep 9100
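An optional sketch to confirm the http.cors settings added earlier are in effect (the Origin header value is just an example):
curl -I -H "Origin: http://192.168.200.67:9100" http://192.168.200.67:9200
# a response header like "access-control-allow-origin: *" should appear if cross-origin access is enabled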
   [8.3] Test in a browser
http://192.168.200.67:9100
http://192.168.200.68:9100
    [8.4] Insert a test index and verify it on the test page; the index is index-demo with type test, and you can see it is created successfully. Indexes can be created from a single node; in practice this is usually done by developers rather than by operations.
------- operate on host 67 -------
curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
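To confirm the document was stored, it can be fetched back with curl as well (a sketch using the same index, type and id as above):
curl -XGET 'localhost:9200/index-demo/test/1?pretty'
# the response should contain "_source" : { "user" : "zhangsan", "mesg" : "hello world" }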
    [8.5] Refresh the browser page and the new index can be seen
http://192.168.200.67:9100
[9] Install logstash
logstash generally needs to be deployed on the server whose logs it monitors. In this experiment logstash is deployed on the apache server to collect the apache server's log information and send it to elasticsearch. Before that formal deployment, logstash is first deployed on node1; logstash also needs a java environment.
[9.1] Install on elk-node1
------- operate on host 67 -------
rpm -ivh logstash-5.5.1.rpm
systemctl start logstash.service
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
[9.2] The logstash command
logstash collects, processes, and outputs logs in a pipeline, somewhat like linux pipes xxx | ccc | ddd: xxx runs first, then ccc, then ddd.
A logstash pipeline consists of three stages: input, filter, output.
Common command-line options:
-f: specify a logstash configuration file; logstash is configured according to that file
-e: the string that follows is used as the logstash configuration (if "" is given, stdin is used as the default input and stdout as the default output)
-t: test whether the configuration file is correct, then exit
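For example, a configuration file can be syntax-checked before logstash is started with it; a sketch using the system.conf created later in step [12]:
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/system.conf
# "Configuration OK" indicates the file parses correctly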
------- operate on host 67 -------
logstash -e 'input { stdin {} } output { stdout {} }'
### warning messages can be ignored; once 9600 appears, startup succeeded and you can type input, and the error messages above can be ignored
www.baidu.com
www.sina.com.cn
Finally press Ctrl+C to exit
------------
logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
# once 9600 appears, startup succeeded; type input
www.baidu.com
www.sina.com.cn
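Roughly, the rubydebug codec prints each event as a structured hash like the sketch below (the field names are the standard logstash event fields; timestamp and host values will differ):
{
       "message" => "www.baidu.com",
      "@version" => "1",
    "@timestamp" => 2019-11-28T08:30:00.000Z,
          "host" => "elk-node1"
}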
------- operate on host 67 -------
logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.200.67:9200"] } }'
# type the 3 lines below
www.baidu.com
www.sina.com.cn
www.google.com
[10] Check in the browser
192.168.200.67:9100
// check whether the information for www.baidu.com, www.sina.com.cn and www.google.com can be seen
[11] Using logstash configuration files
A logstash configuration file essentially consists of three parts: input, output, and a filter that the user adds only when needed, so the standard configuration file format is as follows (a filter example is sketched after the block below):
input {...}
filter {...}
output {...}
In each section more than one access method can be specified; for example, to use two log files as sources you can write:
input {
    file { path => "/var/log/messages" type => "syslog" }
    file { path => "/var/log/apache/access.log" type => "apache" }
}
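As one illustrative sketch (not part of the original steps), a grok filter with the built-in COMBINEDAPACHELOG pattern would parse apache access-log lines into separate fields before they are sent to elasticsearch:
input {
    file { path => "/var/log/apache/access.log" type => "apache" }
}
filter {
    grok {
        # split each apache log line into fields such as clientip, verb and response
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    elasticsearch { hosts => ["192.168.200.67:9200"] }
}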
[12] Configure collection of system logs
Put system.conf into the /etc/logstash/conf.d/ directory; logstash loads it at startup
------- operate on host 67 -------
cd /etc/logstash/conf.d
vim system.conf
input {
 file {
     path => "/var/log/messages"
     type => "system"
     start_position => "beginning"
  }
}
output {
 elasticsearch {
  hosts => ["192.168.200.67:9200"]
  index => "system-%{+YYYY.MM.dd}"
 }
}
Save and exit
systemctl restart logstash
cd
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf    // load the file
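Whether the new index was created can also be checked from the command line; a sketch using the standard _cat/indices API (the exact index name carries the current date):
curl 'http://192.168.200.67:9200/_cat/indices?v' | grep system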
[13] View in a browser
192.168.200.67:9100    // check the index
[14] Install kibana
------- operate on host 67 -------
rpm -ivh kibana-5.5.1-x86_64.rpm
systemctl enable kibana.service
vim /etc/kibana/kibana.yml
2 server.port: 5601
7 server.host: "0.0.0.0"
21 elasticsearch.url: "http://192.168.200.67:9200"
30 kibana.index: ".kibana"
save and exit
systemctl start kibana.service
netstat -lnpt | grep 5601
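A quick command-line sketch to confirm kibana is answering before opening it in the browser:
curl -I http://192.168.200.67:5601
# any HTTP response on port 5601 indicates kibana is up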
[15] Access in a browser
192.168.200.67:9100               // check the index
http://192.168.200.67:5601        // view the kibana log analysis interface
--------------------------
[16] Collect the apache access log
------- operate on host 69 -------
 Note: logstash must be installed on whichever host's logs are to be collected
iptables -F
setenforce 0
systemctl stop firewalld
hostname apache
bash
yum install -y httpd
java -version
rpm -ivh logstash-5.5.1.rpm
systemctl enable logstash.service
systemctl start httpd.service
cd /etc/logstash/conf.d/
vim apache_log.conf
input {
 file {
       path => "/var/log/httpd/access_log"    # location of the apache access log
       type => "access"
       start_position => "beginning"
 }
 file {
       path => "/var/log/httpd/error_log"
       type => "error"
       start_position => "beginning"
 }
}
output {
 if [type] == "access" {
   elasticsearch {
     hosts => ["192.168.200.67:9200"]
     index => "apache_access-%{+YYYY.MM.dd}"
   }
 }
 if [type] == "error" {
   elasticsearch {
     hosts => ["192.168.200.67:9200"]
     index => "apache_error-%{+YYYY.MM.dd}"
   }
 }
}
Save and exit
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apache_log.conf
## wait until 9600 appears
Access apache in a browser:
192.168.200.69
Then check 192.168.200.67:9100 and you will see that an apache access log index has been added.
Add an index pattern for the apache indexes in kibana.
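As a command-line alternative to the browser checks above, a sketch that generates one access-log entry and then looks for the apache indexes:
curl -s http://192.168.200.69/ > /dev/null                      # hit apache once so access_log gets a new line
curl 'http://192.168.200.67:9200/_cat/indices?v' | grep apache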

------------------ Experiment complete! ------------------
 
 

 
