ELK log analysis system deployment

How ELK works

ELK is actually three technologies:

Elasticsearch: the search engine of the stack; it indexes and stores the logs.

Logstash: collects the logs, processes the data, and outputs it to Elasticsearch.

Kibana: takes the data from Elasticsearch and is responsible for turning the log data into graphs displayed via a web page.

Case implementation

Lab environment

Node 1: 192.168.100.102, hostname node1, services: Elasticsearch, Logstash, Kibana

Node 2: 192.168.100.103, hostname node2, service: Elasticsearch

Apache server: 192.168.100.104, hostname centos7-04, services: Logstash, Apache

All three systems run CentOS 7 (VMware network: VMnet1).

node1 and node2 should each have more than 3 GB of memory.

Prepare the required packages in advance.

Preparing the Environment

1) Configure the hostname on the two ELK nodes, and configure name resolution

On 100.102:
[root@centos7-02 ~]# vim /etc/hostname    (delete the original content and write: node1)
[root@centos7-02 ~]# reboot
[root@node1 ~]# vim /etc/hosts    (append the following)
192.168.100.102 node1
192.168.100.103 node2

Then do the same on node2 (100.103): apart from the hostname being node2, the hosts entries are identical to those on 100.102.

After changing a hostname, be sure to reboot. Do not make these changes on the Apache server (100.104).
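As a quick sanity check, the hosts entries can be verified with a one-liner. The sketch below runs against a sample file so it can be tried anywhere; on the real nodes you would point awk at /etc/hosts itself (the sample path and variable names are illustrative):

```shell
# Hypothetical sample of the /etc/hosts entries added above.
cat > /tmp/hosts.sample <<'EOF'
192.168.100.102 node1
192.168.100.103 node2
EOF

# Print the IP mapped to each hostname to confirm the entries are correct.
node1_ip=$(awk '$2 == "node1" {print $1}' /tmp/hosts.sample)
node2_ip=$(awk '$2 == "node2" {print $1}' /tmp/hosts.sample)
echo "node1 -> $node1_ip"
echo "node2 -> $node2_ip"
```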

2) Check the Java environment on node1 and node2

On node1 (100.102), mount the system disc and configure the yum repository:
[root@node1 ~]# yum install java* -y
[root@node1 ~]# java -version    (output should resemble the following)
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

Repeat the same steps on node2 (100.103).

Deploying Elasticsearch

Elasticsearch needs to be deployed on both node1 and node2. Here node1 is used as the example; the operations on node2 are exactly the same.

On node1 (100.102):

1) Install Elasticsearch

Copy the rpm package in via Xshell:
[root@node1 ~]# rpm -ivh elasticsearch-5.5.0.rpm

2) Load the system service

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch

3) Modify the Elasticsearch configuration file

[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
(the file is all comments; you can add the following at the end, or modify the matching commented lines. The text after # below is explanation, do not type it)
cluster.name: my-application    # cluster name; node2 uses the same value
node.name: node1    # node name; node2 writes node2 here
path.data: /data/elk_data    # data storage path
path.logs: /var/log/elasticsearch    # log storage path
bootstrap.memory_lock: false    # do not lock memory at startup
network.host: 0.0.0.0    # bind address; 0.0.0.0 means all interfaces
http.port: 9200    # port
discovery.zen.ping.unicast.hosts: ["node1", "node2"]    # cluster members, written as hostnames
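Since the real elasticsearch.yml is almost entirely comments, it can be hard to see which settings are actually active. A simple filter shows only the effective lines; the sketch below runs against a small sample file (the path is illustrative) so it can be tried without the real file — on node1 you would use /etc/elasticsearch/elasticsearch.yml instead:

```shell
# Miniature stand-in for elasticsearch.yml, for demonstration only.
cat > /tmp/es.yml.sample <<'EOF'
# ======== Elasticsearch Configuration =========
cluster.name: my-application
node.name: node1

path.data: /data/elk_data
EOF

# grep -v '^#' drops comment lines, grep -v '^$' drops blanks,
# leaving only the settings that are in effect.
grep -v '^#' /tmp/es.yml.sample | grep -v '^$'
```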

Save and exit

4) Create the data directory and set ownership

[root@node1 ~]# mkdir -p /data/elk_data
[root@node1 ~]# chown elasticsearch:elasticsearch /data/elk_data/

5) Start Elasticsearch and check that it started successfully

[root@node1 ~]# systemctl start elasticsearch.service
(this service is particularly slow to start; after the command returns, wait half a minute to a minute before the port appears)
[root@node1 ~]# netstat -anpt | grep 9200
tcp6 0 0 :::9200 :::* LISTEN 1550/java

6) Remember to perform these five steps on node2 (100.103) as well; apart from the configuration file differences noted above, the operations are exactly the same.
7) Check node information

In a browser, open http://192.168.100.102:9200/
to display node1's information, as shown:
(screenshot)
Then open http://192.168.100.102:9200/_cluster/health?pretty
to review the cluster status; "status" : "green" means the cluster is running healthily:
(screenshot)
You can of course also check node2 by replacing the IP with 192.168.100.103; the result is much the same, so it is not demonstrated here.
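For reference, the green/yellow/red status can also be extracted on the command line instead of in a browser. The sketch below parses a trimmed sample of the _cluster/health response (the field names match Elasticsearch 5.x output; the file path is illustrative); on a live node you would feed it the output of `curl -s http://192.168.100.102:9200/_cluster/health` instead:

```shell
# Trimmed example of a _cluster/health response, saved locally so the
# parsing can be tried offline.
cat > /tmp/health.json <<'EOF'
{
  "cluster_name" : "my-application",
  "status" : "green",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2
}
EOF

# Pull out just the status field.
status=$(grep '"status"' /tmp/health.json | grep -oE 'green|yellow|red')
echo "cluster status: $status"
```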

Because the cluster status view above is not very friendly, you can install the Elasticsearch-head plugin.

Installing the Elasticsearch-head plugin

The plugin only needs to be installed on one node; this example uses node1 (100.102).
On node1 (100.102):
1) Compile and install Node.js (an environment dependency of the head tool); this takes a very long time, so be patient

[root@node1 ~]# tar zxf node-v8.2.1.tar.gz
[root@node1 ~]# cd node-v8.2.1
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make && make install

2) Install PhantomJS (another dependency of the head tool)

Copy the package in:
[root@node1 ~]# tar xvjf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/
[root@node1 /]# cd /usr/local/src/
[root@node1 src]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/

3) Install Elasticsearch-head

Copy the package in:
[root@node1 ~]# tar zxf elasticsearch-head.tar.gz
[root@node1 ~]# cd elasticsearch-head
[root@node1 elasticsearch-head]# npm install    (installs dependencies; this takes a while)

4) Modify the main Elasticsearch configuration file

[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml    (add the following two lines at the end)
http.cors.enabled: true    # enable cross-origin access support; default is false
http.cors.allow-origin: "*"    # domains allowed cross-origin access
[root @ node1 ~] # systemctl restart elasticsearch

5) Start the service. You must be in the directory where elasticsearch-head was unpacked; the process reads files from that directory and will otherwise fail to start.

This plugin runs as a separate service on port 9100.
[root@node1 ~]# cd /root/elasticsearch-head
[root@node1 elasticsearch-head]# npm run start &    (& runs it in the background so it does not occupy the command line)
[1] 63598    (when this appears, wait and do nothing; shortly the following output appears)
[root@node1 elasticsearch-head]#
> [email protected] start /root/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100    (once this appears, just press Enter)

[root@node1 elasticsearch-head]# netstat -anpt | grep 9100
tcp 0 0 0.0.0.0:9100 0.0.0.0:* LISTEN 63608/grunt
[root@node1 elasticsearch-head]# netstat -anpt | grep 9200    (the Elasticsearch service port)
tcp6 0 0 :::9200 :::* LISTEN 63490/java

6) Access the plugin in a browser at http://192.168.100.102:9100/
To view node2 instead, replace the IP in the URL with 100.103; of course, the plugin could also be installed on both nodes.
The result is as follows; note the cluster information shown in the figure:
(screenshot)

7) Insert an index with a command, then view it

On node1 (100.102):
[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello word"}'
(after successful execution, some output is printed)

Go back to the plugin web page just used and the index is already there; remember to refresh the page.
(screenshot)
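A common failure with this curl command is malformed JSON in the -d payload. A quick local sanity check before sending (this assumes python3 is available; any JSON validator works):

```shell
# The document body sent with -d above; validate it locally first.
payload='{"user":"zhangsan","mesg":"hello word"}'
if echo "$payload" | python3 -m json.tool > /dev/null 2>&1; then
    result="valid"
else
    result="invalid"
fi
echo "payload is $result JSON"
```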

Installing and using Logstash

Logstash is usually deployed on the servers whose logs need monitoring; in this case it should go on the Apache server (100.104).

Before the formal deployment, deploy Logstash on node1 to get familiar with how it is used. Logstash requires a Java environment, which was already installed above.

On node1 (100.102):
1) Install Logstash on node1

Copy the package in:
[root@node1 ~]# rpm -ivh logstash-5.5.1.rpm    (a bit slow; wait)
[root@node1 ~]# systemctl start logstash
[root@node1 ~]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

2) Test Logstash

[root@node1 ~]# logstash -e 'input { stdin{} } output { stdout{} }'
(see the figure below)
(screenshot)
[root@node1 ~]# logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
(note the figure below)
(screenshot)
[root@node1 ~]# logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.100.102:9200"] } }'
Go back to the Elasticsearch-head web interface and view the index and data just written, as follows:
(screenshots)

3) The Logstash configuration file

Next, modify a Logstash configuration file to additionally collect the system log /var/log/messages and output it to Elasticsearch.
[root@node1 ~]# chmod o+r /var/log/messages    (grant read permission to others, so logstash can read it)
[root@node1 ~]# ll /var/log/messages
-rw----r--. 1 root root 1003927 Nov 3 17:37 /var/log/messages
[root@node1 ~]# touch /etc/logstash/conf.d/system.conf
[root@node1 ~]# vim /etc/logstash/conf.d/system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.100.102:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

Save and exit; these configuration options are explained in more detail later.
[root@node1 ~]# systemctl restart logstash
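A note on the index name: Logstash expands the `%{+YYYY.MM.dd}` pattern to the event's date, so each day gets its own index. The same name can be reproduced with date(1):

```shell
# Build today's index name the same way Logstash does for
# index => "system-%{+YYYY.MM.dd}" (for events timestamped today).
idx="system-$(date +%Y.%m.%d)"
echo "$idx"
```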

The new data can then be seen on the Elasticsearch-head web page:
(screenshot)

Installing Kibana

Install Kibana on node1 (100.102); it could of course also be installed on another machine.

Copy the package in.

1) Install Kibana on node1 and enable it at boot

[root@node1 ~]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 ~]# systemctl enable kibana.service

2) Configure the main Kibana configuration file

[root@node1 ~]# vim /etc/kibana/kibana.yml    (for the following items, delete the leading # and set them as shown)
server.port: 5601    # Kibana port
server.host: "0.0.0.0"    # Kibana listen address
elasticsearch.url: "http://192.168.100.102:9200"    # connection to Elasticsearch
kibana.index: ".kibana"    # the .kibana index added in Elasticsearch
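When editing, it is easy to leave one of the four items still commented out. Below is a quick check that every required key is present and uncommented, shown here against a small sample file standing in for the real /etc/kibana/kibana.yml (the sample path is illustrative):

```shell
# Miniature stand-in for kibana.yml after the leading '#' marks
# have been removed from the four settings.
cat > /tmp/kibana.yml.sample <<'EOF'
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.100.102:9200"
kibana.index: ".kibana"
EOF

# Confirm each key appears at the start of a line (i.e. uncommented).
for key in server.port server.host elasticsearch.url kibana.index; do
    grep -q "^$key:" /tmp/kibana.yml.sample && echo "$key ok"
done
```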

3) Start the service

[root@node1 ~]# systemctl start kibana

4) Verify Kibana: in a browser, visit http://192.168.100.102:5601/

The first visit asks you to add an index; add the index created earlier, as below:
(screenshot)
After clicking, the index fields appear with some checkboxes ticked; those are the default fields and are not shown here.

Then view the data; a time range is needed, so check the time on node1:
[root@node1 ~]# date
Sun Nov 3 18:58:57 CST 2019

View the data; by default it may look like this:
(screenshot)
Again, pay attention to the time range selector:
(screenshot)
The data is shown below:
(screenshot)

Configuring the Apache server and monitoring its data

The steps above only served to get familiar with Logstash usage; what follows is the real monitoring.

Operate on 192.168.100.104:

Install Logstash to send the collected logs to Elasticsearch.

1) Install the httpd service, Logstash, and the Java environment

Mount the disc, configure yum, and copy the logstash package in:
[root@centos7-04 ~]# yum install httpd -y
[root@centos7-04 ~]# yum install java* -y
[root@centos7-04 ~]# java -version
[root@centos7-04 ~]# rpm -ivh logstash-5.5.1.rpm
[root@centos7-04 ~]# systemctl daemon-reload
[root@centos7-04 ~]# systemctl enable logstash

2) Write the Logstash configuration file apache_log.conf

[root@centos7-04 ~]# cd /etc/logstash/conf.d/
[root@centos7-04 conf.d]# touch apache_log.conf
[root@centos7-04 conf.d]# vim apache_log.conf
Type it in following the figure below (it is also written out afterwards so you can copy it):
(screenshot)
input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.100.102:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.100.102:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
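The output section above routes events by their `type` field: access events go to daily apache_access-* indices and error events to apache_error-*. The same branching can be sketched as a shell function (`route_index` is a hypothetical helper for illustration, not part of Logstash):

```shell
# Mirror of the if [type] == ... routing in apache_log.conf:
# given an event type, return the daily index it would be sent to.
route_index() {
    case "$1" in
        access) echo "apache_access-$(date +%Y.%m.%d)" ;;
        error)  echo "apache_error-$(date +%Y.%m.%d)" ;;
        *)      echo "unknown" ;;
    esac
}

route_index access
route_index error
```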

[root@centos7-04 conf.d]# /usr/share/logstash/bin/logstash -f apache_log.conf &
(even though it runs in the background, it still prints output; after running the command, wait while it prints a pile of messages, and once the line below appears, press Enter)
logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

3) Access test

Go back to the browser and refresh the elasticsearch-head plugin page:
(screenshot)
Then return to the Kibana page from earlier and add an index:
(screenshot)
Then add it:
(screenshot)
In the same manner, also add the following:
apache_error-*

Then restart the httpd service, access it, and view the data charts.

[root@centos7-04 ~]# systemctl restart httpd

Then, in the browser, visit http://192.168.100.104/ and refresh a few times.

Go back to the Kibana web page and view the data.

As below:
(screenshot)
View the indices just created:
(screenshot)

The experiment is complete!



Origin blog.csdn.net/weixin_45308292/article/details/102873814