ELK distributed platform: ES cluster installation and plugins, Kibana installation and use, Logstash configuration and extension

ELK overall workflow

ELK is a solution named after the acronym of three software products:
Elasticsearch: responsible for log storage and retrieval
Logstash: responsible for log collection, parsing, and processing
Kibana: responsible for log visualization

In operations, the ELK stack builds a log system for massive amounts of data and can be used for:
centralized querying and management of distributed log data
system monitoring, covering the hardware and every application component
troubleshooting
security information and event management
reporting

The ELK setup is as follows:

I. Install the ES cluster (operate from the jump host)

Environment: one jump host and five Elasticsearch cluster machines.
Prerequisite: Ansible is installed and an elasticsearch.yml template is available; otherwise install each node manually.
1. Modify the /etc/hosts file so that every host can be pinged by hostname.
2. Write the Ansible playbook

[root@ecs-mao1 ~]# cat eess.yml 
---
- hosts: eess   # target node group
  tasks: 
    - name: copy
      copy:
        src: /etc/hosts
        dest: /etc/hosts
        owner: root
        group: root
        mode: 0644
    - name: install
      yum:
        name: java-1.8.0-openjdk.x86_64,elasticsearch
        state: latest
    - name: es config
      template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: bin
        group: wheel
        mode: 0644
      notify:
        - re 
  handlers:
    - name: re
      service:
        name: elasticsearch
        state: started
        enabled: yes
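
Before running it, the playbook can be syntax-checked from the jump host; the --syntax-check flag only parses the playbook, and --check can be added for a dry run:

[root@ecs-mao1 ~]# ansible-playbook eess.yml --syntax-check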

The template module copies the file while rendering the variable placeholders inside it:

[root@ecs-mao1 ~]# cat elasticsearch.yml  | grep hostname
node.name: {{ ansible_hostname  }}

ansible_hostname is a fact gathered by the setup module; it holds each host's hostname:

[root@ecs-mao1 ~]# ansible eess -m setup | grep hostname
        "ansible_hostname": "eess-0001", 
        "ansible_hostname": "eess-0002",

3. Run the playbook

[root@ecs-mao1 ~]# ansible-playbook eess.yml

The output should report that the play completed successfully.

4. Verify the result (check that the ES cluster is running)

[root@eess-0001 ~]# curl  http://192.168.1.111:9200/_cluster/health?pretty   # any node works

"Status": "green" cluster status: normal green, yellow means there is a problem but not very serious, red indicates a serious failure
"number_of_nodes": 5, represents the number of nodes in the cluster

II. Install the HEAD and other plugins

All the plugin packages are in the FTP directory on the jump host.
A plugin can only be used on the machine where it is installed.
Install the plugins:

[root@ecs-mao1 ~]# cat es1.yml 
---
- hosts: eess   # target node group
  tasks: 
   - name: install head
     shell: /usr/share/elasticsearch/bin/plugin install ftp://192.168.1.252/public/elasticsearch-head-master.zip
   - name: install kopf
     shell: /usr/share/elasticsearch/bin/plugin install ftp://192.168.1.252/public/elasticsearch-kopf-master.zip
   - name: install bigdesk
     shell: /usr/share/elasticsearch/bin/plugin install ftp://192.168.1.252/public/bigdesk-master.zip
[root@ecs-mao1 ~]# ansible-playbook es1.yml   # run the playbook

Verify the installation:

[root@eess-0001 bin]# ./plugin  list    # list installed plugins
Installed plugins in /usr/share/elasticsearch/plugins:
    - head
    - kopf
    - bigdesk

Open the three plugin pages in a browser:

[root@eess-0001 bin]$ firefox http://192.168.1.55:9200/_plugin/head
[root@eess-0001 bin]$ firefox   http://192.168.1.55:9200/_plugin/kopf
[root@eess-0001 bin]$ firefox  http://192.168.1.55:9200/_plugin/bigdesk

Managing indices from the command line
1. Create an index

[root@eess-0001 ~]# curl -X PUT "http://192.168.1.55:9200/index" -d '
> { 
>     "settings":{
>     "index":{
>     "number_of_shards":5,
>     "number_of_replicas":1
>    }
>   }
> }'

[root@eess-0001 bin]$ firefox http://192.168.1.55:9200/_plugin/head   # check on the head page that the index now exists
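
The same check can be done from the command line by reading the index settings back (a quick sketch using the node address from above):

[root@eess-0001 ~]# curl -XGET "http://192.168.1.55:9200/index/_settings?pretty"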

2. Insert data

[root@eess-0001 ~]# curl -X PUT "http://192.168.1.111:9200/tedu/teacher/1" -d '{
> "职业":"诗人",
> "名字":"李白",
> "年代":"唐"
> }'

3. Update data (partial update via the _update endpoint)

[root@eess-0001 ~]# curl -X POST "http://192.168.1.111:9200/tedu/teacher/1/_update" -d '{
"doc":{
"年代": "唐代"
}
}'
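
To confirm the change took effect, fetch the document again and check the 年代 field:

[root@eess-0001 ~]# curl -XGET "http://192.168.1.111:9200/tedu/teacher/1?pretty"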

4. Query data (document id 3 was never created, so the query below returns "found" : false)

[root@eess-0001 ~]# curl -X GET "http://192.168.1.111:9200/tedu/teacher/3?pretty"
{
  "_index" : "tedu",
  "_type" : "teacher",
  "_id" : "3",
  "found" : false
}

5. Delete data

[root@eess-0001 ~]# curl -X DELETE "http://192.168.1.111:9200/tedu/teacher/3?pretty"
{
  "found" : false,
  "_index" : "tedu",
  "_type" : "teacher",
  "_id" : "3",
  "_version" : 1,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  }
}

6. Delete a document, an index, or everything (DELETE)
Delete a single document:
curl -XDELETE http://any-cluster-node:9200/index-name/type/id
curl -XDELETE http://192.168.1.111:9200/tedu/teacher/1

Delete an index:
curl -XDELETE http://any-cluster-node:9200/index-name
curl -XDELETE http://192.168.1.111:9200/tedu

Delete all indices:
curl -XDELETE http://any-cluster-node:9200/*
curl -XDELETE "http://192.168.1.111:9200/*"

Verification is the same in every case: check the head plugin page.

III. Import data (for testing only, can be skipped)

Bulk import uses the POST method; the data format is JSON, and the file containing the index data is sent with --data-binary. Requirements (see the format sketch after this list):
1. The POST method must be used.
2. The data must be JSON.
3. The data must be sent with --data-binary.
4. The _bulk endpoint is used to import the data.
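
For reference, a bulk file is made of alternating action lines and document lines, one JSON object per line; the sketch below is hypothetical content, not the actual logs.jsonl:

{"index":{"_index":"logs","_type":"log"}}
{"clientip":"10.0.0.1","request":"GET /index.html","response":200}
{"index":{"_index":"logs","_type":"log"}}
{"clientip":"10.0.0.2","request":"GET /about.html","response":404}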

[root@ecs-mao1 ~]# scp  /var/ftp/public/logs.jsonl.gz [email protected]:/root   # copy the import file to an ES node
[root@eess-0001 ~]# gzip -d logs.jsonl.gz 
[root@eess-0001 ~]# ls
logs.jsonl
[root@eess-0001 ~]# curl -X POST "http://192.168.1.111:9200/_bulk"  \
> --data-binary @logs.jsonl

Verify on the Elasticsearch head page that the data was imported.
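
The import can also be confirmed from the command line by listing the indices and their document counts:

[root@eess-0001 ~]# curl http://192.168.1.111:9200/_cat/indices?v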

IV. Install Kibana

Kibana is the data visualization platform.
1. Install:
yum install kibana

2. Modify the configuration file /etc/hosts:
192.168.1.51 ES-0001
192.168.1.52 ES-0002
192.168.1.53 ES-0003
192.168.1.54 ES-0004
192.168.1.55 ES-0005
192.168.1.56 kibana

3. Modify the configuration file /opt/kibana/config/kibana.yml:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.111:9200"   # address of an Elasticsearch node
kibana.index: ".kibana"
kibana.defaultAppId: "discover"
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.startupTimeout: 5000

4. Start the service:
systemctl start kibana
systemctl enable kibana

5. Verify that port 5601 is listening:
ss -ltun
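
A quick end-to-end check is to request the Kibana page itself (192.168.1.56 is the Kibana host from the /etc/hosts entries above):

curl -I http://192.168.1.56:5601/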

V. Install and configure Logstash

Logstash consumes more resources; choose a server with at least 2 CPUs and 4 GB of RAM.
The Logstash configuration file has to be written by hand.
Logstash working structure: input -> filter -> output.
For the required plugins, consult the official documentation:
https://www.elastic.co/guide/en/logstash/current/index.html

1. Install Logstash

[root@logstash ~]#  yum -y install java-1.8.0-openjdk
[root@logstash ~]# yum -y install logstash
[root@logstash ~]# touch /etc/logstash/logstash.conf     # create the configuration file
[root@logstash ~]#  /opt/logstash/bin/logstash  --version
logstash 2.3.4
[root@logstash ~]# /opt/logstash/bin/logstash-plugin  list   # list plugins
...
logstash-input-stdin    # standard input plugin
logstash-output-stdout    # standard output plugin

2. Write the configuration file

[root@logstash ~]# cat /etc/logstash/logstash.conf 
input{
  stdin{ codec => "json" }
  file {
    path => ["/tmp/apache.log"]          # path of the local log file to read
    sincedb_path => "/root/.sincedb"     # where to save the pointer file (records how far each file has been read)
    start_position => "beginning"        # read from the beginning when no pointer file exists yet
    type => "httplog"                    # tag these events with a type
  }
  beats{                                 # listen on port 5044 for log data and tags sent by filebeat on the clients
    port => 5044                         # this receives remote logs; the file plugin above reads local ones
  }
}
filter{
  if [type] == "hlog" {                  # only events tagged hlog (set by filebeat below) are processed here
    grok {                               # grok plugin: parses unstructured log data;
                                         # it uses regular expressions to turn unstructured data into named fields
      match => { "message" => "%{COMBINEDAPACHELOG}" }  # use the bundled pattern
      # bundled patterns: /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-pattern-core-2.0.5/patterns/grok-patterns
    }
  }
}
output{
  stdout{ codec => "rubydebug" }         # print events to the console in rubydebug format
  if [type] == "hlog"{                   # only events tagged hlog are sent to Elasticsearch
    elasticsearch {                      # Elasticsearch addresses and the index to write to
      hosts => ["es-0001:9200", "es-0002:9200", "es-0003:9200"]
      index => "web-%{+YYYY.MM.dd}"
      flush_size => 2000                 # flush to Elasticsearch every 2000 events
      idle_flush_time => 10              # or after 10 seconds of idling
    }
  }
}
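
Before starting Logstash for real, the file can be checked for syntax errors; in Logstash 2.x the --configtest flag only validates the configuration and does not start the pipeline:

[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf --configtest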
[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf 
# start logstash and test it
Settings: Default pipeline workers: 2
Pipeline main started
aa        # typed on stdin; logstash reads it from standard input and echoes it to standard output
2018-09-15T06:19:28.724Z logstash aa

3. Install filebeat on the web server

[root@ecs-ff35 ~]# yum -y install filebeat
[root@ecs-ff35 ~]# vim /etc/filebeat/filebeat.yml
 ..........
      paths:         # paths of the log files to ship
        - /var/log/httpd/access_log
...........
 # elasticsearch:
 #  hosts: ["localhost:9200"]
............
  logstash:          # address of the logstash server
    # The Logstash hosts
    hosts: ["192.168.1.117:5044"]
.........
document_type: hlog  # set the event type (tag) sent to logstash

[root@ecs-ff35 ~]# systemctl start filebeat
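
To confirm filebeat started correctly, check the service status and its recent log messages:

[root@ecs-ff35 ~]# systemctl status filebeat
[root@ecs-ff35 ~]# journalctl -u filebeat | tail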

Verification:
Start logstash on the logstash server, access the web service from another machine, check whether logstash displays the events, and check whether they are uploaded to the Elasticsearch cluster.

[root@ecs-b486 ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf 

Once the data arrives successfully, use Kibana to build charts (browse to the Kibana IP address on port 5601).
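
To exercise the whole pipeline, generate a few requests against the web server and then search the daily index on any ES node (both addresses below are illustrative placeholders):

curl -s http://<web-server-IP>/ > /dev/null
curl "http://192.168.1.111:9200/web-*/_search?pretty&size=1"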

VI. Get the real source IP

When load balancing on Huawei Cloud is used, the backend only sees the translated IP; to obtain the real client IP, refer to the documentation:
https://support.huaweicloud.com/test-usermanual-elb/zh-cn_topic_0172675020.html

HTTP (Apache) service:

Getting the real IP at layer 7
1. Configure the ELB listener to use the HTTP protocol.
2. Add the remoteip configuration on the backend Apache server:
[root@ecs-web ~]# cat /etc/httpd/conf.modules.d/00-remoteip.conf
LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 100.125.0.0/16

3. Comment out the duplicate line in 00-base.conf:
#LoadModule remoteip_module modules/mod_remoteip.so

4. Modify the configuration file httpd.conf:
LogFormat "%a %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
Because there is now an extra IP field in the log, the grok pattern on the logstash server has to be adjusted accordingly.
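
A minimal sketch of the adjusted match line, assuming the real client IP (%a) is the new first field of each log line; the field name real_ip is an arbitrary choice:

grok {
  match => { "message" => "%{IP:real_ip} %{COMBINEDAPACHELOG}" }
}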

5. Test that the configuration file syntax is correct, then restart the service:
apachectl -t
systemctl restart httpd
tail -f /var/log/httpd/access_log    # watch the log to confirm the real IP appears
############################################################

Nginx service:


Recompile nginx with the realip module added:
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module

Edit nginx.conf and add:
set_real_ip_from 100.125.0.0/16;
real_ip_header X-Forwarded-For;

Restart the service:
/usr/local/nginx/sbin/nginx -s reload

############################################################

Layer-4 load balancing

mkdir -p /lib/modules/$(uname -r)/extra/net/toa
cp toa.ko.xz /lib/modules/$(uname -r)/extra/net/toa/
depmod -a
lsmod
modinfo toa
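
A sketch of loading and verifying the module after depmod, assuming the module is named toa:

modprobe toa
lsmod | grep toa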

Restart the service:
/usr/local/nginx/sbin/nginx -s reload

Source: blog.csdn.net/f5500/article/details/105082572