ELK + Elasticsearch + head + Kibana: an enterprise internal log analysis system

ELK: log collection platform

ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana:

Concept map  

Component introduction

1、Elasticsearch:

Elasticsearch is an open-source distributed search service based on Lucene. In this stack it is used only to search and analyze logs.

Features: distributed, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, multiple data sources, and more. It provides a full-text search engine with distributed multi-user capability. Elasticsearch is developed in Java, released as open source under the Apache license, and is the second most popular enterprise search engine. Designed for the cloud, it achieves real-time search and is stable, reliable, fast, and easy to install and use.
In an Elasticsearch cluster, the data on all nodes is equal.

index (library) –> type (table) –> document/log (record)
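The mapping above can be illustrated with a hypothetical indexing request (the index name, type, and document below are made up for illustration):

```conf
PUT /nginx-log-2019.08/log/1    # index (library) / type (table) / document id (record)
{
  "message": "GET /index.html 200"
}
```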

2、Logstash:

Logstash is a fully open-source tool that can collect, filter, and analyze your logs and store them for later use (e.g., searching). Logstash ships with a web interface for searching and displaying all logs.  **It only collects and filters logs, and changes their format**

Simply put, Logstash is a pipe with real-time data-transfer capability, responsible for moving data from the pipe's input end to its output end. Along the way, the pipe also lets you insert filters to suit your needs, and Logstash provides many powerful filter plugins to cover all kinds of scenarios.

Logstash's event-processing pipeline (Logstash calls each piece of data in the stream an event) consists of three main stages: inputs –> filters –> outputs:

The entire Logstash workflow is divided into three stages: input, filter, and output, and each stage is backed by powerful plugins.
Inputs are required and are responsible for generating events (inputs generate events); commonly used input plugins are:

- file collects data from the file system
- syslog collects data from syslog
- redis collects logs from Redis
- beats collects logs from the Beats family (e.g., Filebeat)

Filters are responsible for data processing and transformation (filters modify events); commonly used filter plugins are:

- grok is the most commonly used log-parsing and structuring plugin in Logstash: grok combines multiple predefined regular expressions to match and split text and map the pieces to keys
- mutate supports event transformations such as rename, remove, replace, and modify
- drop discards an event entirely
- clone duplicates an event

Outputs are required and are responsible for shipping the data out (outputs ship them elsewhere); commonly used output plugins are:

- elasticsearch outputs data to Elasticsearch
- file outputs data as a plain file
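Putting the three stages together, a minimal pipeline config might look like this (a sketch for Logstash 6.x; the log path and index name are hypothetical, and the ES address is the node used later in this article):

```conf
input {
  file {
    path => "/var/log/nginx/access.log"   # collect from the file system
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # parse and structure each line
  }
  mutate {
    remove_field => ["beat"]   # example transformation: drop an unwanted field
  }
}
output {
  elasticsearch {
    hosts => ["192.168.246.234:9200"]
    index => "nginx-log-%{+YYYY.MM.dd}"
  }
}
```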

3、Kibana:

Kibana is a browser-based front-end display tool for Elasticsearch; it, too, is open source and free. Kibana provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

2. Environment introduction

| Software installed       |       Hostname        |   IP address    |  OS / memory   |
| ------------------------ | :-------------------: | :-------------: | :------------: |
| Elasticsearch/           |       mes-1-zk        | 192.168.246.234 | CentOS 7.4, 3 GB |
| zookeeper/kafka/Logstash |      es-2-zk-log      | 192.168.246.231 | CentOS 7.4, 2 GB |
| head/Kibana              | es-3-head-kib-zk-File | 192.168.246.235 | CentOS 7.4, 2 GB |

Disable the firewall and SELinux on all machines.

3. Version Description

Elasticsearch: 6.5.4  #https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.4.tar.gz
Logstash: 6.5.4  #https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
Kibana: 6.5.4  #https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz
Kafka: 2.11-2.1  #https://archive.apache.org/dist/kafka/2.1.0/kafka_2.11-2.1.0.tgz
Filebeat: 6.5.4
It is best to download the plugins matching the corresponding versions.

Related address:

Official website address: https://www.elastic.co

Official website construction: https://www.elastic.co/guide/index.html

 

 


4. Implementation and deployment

1. Elasticsearch deployment

System type: CentOS 7.4
Node IP: 172.16.246.234
Software versions: jdk-8u191-linux-x64.tar.gz, elasticsearch-6.5.4.tar.gz
Example node: 172.16.246.234

1. Install and configure jdk8, dependent packages

ES depends on JDK 8 to run ----- operate on all three machines; upload JDK 1.8 first

[root@mes-1 ~]# tar xzf jdk-8u191-linux-x64.tar.gz -C /usr/local/
[root@mes-1 ~]# cd /usr/local/
[root@mes-1 local]# mv jdk1.8.0_191/ java

Write the environment variables:
[root@mes-1 local]# vim /etc/profile

JAVA_HOME=/usr/local/java
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH

Reload and check the version:

[root@mes-1 ~]# source /etc/profile
[root@mes-1  local]# java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)

2. Install and configure ES ---- perform the steps below on the first machine only
(1) Create an ordinary user to run ES

[root@mes-1 ~]# useradd elsearch
[root@mes-1 ~]# echo "123456" | passwd --stdin elsearch   # set the password to 123456

(2) Install and configure ES

[root@mes-1 ~]# tar xzf elasticsearch-6.5.4.tar.gz -C /usr/local/
[root@mes-1 ~]# cd /usr/local/elasticsearch-6.5.4/config/
[root@mes-1 config]# ls
elasticsearch.yml  log4j2.properties  roles.yml  users_roles
jvm.options        role_mapping.yml   users
[root@mes-1 config]# cp elasticsearch.yml elasticsearch.yml.bak

[root@mes-1 config]# vim elasticsearch.yml   ---- add the following content; note that the last two lines must not be commented out, otherwise the head plugin machine will not be able to connect

cluster.name: elk
node.name: elk01
node.master: true
node.data: true
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.port: 9200
#discovery.zen.ping.unicast.hosts: ["192.168.246.234", "192.168.246.231","192.168.246.235"]
#discovery.zen.minimum_master_nodes: 2
#discovery.zen.ping_timeout: 150s
#discovery.zen.fd.ping_retries: 10
#client.transport.ping_timeout: 60s
http.cors.enabled: true
http.cors.allow-origin: "*"

Configuration item meaning:

cluster.name        Cluster name; all nodes are configured with the same cluster name.
node.name           Node name; configured differently on each node.
node.master         Whether the node is eligible to become the master node.
node.data           Whether the node is a data node. Data nodes hold and manage shards of the indices.
path.data           Data storage directory.
path.logs           Log storage directory.
bootstrap.memory_lock       Memory locking; whether to disable swapping.
bootstrap.system_call_filter    System call filter.
network.host        IP address the node binds to.
http.port           HTTP port.
discovery.zen.ping.unicast.hosts    Unicast list of the other Elasticsearch nodes for discovery.
discovery.zen.minimum_master_nodes  Minimum number of working master-eligible nodes in the cluster; the official recommendation is (N/2)+1, where N is the number of master-eligible nodes.
discovery.zen.ping_timeout      How long a node waits during discovery.
discovery.zen.fd.ping_retries   Number of discovery retries.
http.cors.enabled   Whether cross-origin REST requests are allowed; needed so the head plugin can access ES.
http.cors.allow-origin          Allowed origins.

(3) Set the JVM heap size
Method one:
Edit the file under /usr/local/elasticsearch-6.5.4/config
[root@mes-1 config]# vim jvm.options

-Xms1g    ---- change to -Xms2g
-Xmx1g    ---- change to -Xmx2g

Method two (use either one of the two).
The example below sets 4 GB; size the heap to the machine, and note the instructions that follow:

sed -i 's/-Xms1g/-Xms4g/' /usr/local/elasticsearch-6.5.4/config/jvm.options
sed -i 's/-Xmx1g/-Xmx4g/' /usr/local/elasticsearch-6.5.4/config/jvm.options
Note:
Make sure the minimum heap (Xms) equals the maximum heap (Xmx), so the program does not resize the heap at runtime.
Do not let the heap exceed 50% of system memory.
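To sanity-check the sed method without touching the real file, you can rehearse it on a scratch copy first (a sketch; the temp file stands in for jvm.options):

```shell
# Rehearse the heap-size edit on a scratch copy instead of the real jvm.options
tmp=$(mktemp)
printf -- '-Xms1g\n-Xmx1g\n' > "$tmp"    # the two stock heap lines
sed -i 's/-Xms1g/-Xms4g/' "$tmp"         # same substitutions as above
sed -i 's/-Xmx1g/-Xmx4g/' "$tmp"
result=$(cat "$tmp")
echo "$result"                           # both lines should now read 4g
rm -f "$tmp"
```

Once the output shows `-Xms4g` and `-Xmx4g` on matching lines, run the same sed commands against the real config.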

(4) Create the ES data and log storage directories
These directories were configured above, so they must be created:

[root@mes-1 ~]# mkdir -p /data/elasticsearch/data
[root@mes-1 ~]# mkdir -p /data/elasticsearch/logs

(5) Modify the installation directory and storage directory permissions

[root@mes-1 ~]# chown -R elsearch:elsearch /data/elasticsearch
[root@mes-1 ~]# chown -R elsearch:elsearch /usr/local/elasticsearch-6.5.4

3. System optimization
(1) Increase the maximum number of open files

Method one (permanent; either method works):

echo "* - nofile 65536" >> /etc/security/limits.conf

Method two, which also covers
(2) Increase the maximum number of processes
vim /etc/security/limits.conf ---- add the following content at the end of the file

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
For further parameter tuning you can use this block directly.

Explanation:
soft  xxx  : a warning threshold; it can be exceeded, but exceeding it produces a warning.
hard  xxx  : a strict limit; it must not be exceeded.
nofile : the limit on the number of files each process can open.
nproc  : the OS-level limit on the number of processes each user can create.
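After logging back in as the ES user, the limits the shell actually received can be verified quickly (a sketch; the values shown depend on what you configured):

```shell
# Show the limits the current shell actually received (run after re-login)
ulimit -Sn    # soft limit on open files (nofile)
ulimit -Hn    # hard limit on open files
```

If the numbers do not match limits.conf, the session predates the change; log out and back in.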

(3) Increase the maximum number of memory maps (and adjust the swap-usage policy)
Method one (takes effect permanently):
vim /etc/sysctl.conf ---- add the following

vm.max_map_count=262144
vm.swappiness=0

[root@mes-1 ~]# sysctl -p
Explanation: with vm.swappiness=0, swap space is used only when memory is insufficient.

Method two (temporary):

 sysctl -w vm.max_map_count=262144

Increase the memory space available to the user (temporary)

4. Start ES

Method one: switch to the user created earlier

[root@mes-1 ~]# su - elsearch
Last login: Sat Aug  3 19:48:59 CST 2019 on pts/0
[root@mes-1 ~]$ cd /usr/local/elasticsearch-6.5.4/
[root@mes-1 elasticsearch-6.5.4]$ ./bin/elasticsearch   # start in the foreground first to see whether it errors; give it a while
After terminating it:
[root@mes-1 elasticsearch-6.5.4]$ nohup ./bin/elasticsearch &   # start in the background (recommended)
[1] 11462
nohup: ignoring input and appending output to ‘nohup.out’

Method two:

su - elsearch -c "cd /usr/local/elasticsearch-6.5.4 && nohup bin/elasticsearch &"

Switch back to the root user to check

[root@mes-1 elasticsearch-6.5.4]$ tail -f nohup.out   # check whether it started

Test: browser access http://192.168.246.234:9200
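Besides the browser, you can check from the shell; the helper below only inspects the cluster-health JSON, so it can be exercised offline (the curl line uses this article's node address, and the function itself is a sketch):

```shell
# Return success if the cluster health JSON reports green or yellow
es_healthy() {
  echo "$1" | grep -qE '"status" *: *"(green|yellow)"'
}

# Live check (run on any machine that can reach the node):
#   health=$(curl -s http://192.168.246.234:9200/_cluster/health)
#   es_healthy "$health" && echo "cluster OK"

# Canned sample, so the helper can be tried without a running node:
sample='{"cluster_name":"elk","status":"green","number_of_nodes":1}'
es_healthy "$sample" && echo "cluster OK"
```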

If the following error is reported at startup:

memory locking requested for elasticsearch process but memory is not locked

then make sure elasticsearch.yml contains:
bootstrap.memory_lock: false
and /etc/sysctl.conf contains:
vm.swappiness=0

Error:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

This means the elasticsearch user's limit on open file descriptors is too low; at least 65536 are needed.

Solution:

As root, run vim /etc/security/limits.conf

and add at the end:
* soft nofile 65536
* hard nofile 65536
Restarted elasticsearch and it still has no effect?
You must log out and log back in as the account that starts elasticsearch before it takes effect; for example, if the account is elasticsearch, log out and log in again.
The * can also be replaced with the account that starts elasticsearch; * means all users, which is actually less appropriate.

Another problem you may hit at startup:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
This means the elasticsearch user's memory-map limit is too small; at least 262144 is needed. This one is simple and needs no restart; just run
# sysctl -w vm.max_map_count=262144
and you are done.
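A quick way to confirm the kernel setting took effect, sketched as a small helper (it reads /proc directly, so it works even without sysctl in PATH):

```shell
# at_least VALUE MIN -> succeeds when VALUE >= MIN
at_least() { [ "$1" -ge "$2" ]; }

current=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)
if at_least "$current" 262144; then
  echo "vm.max_map_count OK ($current)"
else
  echo "vm.max_map_count too low ($current); run: sysctl -w vm.max_map_count=262144"
fi
```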

1. Install the head plugin on the third machine

Prerequisite: the head plugin is implemented in Node.js, so you need to install Node.js first.
1.1 Install nodejs
Official nodejs download address: https://nodejs.org/

Download the Linux 64-bit build:

[root@es-3-head-kib ~]# wget https://nodejs.org/dist/v14.17.6/node-v14.17.6-linux-x64.tar.xz
[root@es-3-head-kib ~]# tar xf node-v14.17.6-linux-x64.tar.xz -C /usr/local/

vim /etc/profile and add the following configuration:

NODE_HOME=/usr/local/node-v14.17.6-linux-x64
JAVA_HOME=/usr/local/java
PATH=$NODE_HOME/bin:$JAVA_HOME/bin:$PATH
export NODE_HOME JAVA_HOME PATH

# Because ES is also installed on this machine, the environment variables are configured this way; the JDK configuration must not be deleted

[root@es-3-head-kib ~]# source /etc/profile   # reload
[root@es-3-head-kib ~]# node --version        # check the version
v14.17.6
[root@es-3-head-kib ~]# npm -v                # check the version
6.14.15

npm is the package-management tool installed along with Node.js; it solves many problems of Node.js code deployment.

1.2 install git

You need to use git to download the head plugin, and install git as follows:

[root@es-3-head-kib local]# yum install -y git
[root@es-3-head-kib local]# git --version
git version 1.8.3.1

1.3 Download and install the head plugin

[root@es-3-head-kib ~]# cd /usr/local/
[root@es-3-head-kib local]# git clone git://github.com/mobz/elasticsearch-head.git

Method one: point npm at the domestic Taobao mirror and download directly

[root@es-3-head-kib local]# cd elasticsearch-head/
Set the npm registry to Taobao's domestic mirror to make sure the download succeeds:
[root@es-3-head-kib elasticsearch-head]# npm install -g cnpm --registry=https://registry.npm.taobao.org
[root@es-3-head-kib elasticsearch-head]# npm install   # if it reports errors, ignore them

Method two: if your network is good, download directly

[root@es-3-head-kib elasticsearch-head]# npm install   # note: installing directly like this may fail; it only succeeds if your network is fine

Modify the addresses: if head and ES are not on the same machine, make the following two modifications; if they are on the same machine, no changes are needed.

[root@es-3-head-kib elasticsearch-head]# vim Gruntfile.js

[root@es-3-head-kib elasticsearch-head]# vim _site/app.js   # configure the IP and port used to connect to ES
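For reference, the two usual edits look like this (a sketch based on the stock head sources; your ES address will differ). In Gruntfile.js, let the connect server accept outside connections; in _site/app.js, point the default base URI at the ES node:

```javascript
// Gruntfile.js -- inside connect.server.options, add a hostname entry:
connect: {
    server: {
        options: {
            hostname: '*',        // added: listen on all interfaces
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}

// _site/app.js -- change the default connection address:
// before: ... || "http://localhost:9200";
// after:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.246.234:9200";
```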

 

1.4 Configure Elasticsearch to allow the head plugin to access it
On the first machine, make sure the last two lines configured above are enabled:

[root@es-3-head-kib ~]# vim /usr/local/elasticsearch-6.5.4/config/elasticsearch.yml

These 2 lines must be present at the very end of the config:
http.cors.enabled: true
http.cors.allow-origin: "*"

Then, restart elasticsearch

1.5 Test

Go to the head directory and execute npm run start

[root@es-3-head-kib ~]# cd /usr/local/elasticsearch-head/
[root@es elasticsearch-head]# nohup npm  run start &
netstat -lntp | grep 9100   # filter for the port

After startup succeeds, visit http://192.168.153.190:9100/ in a browser, enter http://192.168.153.190:9200/ in the connection box, and click the connection test; text on a green background indicates the configuration is OK.

 

Troubleshooting ideas

If the final `nohup npm run start &` does not actually start head and the port cannot be found with netstat, check the errors printed in nohup.out.


Origin blog.csdn.net/qq_50660509/article/details/129988561