Unified logging in microservices - ELK

1. Introduction


ELK is the abbreviation of Elasticsearch, Logstash, and Kibana. To be precise, it is ELKB, that is, ELK + Filebeat, where Filebeat is a lightweight shipper for forwarding and centralizing log data. Elasticsearch is a real-time full-text search and analytics engine that collects, analyzes, and stores data, and is a scalable distributed system.

2. Process

Filebeat collects logs >>> Logstash processes and filters the data >>> Elasticsearch stores and indexes the data >>> Kibana views the logs (visualization)

Filebeat could ship directly to Elasticsearch, but that would consume a lot of es resources; es should focus on querying and serving data, so the processing is left to Logstash


3. Requirements

Check the version support of es: https://www.elastic.co/cn/support/matrix#matrix_jvm

1. Elasticsearch supports JDK 1.8 only up to version 7.17.x; the latest releases require at least JDK 17.
2. Since version 5.x, ElasticSearch cannot be started directly as the root user, for security reasons.
3. The minimum JDK version is 1.8.0_131.

Because this machine has java version "1.8.0_144" installed, I will not pick the latest 8.5.2 but a version at or below 7.17.x, to stay compatible with the JDK. The other components should use a matching version as well, so I choose 7.16.3 throughout.

I am using virtual machines here, running CentOS Linux release 7.9.2009 (Core) with 2 GB of memory; the VM IP address is 192.168.56.11 (don't get ideas: this is a VM started on my computer, and only hosts on the same network segment can reach it. If you are on a real server, do not expose internal IPs casually. The setup below is just an ELK test, so passwords are left out for now and added later; it is very simple).
I have two virtual machines, 10 and 11. This is a simple setup with no multi-node distribution (with a large volume of log data and complex processing you would build a multi-node Logstash to process the data, and es can also be made distributed to reduce the pressure on individual nodes); this is only for testing.

Server and its environment:
11: Elasticsearch, Kibana, Logstash, Filebeat
10: Filebeat

4. Download address

Elasticsearch:https://www.elastic.co/cn/downloads/elasticsearch

Logstash: https://www.elastic.co/cn/downloads/logstash

Kibana: https://www.elastic.co/cn/downloads/kibana

Filebeat: https://www.elastic.co/cn/downloads/beats/filebeat

You can download on a computer and then upload to the server, or download on the server with the wget command; wget is used here. The server directory chosen is /usr/local/elk. Since this is a test environment I will turn off the firewall; in production, keep it on and open the required ports instead.

turn off firewall

systemctl stop firewalld
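
If you would rather keep the firewall on (recommended outside a quick test), open only the ports this article uses instead; a sketch, assuming the default ports (9200 es HTTP, 5601 Kibana, 5044 Beats input to Logstash):

firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --permanent --add-port=5044/tcp
firewall-cmd --reload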

2. Install Elasticsearch

1. Create a file storage directory

mkdir /usr/local/elk

2. Enter the directory

cd /usr/local/elk/

3. Download

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.3-linux-x86_64.tar.gz

4. Unzip

(The tarball can be deleted after extracting.)

tar zxvf elasticsearch-7.16.3-linux-x86_64.tar.gz

Enter the directory

cd elasticsearch-7.16.3

Create the elk user

adduser elk

Grant file permissions to users

chown -R elk:elk /usr/local/elk/elasticsearch-7.16.3

Create the data and log storage directories (referenced later in elasticsearch.yml)

mkdir -p /home/elk/data /home/elk/logs
chown -R elk:elk /home/elk

5. Modify the configuration

5.1. Introduction

Elasticsearch has three configuration files, all located in the config directory:

  • elasticsearch.yml — used to configure Elasticsearch
  • jvm.options — used to configure the Elasticsearch JVM settings
  • log4j2.properties — used to configure Elasticsearch logging

es imposes requirements on system limits: it needs permission to create 4096 threads and 65536 open files (these are required limits, not resources it actually creates up front).

By default, Linux grants an ordinary user:

  • a thread limit of 1024
  • an open-file limit of 65535

so these need to be raised.

5.2. System configuration

Modify file/thread limit

vi /etc/security/limits.conf

Add the following content at the end of the file.
Some people will hit the error
max number of threads [xxx] for user [xxx] is too low, increase to at least [4096]
The cause is that this configuration is missing, or the Linux machine has less than 1.5 GB of memory.

*		soft	nofile		65536
*		hard	nofile		65536

*		soft	nproc		4096
root		soft	nproc		unlimited

# End of file

Linux also limits how many memory-mapped areas a process may use by default, and the constraint must be raised manually.
es requires vm.max_map_count to be at least 262144 (it is a count of memory map areas, not bytes); here we set a much larger 6553600.

vi /etc/sysctl.conf

Add the following at the end of the file

vm.max_map_count=6553600

load configuration

sysctl -p


nofile corresponds to open files and nproc to max user processes. Because these limits are per user, query them as the newly created user; modifications, however, must be made as root (the created user has no permission to modify them and is only used to start es).
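
A quick sanity check that the new limits took effect (the values should match what was configured above):

su - elk -c 'ulimit -n'   # open files -> 65536
su - elk -c 'ulimit -u'   # max user processes -> 4096
sysctl vm.max_map_count   # -> 6553600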

5.3 Modify es configuration

Modify the elasticsearch.yml file

vi config/elasticsearch.yml

Because the relevant lines in the configuration file are all commented out, uncomment them here, or add them if they are missing

#Cluster name
cluster.name: my-cluster
#Node name
node.name: node-1
#Whether the node is eligible to be master
#node.master: true
#Whether the node stores data
#node.data: true
#Maximum number of local storage nodes
#node.max_local_storage_nodes: 2

#Data/log storage paths (the directories created earlier)
path.data: /home/elk/data
path.logs: /home/elk/logs
#IPs allowed to access; define this according to your needs
network.host: 0.0.0.0
#HTTP port
http.port: 9200
#Here I put my own VM's ip
discovery.seed_hosts: ["192.168.56.11"]
cluster.initial_master_nodes: ["node-1"]
#Node type: single node
#discovery.type: single-node

#If enabling X-Pack to use the password feature, uncomment xpack.security.enabled: true
#When es is a single node, uncomment discovery.type and comment out cluster.initial_master_nodes
#Otherwise there is no need to touch these

#Enable X-Pack, turning on account/password authentication
#xpack.security.enabled: true

#Enable encryption for HTTP API client connections, e.g. Kibana, Logstash, and Agents
#xpack.security.http.ssl:
#  enabled: false
#  keystore.path: certs/http.p12

#Enable encryption and mutual authentication between cluster nodes
#xpack.security.transport.ssl:
#  enabled: false

#Disable geoip database updates
ingest.geoip.downloader.enabled: false

#Set to false to disable the X-Pack machine learning features
xpack.ml.enabled: false

Modify the jvm.options file

vi config/jvm.options

The heap settings are around line 31; adjust them according to your own needs and machine configuration

-Xms1g
-Xmx1g

5.4 start, test

switch user

su elk

start es

bin/elasticsearch

test

curl 127.0.0.1:9200

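If es is up, the response looks roughly like the following (abbreviated; name and cluster_name come from elasticsearch.yml):

{
  "name" : "node-1",
  "cluster_name" : "my-cluster",
  "version" : { "number" : "7.16.3", ... },
  "tagline" : "You Know, for Search"
}
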

Because the firewall is turned off (or the port is opened), es can also be reached externally at ip:port


3. Install Kibana

1. Enter the directory

Because es is running in the foreground from the previous step, open a new terminal session here

cd /usr/local/elk

2. Download

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.16.3-linux-x86_64.tar.gz

3. Unzip

tar zxvf kibana-7.16.3-linux-x86_64.tar.gz

4. Modify the configuration

4.1 Introduction

Kibana is a tool for searching, viewing, and interacting with the data in Elasticsearch indices. It makes it easy to analyze and present the data in a variety of ways with charts, tables, and more

4.2 Modify kibana configuration

Enter the directory

cd kibana-7.16.3-linux-x86_64

Modify the configuration

vi config/kibana.yml

The contents of the file are all commented out; change it to the following

#Port
server.port: 5601
#IPs allowed to access; a fixed address can be specified here
server.host: "0.0.0.0"
#External access address; without this the page shows a warning
server.publicBaseUrl: "http://192.168.56.11:5601"
#Service name
server.name: "kibana-demo"
#The address of the es instance set up earlier
elasticsearch.hosts: ["http://192.168.56.11:9200"]
#Kibana stores saved searches, visualizations, and dashboards in an es index; if it does not exist, Kibana creates a new one
kibana.index: ".kibana"
#Uncommenting the line below switches the UI to Chinese, apart from some settings pages
#i18n.locale: zh-CN
#Show the login page
xpack.monitoring.ui.container.elasticsearch.enabled: true

4.3 start, test

Grant the user permissions on the files, because Kibana cannot run under the root user

chown -R elk:elk /usr/local/elk/kibana-7.16.3-linux-x86_64

switch user

su elk

Start it; we are currently in the /usr/local/elk/kibana-7.16.3-linux-x86_64 directory

bin/kibana
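
Side note: since each foreground process occupies a terminal, you can background them instead. es has a daemon flag; Kibana does not, but nohup works (the log path here is just an example):

bin/elasticsearch -d -p es.pid                      # from the elasticsearch directory, as the elk user
nohup bin/kibana > /tmp/kibana-stdout.log 2>&1 &    # from the kibana directory, as the elk user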

Test: enter the address in the browser and explore. If you did not use the i18n.locale: zh-CN setting above to enable Chinese, the browser's translation feature can help; either way, just click around from here.

4. Install Logstash

1. Enter the directory

Open another terminal session here

cd /usr/local/elk

2. Download

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.16.3-linux-x86_64.tar.gz

3. Unzip

tar zxvf logstash-7.16.3-linux-x86_64.tar.gz

4. Modify the configuration

4.1 Introduction

Logstash is a powerful data processing tool that supports data transmission, format processing, formatted output, data filtering, and more, and is often used for log processing

4.2 Modify logstash configuration

Enter the directory

cd logstash-7.16.3

Copy the logstash-sample.conf template and modify the configuration file from it

cp config/logstash-sample.conf config/logstash-demo.conf

Modify the logstash-demo.conf file

vi config/logstash-demo.conf

The content is as follows

# Port for beats input; default 5044
input {
  beats {
    port => 5044
  }
}

# How the logs are output
output {
  # Classify logs by tag; the tags are defined later in filebeat to tell the logs apart
  if "demo1-log" in [tags] {
    elasticsearch {
      #Destination address
      hosts => ["http://192.168.56.11:9200"]
      #Index
      index => "[demo1-log]-%{+YYYY.MM.dd}"
      #es account/password, set as needed
      #user => "elastic"
      #password => "a123456"
    }
  }
  if "demo2-log" in [tags] {
    elasticsearch {
      #Destination address
      hosts => ["http://192.168.56.11:9200"]
      index => "[demo2-log]-%{+YYYY.MM.dd}"
    }
  }
}
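
Before starting, Logstash can validate the pipeline syntax and exit, which saves a slow failed startup:

bin/logstash -f config/logstash-demo.conf --config.test_and_exit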

Modify the logstash.yml file.
By default Logstash uses an in-memory queue to buffer events between pipeline stages, so if an unexpected termination occurs the in-memory events are lost. To prevent data loss, enable queue.type: persisted so that Logstash persists in-flight events to disk. The path.queue setting below can be left unset, since it defaults to the data directory.

queue.type: persisted

#None of the following need to be set; the defaults are fine
path.queue: /usr/local/elk/logstash-7.16.3/data    #Queue storage path; takes effect when queue.type is persisted
queue.page_capacity: 200mb        #Size of a single queue page when the queue is persisted; configure to actual needs
queue.max_bytes: 1000mb           #Maximum queue capacity
queue.max_events: 0               #Maximum number of unread events in the queue when the persisted queue is enabled; 0 means unlimited
queue.checkpoint.acks: 1024       #Maximum number of ACKed events before forcing a checkpoint when the persisted queue is enabled; 0 means unlimited
queue.checkpoint.writes: 1024     #Maximum number of written events before forcing a checkpoint when the persisted queue is enabled; 0 means unlimited
queue.checkpoint.interval: 1000   #Interval at which a checkpoint is forced on the head page when the persisted queue is enabled

4.3 start

start up

bin/logstash -f config/logstash-demo.conf 

When the following line appears in the console, the connection is up and Logstash is listening:
Starting server on port: 5044

5. Install Filebeat

1. Enter the directory

Open a new terminal session

cd /usr/local/elk

2. Download

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.16.3-linux-x86_64.tar.gz

3. Unzip

tar zxvf filebeat-7.16.3-linux-x86_64.tar.gz

4. Modify the configuration

4.1 Introduction

Filebeat is a lightweight log file collection tool. Filebeat monitors log directories or specified log files and forwards the entries to es, Logstash, Redis, etc. for storage.

4.2 Modify filebeat configuration

Enter the directory

cd filebeat-7.16.3-linux-x86_64

On server 11, modify the configuration (YAML, indented with two spaces). We are collecting Java logs here; the multiline block merges the lines of one Java log entry (e.g. a stack trace) into a single event, by treating every line that does not start with a date as a continuation of the previous one.

vi filebeat.yml

# Read logs from log files
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/jar/log/*.log
  tags: ["demo1-log"]
  #Exclude empty lines
  #exclude_lines: ['^$']
  multiline:
    type: pattern
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
setup.template.settings:
  #Number of primary shards
  index.number_of_shards: 1
  #The test environment has only one es node, so set replica shards to 0, otherwise cluster health goes yellow
  index.number_of_replicas: 0
#The Elasticsearch output below is not commented out by default and needs to be commented out
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
#Output to logstash; this is commented out by default and the comment must be removed
output.logstash:
  #ip and port of the server where logstash runs
  hosts: ["192.168.56.11:5044"]

On server 10, modify the configuration in the same way as on server 11; only the tags inside differ

# Read logs from log files
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/jar/log/*.log
  tags: ["demo2-log"]
  #Exclude empty lines
  #exclude_lines: ['^$']
  multiline:
    type: pattern
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
setup.template.settings:
  #Number of primary shards
  index.number_of_shards: 1
  #The test environment has only one es node, so set replica shards to 0, otherwise cluster health goes yellow
  index.number_of_replicas: 0
#The Elasticsearch output below is not commented out by default and needs to be commented out
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
#Output to logstash; this is commented out by default and the comment must be removed
output.logstash:
  #ip and port of the server where logstash runs
  hosts: ["192.168.56.11:5044"]

4.3 start

Start on server 11 first, for testing and inspection

 ./filebeat -e -c filebeat.yml 

Because the directory for collecting logs is /opt/jar/log/*.log, create the directory and upload the jar package:

mkdir -p /opt/jar/log

I upload the jar to the /opt/jar directory and store the logs in the /opt/jar/log directory.
My code is a simple test:

application.yml:

server:
  port: 9001
spring:
  application:
    name: demo1

The controller:

package com.ly.demo1.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class demo1 {

    private static final Logger LOG = LoggerFactory.getLogger(demo1.class);

    @GetMapping("/info")
    public String info() {
        LOG.info("[demo1 log]" + "info");
        return "info";
    }

    @GetMapping("/err")
    public String error() {
        LOG.error("[demo1 error]" + "error");
        int i = 1 / 0;    // deliberately throws ArithmeticException to produce an error stack trace
        return "error";
    }

}
Start the jar from /opt/jar, writing output to log/a.log:

nohup java -jar demo1.jar > log/a.log 2>&1 &
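
To generate some log lines for Filebeat to pick up (assuming the demo app above is running on port 9001):

curl http://127.0.0.1:9001/info
curl http://127.0.0.1:9001/err    # triggers the divide-by-zero stack trace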

After starting the jar package, open Kibana on port 5601 and check the index list: an index has appeared that did not exist before the jar package was started.
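
You can also confirm from the command line that the index was created (its name comes from the index setting in logstash-demo.conf):

curl 'http://192.168.56.11:9200/_cat/indices?v'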
Logs cannot be searched directly through the index; you first need to create an index pattern whose name matches the index, and select @timestamp as the time field.
Access 192.168.56.11:9001/info to view the log. You can filter by the fields on the left and then see the output log, and the error report can be viewed the same way.

6. Note that the 'host.ip' field inside each entry shows which server sent the log

Now also start the jar and filebeat services on virtual machine 10.
That completes the basic setup.

7. Set password

1. es password operation

Add an account and password to es, because sometimes you do not want others to be able to access it freely. The settings were already made in es section 5.3 (uncomment xpack.security.enabled: true). Now start es (as the elk user we created) and you will find that an account and password are required (we do not know the password yet, heh); do not worry about that. Open a new window, enter the es directory (as the root user), and run the password setup.
You can either have random passwords generated and printed to the console, or define custom passwords (the setup cannot be run again once it has succeeded, but the passwords can be changed later).

randomly generated

./bin/elasticsearch-setup-passwords auto

Custom (use at least 6 characters and avoid purely numeric passwords, otherwise there will be errors later).
I generate custom passwords as the root user, initially setting the password to a12345 (you are prompted to set and confirm a password for each of the six built-in users: elastic, apm_system, kibana_system, logstash_system, beats_system, and remote_monitoring_user).

./bin/elasticsearch-setup-passwords interactive
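
If you ever need to change a password afterwards, one option is the security API; a sketch, substituting your own host, user, and new password:

curl -u elastic -X POST 'http://192.168.56.11:9200/_security/user/elastic/_password' -H 'Content-Type: application/json' -d '{"password":"newpassword"}'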

Then enter the account and password on the web page to log in.
The account is elastic and the password is a12345.

2. kibana password operation

In the kibana.yml file, remove the comment from the elasticsearch.username: "kibana_system" line. If you were careful, you will have noticed that this user's password was already set just now; that's right.

elasticsearch.username: "kibana_system"

Then create a keystore (in the kibana directory, as the elk user)

./bin/kibana-keystore create

Add the kibana_system user's password to the Kibana keystore; when prompted, enter the kibana_system user's password.

./bin/kibana-keystore add elasticsearch.password
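
You can confirm the entry was stored:

./bin/kibana-keystore list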

Then start Kibana and test (started as the elk user we created): log in with the account elastic and the password a12345.

3. logstash configuration

Go back to the Logstash configuration in section 4.2 and modify the file we copied into the config folder, adding the account and password:

      user => "elastic"
      password => "a12345"

Then log collection proceeds normally again.
(Passwords must be set for es, Kibana, and Logstash; Filebeat only collects logs locally and sends them to Logstash, so it needs no password.)
