ELKF (Elasticsearch + Logstash + Kibana + Filebeat): deploying Filebeat across multiple machines

ELKF Deployment Structure
(Diagram: Filebeat on each application machine → Logstash → Elasticsearch → Kibana)

1. Install Elasticsearch and configure

1.1 Unzip the installation package into the installation path

For example, under /usr/local/elk:

tar -zxvf elasticsearch-6.3.0-linux-x86_64.tar.gz -C /usr/local/elk/
# or extract directly into the current directory
tar -zxvf elasticsearch-6.3.0-linux-x86_64.tar.gz

1.2 Modify the configuration file

vi  /usr/local/elk/elasticsearch-6.3.0/config/elasticsearch.yml
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch/data
#
# Path to log files:
#
path.logs: /data/elasticsearch/logs
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0           ## server bind address (this machine)
#
# Set a custom port for HTTP:
#
http.port: 9200                 ## HTTP service port
#
# For more information, consult the network module documentation.

The directories /data/elasticsearch/data and /data/elasticsearch/logs must be created manually.
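For example:

mkdir -p /data/elasticsearch/data /data/elasticsearch/logs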

1.3 Modify the system restriction configuration file

1.3.1 Modify the sysctl.conf file

vim /etc/sysctl.conf
Append the following line at the end of the file:

vm.max_map_count = 655360

Reload the configuration:

sysctl -p /etc/sysctl.conf
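To confirm the kernel setting took effect:

sysctl vm.max_map_count
# expected output: vm.max_map_count = 655360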

1.3.2 Modify the limits.conf file

vim /etc/security/limits.conf

Add the following at the end of the file

#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

# End of file

* soft nofile 65536
* hard nofile 131072
* soft nproc  65536
* hard nproc  131072
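These limits apply to new login sessions. After logging in again, you can verify them with the shell builtins:

ulimit -n     # max open files; expect 65536 (soft limit)
ulimit -u     # max user processes; expect 65536 (soft limit)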

1.3.3 Setting User Resource Parameters

vim /etc/security/limits.d/20-nproc.conf

Add the following content:

es_user  soft  nproc   65536
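You can confirm this later, once the es_user account has been created in section 1.4:

su - es_user -c "ulimit -u"     # expect 65536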

1.4 Start Elasticsearch

The /data/elasticsearch/ directory below is the one configured in elasticsearch.yml above.

Note on startup: Elasticsearch refuses to run as root, so create a new user and switch to it before starting. Run the following commands:

# Create a new group and user, and grant ownership
 groupadd esgroup
 useradd -g esgroup es_user
 chown -R es_user:esgroup elasticsearch-6.3.0
 chown -R es_user:esgroup /data/elasticsearch/

# Switch to the new user and start
 su es_user
 cd elasticsearch-6.3.0/bin

# Two ways to start
/usr/local/elk/elasticsearch-6.3.0/bin/elasticsearch        # run in the foreground
/usr/local/elk/elasticsearch-6.3.0/bin/elasticsearch -d     # run as a background daemon

# Corresponding shutdown commands
ctrl+c                      # stops a foreground run

ps -ef | grep elastic       # find the background process
kill -9 [pid]               # pid is the process ID found above
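Once started, verify that Elasticsearch responds over HTTP (use the host and port you set in elasticsearch.yml):

curl http://localhost:9200
# a JSON reply containing the cluster name and version number means the node is up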

Troubleshooting: most common Elasticsearch startup failures (file descriptor limits, vm.max_map_count, running as root) trace back to the system limit settings in section 1.3.

2. Install Logstash and configure

2.1 Unzip the file

tar -zvxf /usr/local/elk/logstash-6.3.1.tar.gz -C /usr/local/elk/
# or
tar -zvxf /usr/local/elk/logstash-6.3.1.tar.gz

2.2 Create a logs directory

mkdir -p /usr/local/elk/logstash/logs

2.3 Create the flexcc-logstash.conf file

cd /usr/local/elk/logstash-6.3.1/config
# create the flexcc-logstash.conf file
touch flexcc-logstash.conf

Add the following content.

Note: localhost must be replaced with the address of the server where your Elasticsearch is installed, and the port (18200 here) must match the http.port set in elasticsearch.yml.

input {
    beats {
        port => 18401    # port that Filebeat ships to
    }
}

filter {
    if [fields][log-source] == "callin-server" {
        json {
            source => "message"
            #target => "doc"
            #remove_field => ["message"]
        }
    }

    if [fields][log-source] == "callout-server" {
        json {
            source => "message"
            #target => "doc"
            #remove_field => ["message"]
        }
    }

    if [fields][log-source] == "sysmanager-server" {
        json {
            source => "message"
            #target => "doc"
            #remove_field => ["message"]
        }
    }

    if [fields][log-source] == "report-server" {
        json {
            source => "message"
            #target => "doc"
            #remove_field => ["message"]
        }
    }
}

output {
    if [fields][log-source] == "callin-server" {
        elasticsearch {
            hosts => ["localhost:18200"]
            index => "rpc-callin-%{+YYYY.MM.dd}"
            user => "admin"
            password => "admin"
        }
    }

    if [fields][log-source] == "callout-server" {
        elasticsearch {
            hosts => ["localhost:18200"]
            index => "rpc-callout-%{+YYYY.MM.dd}"
            user => "admin"
            password => "admin"
        }
    }

    if [fields][log-source] == "sysmanager-server" {
        elasticsearch {
            hosts => ["localhost:18200"]
            index => "rpc-sys-%{+YYYY.MM.dd}"
            user => "admin"
            password => "admin"
        }
    }

    if [fields][log-source] == "report-server" {
        elasticsearch {
            hosts => ["localhost:18200"]
            index => "rpc-report-%{+YYYY.MM.dd}"
            user => "admin"
            password => "admin"
        }
    }
}
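Before starting, you can ask Logstash to validate the file, which is a quick sanity check (adjust the paths to your install):

/usr/local/elk/logstash-6.3.1/bin/logstash -f /usr/local/elk/logstash-6.3.1/config/flexcc-logstash.conf --config.test_and_exit
# prints "Configuration OK" when the config parses cleanly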

2.4 Create the flexcc-logstash.sh startup script

cd /usr/local/elk/logstash-6.3.1/bin
touch flexcc-logstash.sh

Add the following content, adjusting the paths to match your own installation:

nohup ./logstash -f /usr/local/elk/logstash-6.3.1/config/flexcc-logstash.conf > /usr/local/elk/logstash/logs/flexcc-logstash.log 2>&1 &

Then run the script:

sh flexcc-logstash.sh
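To confirm Logstash came up, watch its log file; with the default settings, a line like "Successfully started Logstash API endpoint {:port=>9600}" indicates a healthy start:

tail -f /usr/local/elk/logstash/logs/flexcc-logstash.log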

3. Install Kibana and configure

3.1 Unzip the file

tar -zxvf kibana-6.3.1-linux-x86_64.tar.gz

3.2 Modify configuration

vim /usr/local/elk/kibana-6.3.1-linux-x86_64/config/kibana.yml
server.port: 5601       ## service port
server.host: "0.0.0.0"  ## server bind address (this machine)

elasticsearch.url: "http://localhost:9200" ## Elasticsearch service address; must match your Elasticsearch deployment

3.3 Start Kibana

/usr/local/elk/kibana-6.3.1-linux-x86_64/bin/kibana       # run in the foreground
# or
nohup ./kibana-6.3.1-linux-x86_64/bin/kibana &            # run in the background

3.4 Stop Kibana

ctrl+c                                     # stops a foreground run
# or
netstat -tunlp | grep 5601                 # find the background process (5601 is Kibana's port)
kill -9 pid                                # pid is the process ID found above


3.5 Verify Kibana startup

http://localhost:5601/
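If the server has no desktop browser, a quick check from the shell works too (assuming the default port):

curl -I http://localhost:5601
# any HTTP response (e.g. 200 OK) means Kibana is listening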

4. Install Filebeat and configure

To collect logs from multiple machines, deploy Filebeat on each machine in the same way; just update the Logstash server address and port in each machine's filebeat.yml.

4.1 Unzip the file

tar -zxvf filebeat-6.3.1-linux-x86_64.tar.gz

4.2 Edit filebeat.yml

cd /usr/local/elk/filebeat-6.3.1-linux-x86_64

vim filebeat.yml

Modify the following content:

paths: the paths of the service log files to collect
hosts: ["localhost:18401"]: the server and port where Logstash listens (must match the beats port in flexcc-logstash.conf)

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/xiandai/callin/nohup.out
    - /home/xiandai/callout/nohup.out
    - /home/xiandai/sysmanager/nohup.out
    - /home/xiandai/report/nohup.out
    - /home/xiandai/knowledge/nohup.out
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
  
  # The regexp pattern that has to be matched; here each new log record starts
  # with a JSON object beginning {"level":
  multiline.pattern: '^\{\"level\":'
  # Lines that do NOT match the pattern are treated as continuations...
  multiline.negate: true
  # ...and are appended after the line that matched
  multiline.match: after




#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false


#================================ Outputs =====================================

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:18401"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
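Note that the Logstash filters and outputs above route on [fields][log-source], so each Filebeat input must attach that field; the single input shown above does not set it. A sketch of how this could look, splitting the paths into one input per service (service names taken from flexcc-logstash.conf):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/xiandai/callin/nohup.out
  fields:
    log-source: callin-server

- type: log
  enabled: true
  paths:
    - /home/xiandai/callout/nohup.out
  fields:
    log-source: callout-server

# ...repeat for sysmanager-server and report-server

Since fields_under_root defaults to false, the custom field is nested under "fields" in each event, which is exactly what the [fields][log-source] conditionals match.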

4.3 Testing

From the filebeat-6.3.1-linux-x86_64 directory, run:

./filebeat -e -c filebeat.yml -d "publish"

If you see a stream of event output, logs are being shipped to Logstash.
Once the test looks good, press Ctrl+C to stop.

4.4 Start Filebeat

nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
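To stop the background Filebeat process later, mirror the Logstash and Kibana shutdown steps:

ps -ef | grep filebeat    # find the background process
kill [pid]                # pid is the process ID found above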
