ELKBR (ELK + Beats + Redis) deployment and test log

ELK

  • filebeat: a log shipper. Compared with logstash, filebeat is lighter and uses fewer resources on the client hosts.
  • redis (message queue): Redis is usually deployed as a NoSQL database server, but here it serves only as a message queue. For massive log volumes, Kafka is recommended instead.
  • logstash: a tool for collecting, parsing, and filtering logs that supports high-volume ingestion. It generally works in a client/server architecture: the client side is installed on the hosts whose logs need to be collected, and the server side filters and transforms the logs received from each node before forwarding them concurrently to elasticsearch.
  • elasticsearch: an open-source distributed search engine that collects, analyzes, and stores data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
  • kibana: provides a friendly web interface for analyzing the logs that logstash and elasticsearch provide, helping to summarize, analyze, and search important log data.
| Host          | Configuration  | Role                  | Software version / download                        |
| ------------- | -------------- | --------------------- | -------------------------------------------------- |
| filebeat      | none           | client log collection | https://www.elastic.co/cn/downloads/beats/filebeat |
| redis         | 2 vCPU, 8 GiB  | message queue         | redis-stable.tar.gz                                |
| logstash      | 4 vCPU, 16 GiB | log collection        | https://www.elastic.co/cn/downloads/logstash       |
| elasticsearch | 4 vCPU, 16 GiB | log search engine     | https://www.elastic.co/cn/downloads/elasticsearch  |
| kibana        | 2 vCPU, 4 GiB  | log data display      | https://www.elastic.co/cn/downloads/kibana         |

Activating the ES Platinum edition: https://www.jianshu.com/p/1ff67bb363dd

1 redis deployment

  • Upload the deployment package to /usr/local/src/redis/, then install the build dependencies and configure the environment

  • mkdir -p /usr/local/src/redis/
    yum -y install gcc-c++ make tcl
    # stop the firewall and disable it at boot
    service iptables stop && chkconfig iptables off
    # raise the TCP listen backlog and disable transparent huge pages (redis warns otherwise)
    echo 511 > /proc/sys/net/core/somaxconn
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    # persist the two settings across reboots
    echo "echo 511 > /proc/sys/net/core/somaxconn" >> /etc/rc.local
    echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
    # allow background saves even under memory pressure
    echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf && modprobe bridge && sysctl -p
  • Compile and install the redis server

  • mkdir -p /usr/local/src/redis/{data,log}
    cd /usr/local/src/redis && tar -zxvf redis-stable.tar.gz -C . && cd redis-stable && make && make install
    # press Enter at every prompt to accept the defaults; the config file ends up at /etc/redis/6379.conf
    ./utils/install_server.sh
  • Start redis

  • chkconfig redis_6379 on
    service redis_6379 start
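
As an immediate liveness probe, the server should answer PONG:

    redis-cli -h <local IP> ping
    # PONG
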
  • Check that the service works after startup

  • redis-cli -h <local IP>

    localhost:6379> keys *
    (empty list or set)
    localhost:6379> set test abc
    OK
    localhost:6379> get test
    "abc"
    localhost:6379> del test
    (integer) 1

    localhost:6379> quit

2 filebeat

  • Upload filebeat, extract it, and modify the configuration

  • mkdir /usr/local/src/filebeat
    # upload the tarball
    tar -zxvf filebeat-7.3.1-linux-x86_64.tar.gz
    cd filebeat-7.3.1-linux-x86_64
    vim filebeat.yml

(paths: set to the location of the logs to collect)

    enabled: true
    paths:
      - /tomcat/apache-tomcat-7.0.72/logs/*.out
    multiline.pattern: '^\['
    multiline.negate: false
    multiline.match: after

(tags: the TEMPLATE_TAG variable is replaced in bulk at deployment time with the corresponding server instance names)

tags: ["192.168.192.10"]

Comment out the Elasticsearch-related output configuration.

----------------------------- Redis output --------------------------------

(key: the redis key under which the logs are queued; it is replaced in bulk at deployment time with the corresponding application service name)

    output.redis:
      hosts: ["localhost:6379"]
      db: 0
      timeout: 5
      key: "project-name"

================================ Global =====================================

    filebeat.global:
    filebeat.spool_size: 64
    filebeat.idle_timeout: 5s
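
Before starting, filebeat can validate its own configuration; a quick check using the stock 7.x CLI:

    # parse filebeat.yml and report syntax or setting errors
    ./filebeat test config -c filebeat.yml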

  • Start up

  • ./filebeat -e -c filebeat.yml
  • Check whether the logs have arrived in redis

  • redis-cli -h <local IP>
    # list all keys
    keys *
    # check the length of the log list
    LLEN project-name
    # view a single log entry
    LINDEX project-name 1

3 ElasticSearch

  • Download and extract

  • mkdir /usr/local/src/elasticsearch
    # upload the tarball
    tar -zxvf elasticsearch-7.3.1-linux-x86_64.tar.gz
  • Create the data and log directories under the es folder, then modify the configuration file

  • # data and log directories referenced in the configuration below
    mkdir /usr/local/src/elasticsearch/elasticsearch-7.3.1/data
    #mkdir /usr/local/src/elasticsearch/elasticsearch-7.3.1/logs
    vim config/elasticsearch.yml

    cluster.name: my-application
    node.name: node-10
    path.data: /usr/local/src/elasticsearch/elasticsearch-7.3.1/data
    path.logs: /usr/local/src/elasticsearch/elasticsearch-7.3.1/logs
    network.host: <local IP>
    http.port: 9200
    discovery.seed_hosts: ["<local IP>"]
    cluster.initial_master_nodes: ["node-10"]

    # Because CentOS 6 does not support SecComp, add the following two settings:
    bootstrap.memory_lock: false
    bootstrap.system_call_filter: false

    # Enable authentication (omit these if you do not want passwords):
    xpack.security.enabled: true
    xpack.ml.enabled: true
    xpack.license.self_generated.type: trial

  • Create the elkuser user (be careful not to start es as the root user)

  • # create the user
    useradd elkuser
    # change the owner and group
    cd /usr/local/src/elasticsearch
    chown -R elkuser:elkuser elasticsearch-7.3.1
    # switch to the user
    su elkuser
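
Before starting, it can save a restart to pre-check the limits that most often block es (see the startup problems listed further below):

    # run as elkuser
    ulimit -n                # open files, should be >= 65536
    ulimit -u                # max user processes, should be >= 4096
    sysctl vm.max_map_count  # should be >= 262144
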
  • Start up

  • ./bin/elasticsearch
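
This runs in the foreground; to daemonize instead, the stock -d flag can be used (a side note, not part of the original steps):

    # run in the background and write the pid to a file
    ./bin/elasticsearch -d -p pid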
  • Set the built-in user passwords after startup

  • ./bin/elasticsearch-setup-passwords interactive --verbose
  • Trying user password change call http://192.168.192.10:9200/_security/user/apm_system/_password?pretty
    { }

    Changed password for user [apm_system]

    Trying user password change call http://192.168.192.10:9200/_security/user/kibana/_password?pretty
    { }

    Changed password for user [kibana]

    Trying user password change call http://192.168.192.10:9200/_security/user/logstash_system/_password?pretty
    { }

    Changed password for user [logstash_system]

    Trying user password change call http://192.168.192.10:9200/_security/user/beats_system/_password?pretty
    { }

    Changed password for user [beats_system]

    Trying user password change call http://192.168.192.10:9200/_security/user/remote_monitoring_user/_password?pretty
    { }

    Changed password for user [remote_monitoring_user]

    Trying user password change call http://192.168.192.10:9200/_security/user/elastic/_password?pretty
    { }

    Changed password for user [elastic]
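
To confirm that es is reachable and the new credentials work, a quick check with the standard cluster-health and authenticate APIs (substitute the elastic password set above):

    curl -u elastic:<password> "http://192.168.192.10:9200/_cluster/health?pretty"
    curl -u elastic:<password> "http://192.168.192.10:9200/_security/_authenticate?pretty"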

ElasticSearch startup problems:

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
Switch to root and add the following to /etc/security/limits.conf (log in again for it to take effect):
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

[2]: max number of threads [3818] for user [es] is too low, increase to at least [4096]
The maximum number of threads is too low. Add the following to /etc/security/limits.conf:
* soft nproc 4096
* hard nproc 4096

[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Add vm.max_map_count = 262144 to /etc/sysctl.conf, then run sysctl -p to apply it:
vi /etc/sysctl.conf
sysctl -p

Cause: CentOS 6 does not support SecComp, while ES (5.2.1 and later) defaults bootstrap.system_call_filter to true; the check fails, and ES cannot start.
Solution: set bootstrap.system_call_filter to false in elasticsearch.yml, noting that it belongs below the Memory settings:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

4 Logstash

  • Download and extract

  • mkdir /usr/local/src/logstash
    # upload the tarball
    tar -zxvf logstash-7.3.1.tar.gz
  • Create a configuration file

  • vim logstash.conf

    input {
      redis {
        host => "<redis host IP>"
        data_type => "list"
        port => "6379"
        key => "project-name"
        type => "project-name"
      }
    }

    output {
      if [type] == "project-name" {
        elasticsearch {
          hosts => "<es host IP>:9200"
          user => "elastic"
          password => "<the elastic password set in the es step>"
          codec => "json"
          index => "project-name-%{+YYYY.MM.dd}"
        }
      }
    }

  • Start up

  • bin/logstash -f logstash.conf
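
Logstash can also dry-run the pipeline definition first; a quick check using the stock flag:

    # parse and validate logstash.conf without starting the pipeline
    bin/logstash -f logstash.conf --config.test_and_exit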

5 kibana

  • Download and extract

  • mkdir /usr/local/src/kibana
    # upload the tarball
    tar -zxvf kibana-7.3.1-linux-x86_64.tar.gz

    cd kibana-7.3.1-linux-x86_64
    mkdir logs
    # enter the kibana config directory
    cd config
    # edit the configuration file
    vim kibana.yml

server.port: 5601
server.host: "<local IP>"
elasticsearch.hosts: ["http://<es server IP>:9200"]
kibana.index: ".kibana"
elasticsearch.username: "kibana"
elasticsearch.password: "<the kibana password set in the es step>"
pid.file: /var/run/kibana.pid
logging.dest: /usr/local/src/kibana/kibana-7.3.1-linux-x86_64/logs/kibana.log

  • Start (running kibana as the root user is not recommended; if you do start it as root, add --allow-root)

  • bin/kibana
    #bin/kibana --allow-root
    (watch the log file configured above)
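
Once it is up, kibana's standard status endpoint is a quick health check (host and port taken from kibana.yml above):

    # reports kibana's overall status and plugin health
    curl "http://<local IP>:5601/api/status"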
  • Access kibana on port 5601 (log in with the es user). (The steps below have not been updated; use them for reference only.)

  • Create a log index

  • Check the amount of log information
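
Kibana's Discover page shows the log volume; as a command-line cross-check, the standard _count API can be queried against the daily indices created by logstash above (the elastic password is the one set in the es step):

    # count documents across the project's daily indices
    curl -u elastic:<password> "http://192.168.192.10:9200/project-name-*/_count?pretty"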

Origin: www.cnblogs.com/ttzzyy/p/11529124.html