[ELK] ELK Deployment Pitfall Notes

1. Don't run ELK as root; grant the elasticsearch user ownership of every directory ELK uses

groupadd elasticsearch
useradd elasticsearch -g elasticsearch
chown -R elasticsearch:elasticsearch ELK_FOLDER    # ELK_FOLDER = the ELK installation directory

2. Raise the memory-lock and open-file limits

echo "elasticsearch soft memlock unlimited" >> /etc/security/limits.conf
echo "elasticsearch hard memlock unlimited" >> /etc/security/limits.conf
echo "elasticsearch soft nofile 1000000" >> /etc/security/limits.conf
echo "elasticsearch hard nofile 1000000" >> /etc/security/limits.conf

3. ELK startup scripts

# Elasticsearch
ELK_FOLDER={%elk_folder%}
JDK_FOLDER={%jdk_folder%}
ulimit -l unlimited                    # allow the JVM to lock memory (bootstrap.memory_lock)
sysctl -w vm.max_map_count=262144      # minimum mmap count Elasticsearch requires
su - elasticsearch -c "export JAVA_HOME=$JDK_FOLDER; $ELK_FOLDER/elasticsearch/bin/elasticsearch -d"

# Logstash
ELK_FOLDER={%elk_folder%}
JDK_FOLDER={%jdk_folder%}
su - elasticsearch -c "export JAVA_HOME=$JDK_FOLDER; $ELK_FOLDER/logstash/bin/logstash -f $ELK_FOLDER/logstash/etc/logstash.conf &> /dev/null &"

# Kibana
ELK_FOLDER={%elk_folder%}
su - elasticsearch -c "$ELK_FOLDER/kibana/bin/kibana &>/dev/null &" &

# Cerebro
ELK_FOLDER={%elk_folder%}
JDK_FOLDER={%jdk_folder%}
rm $ELK_FOLDER/cerebro/RUNNING_PID 2> /dev/null    # remove a stale PID file left by an unclean shutdown
su - elasticsearch -c "export JAVA_HOME=$JDK_FOLDER; $ELK_FOLDER/cerebro-0.7.2/bin/cerebro -Dhttp.port=9001 &> /dev/null &" &
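
With everything launched, a quick liveness check (a minimal sketch; localhost and the default ports 9200/5601/9001 are assumptions based on the configs above):

# Confirm each service responds (localhost and default ports assumed)
curl -s localhost:9200/_cluster/health?pretty               # Elasticsearch cluster health
curl -s -o /dev/null -w "%{http_code}\n" localhost:5601     # Kibana
curl -s -o /dev/null -w "%{http_code}\n" localhost:9001     # Cerebro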

4. Elasticsearch cleanup script

#!/bin/bash

ES_IP=    # fill in the Elasticsearch host IP

# Delete date-suffixed indices between 90 and 729 days old
for (( i=90; i<730; i++ )); do
    dayago=`date -d "-$i days" +%Y.%m.%d`
    # Quote the URL so the shell does not glob the *
    curl -XDELETE "$ES_IP:9200/*-$dayago" &> /dev/null
done
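
To run this automatically, a daily cron entry works (the script path below is a placeholder):

# crontab -e: purge old indices at 01:00 every day
0 1 * * * /path/to/es_clean.sh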

5. Enable CORS in Elasticsearch

# elasticsearch.yml

http.cors.enabled: true
http.cors.allow-origin: /.*/
http.cors.allow-credentials: true
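
After restarting Elasticsearch, CORS can be verified with a request carrying an Origin header (localhost:9200 and the example origin are assumptions; any origin matches the /.*/ pattern):

# Expect Access-Control-Allow-Origin in the response headers
curl -s -i -H "Origin: http://example.com" localhost:9200 | grep -i access-control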

6. Elasticsearch JVM heap sizing notes

# -Xms31g
# -Xmx31g
1. Set the heap to slightly less than half of available RAM, leaving the other half for the Linux filesystem cache.
2. Above roughly 32G the JVM abandons compressed object pointers for full 64-bit addressing, so a heap just over 32G effectively holds less than one just under it. Unless the machine has well over 128G of RAM, keep the heap below 32G.
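
The exact cutoff varies by JVM build, and it can be probed directly (a quick check, not part of the original post's scripts):

# Prints whether compressed pointers survive at a 31G heap
java -Xmx31g -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops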

7. Logstash batch size notes

The number of pipeline workers defaults to the number of CPU cores.

Batch size is the number of events bundled into a single Elasticsearch Bulk API request.
The default batch size is on the small side, and a flood of small batches drags down the indexing rate. Raising it (together with the Logstash JVM heap) speeds up Elasticsearch indexing; a sample configuration follows below.

Delay is the batch flush timeout: the interval after which a partially filled batch is sent anyway. Lowering it improves data freshness.
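
These knobs live in logstash.yml (the values below are illustrative starting points, not figures from the original post):

# logstash.yml
pipeline.workers: 8          # defaults to the number of CPU cores
pipeline.batch.size: 1000    # events per Bulk API request (default 125)
pipeline.batch.delay: 50     # ms to wait before flushing a partial batch (default 50)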

8. Index template

{
  "order": 0,
  "template": "*",
  "settings": {
    "index": {
      "max_result_window": "10000000",
      "mapping": {
        "total_fields": {
          "limit": "10000"
        }
      },
      "refresh_interval": "1s",
      "number_of_shards": "5",
      "number_of_replicas": "0"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "mapping": {
              "fielddata": {
                "format": "true"
              },
              "index": "not_analyzed",
              "ignore_above": 256,
              "omit_norms": true,
              "type": "string"
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        }
      ],
      "_all": {
        "omit_norms": true,
        "enabled": false
      },
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "logmsg": {
          "type": "text"
        }
      }
    }
  },
  "aliases": {}
"max_result_window": "10000000" #设置查询结果最大返回窗口大小
"refresh_interval": "1s", #Index刷新间隔
"number_of_shards": "5", #分片数量
"number_of_replicas": "0", #副本数量
"mapping": {
"total_fields": {
"limit": "10000" #最大Field数量限制
}
}
mapping里设置了所有字段不分词,以及忽略超过256个以上的字段。
proprety里设置了部分信息的类型。
关闭了__all__字段节约空间。
自带的分词多数情况下不适用于日志查询,于日志类型多使用wildcard查询。
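
A sketch of loading the template, assuming it is saved as template.json and the name default_template is a placeholder (the legacy _template API matches the _default_ mapping used above):

# Install the template so it applies to all newly created indices
curl -XPUT "localhost:9200/_template/default_template" \
     -H 'Content-Type: application/json' \
     -d @template.json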



Reposted from www.cnblogs.com/caizhifeng/p/10281508.html