Implementing hot/cold log separation in ELK

Method 1:

Modify the elasticsearch.yml configuration file on each node as shown below:

Configuration for the master-node1 node (tagged as a hot node):

[root@master-node1 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#" | grep -v "^$"

cluster.name: "es"

node.name: "master-node1"

node.master: true

node.data: true

node.attr.box_type: "hot"

cluster.routing.allocation.node_initial_primaries_recoveries: 4

cluster.routing.allocation.node_concurrent_recoveries: 4

transport.tcp.compress: true

path.data: /var/lib/elasticsearch

path.logs: /var/log/elasticsearch

network.host: 0.0.0.0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.101.17", "192.168.101.23","192.168.101.16"]

 

Configuration for the master-node2 node (tagged as a cold node):

[root@master-node2 elasticsearch]# cat elasticsearch.yml | grep -v "#" | grep -v "^$"

cluster.name: "es"

node.name: "master-node2"

node.master: true

node.data: true

node.attr.box_type: "cold"

cluster.routing.allocation.node_initial_primaries_recoveries: 4

cluster.routing.allocation.node_concurrent_recoveries: 4

transport.tcp.compress: true

path.data: /var/lib/elasticsearch

path.logs: /var/log/elasticsearch

network.host: 0.0.0.0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.101.17", "192.168.101.23","192.168.101.16"]

 

Configuration for the master-node3 node (tagged as a cold node):

[root@master-node3 elasticsearch]# cat elasticsearch.yml | grep -v "#" | grep -v "^$"

cluster.name: "es"

node.name: "master-node3"

node.master: true

node.data: true

node.attr.box_type: "cold"

cluster.routing.allocation.node_initial_primaries_recoveries: 4

cluster.routing.allocation.node_concurrent_recoveries: 4

transport.tcp.compress: true

path.data: /var/lib/elasticsearch

path.logs: /var/log/elasticsearch

network.host: 0.0.0.0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.101.17", "192.168.101.23","192.168.101.16"]

 

After restarting Elasticsearch on each node so the `node.attr.box_type` attributes take effect, use the Elasticsearch API (for example via Kibana Dev Tools) to implement hot/cold separation. First, create an index template so that newly created indices are allocated to the hot node:

PUT _template/filebeat

{

  "index_patterns":"filebeat-6.0.0-2018.*",

  "settings": {

    "index.number_of_replicas":"3",

    "index.routing.allocation.require.box_type":"hot"

  }

}
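
The template's `index_patterns` value is a wildcard: every newly created index whose name matches it receives the `box_type: hot` allocation setting at creation time. The matching behaves like a shell glob, which this local sketch illustrates (the second index name is just an invented non-matching example):

```shell
pattern="filebeat-6.0.0-2018.*"
for idx in filebeat-6.0.0-2018.12.09 filebeat-6.0.0-2019.01.01; do
  # An unquoted $pattern in `case` is treated as a glob, analogous to
  # how Elasticsearch matches index names against index_patterns.
  case "$idx" in
    $pattern) echo "$idx -> template applies (allocated hot)" ;;
    *)        echo "$idx -> no template match" ;;
  esac
done
```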

Then move an existing daily index to the cold nodes by updating its allocation setting:

PUT filebeat-6.0.0-2018.12.09/_settings

{

  "settings":{

    "index.routing.allocation.require.box_type":"cold"

  }

}

The following command shows that the index shards have been migrated to master-node2 and master-node3:

curl '192.168.101.17:9200/_cat/shards?v'

index                     shard prirep state      docs store  ip             node

filebeat-6.0.0-2018.12.09   1     p      STARTED    119  37.5kb 192.168.101.23 master-node2

filebeat-6.0.0-2018.12.09   1     r      STARTED    119  37.5kb 192.168.101.16 master-node3

filebeat-6.0.0-2018.12.09   2     r      STARTED    111  64.6kb 192.168.101.23 master-node2

filebeat-6.0.0-2018.12.09   2     p      STARTED    111  64.6kb 192.168.101.16 master-node3

filebeat-6.0.0-2018.12.09   0     r      STARTED    118  60.2kb 192.168.101.23 master-node2

filebeat-6.0.0-2018.12.09   0     p      STARTED    118  60.2kb 192.168.101.16 master-node3

 

You can automate the migration with a script and run it on a schedule as a cron job. For example, the following script moves all indices older than 8 days to the cold nodes:

 

#!/bin/bash

time=`date -d "8 days ago" +%Y.%m.%d`

echo $time

curl -H "Content-Type:application/json" -XPUT "http://192.168.101.17:9200/*-${time}/_settings?pretty" -d'

{

  "index.routing.allocation.require.box_type": "cold"

}'
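
To run the script on a schedule, register it with cron; the entry below is a sketch in which the script path and log path are assumptions, not values from the original post. The `date` arithmetic itself can be verified deterministically by pinning a reference date:

```shell
# Hypothetical crontab entry: run the migration script nightly at 01:00.
# (The path /opt/scripts/move_to_cold.sh is an assumption.)
#   0 1 * * * root /opt/scripts/move_to_cold.sh >> /var/log/move_to_cold.log 2>&1

# Sanity-check the "8 days ago" suffix with a fixed reference date
# (GNU date syntax): 2018-12-17 minus 8 days gives 2018.12.09,
# matching the index suffix used in the examples above.
date -d "2018-12-17 8 days ago" +%Y.%m.%d
```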

Reposted from blog.csdn.net/wzf862187413/article/details/87867866