ELK master-slave cluster deployment (there are only two machines in production...)

Overview

Machine IP    Node name    Services
114           node-1       data + master node (install elasticsearch, logstash, kibana)
115           node-2       data node (install elasticsearch)

If a single Logstash instance cannot collect all of the services' logs, node-2 also needs its own Logstash.
There are many ways for Logstash to collect all logs, for example by going through a message queue (a sketch follows below).
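For illustration only, here is a minimal sketch of the message-queue approach using Logstash's kafka input; the broker address and topic name are assumptions, not part of this deployment:

input {
  kafka {
    # broker and topic are placeholders - adjust to your environment
    bootstrap_servers => "114:9092"
    topics => ["app-logs"]
    codec => json
  }
}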

Main idea: Elasticsearch is inherently distributed; it knows how to manage multiple nodes to improve scalability and availability, which means your application does not need to concern itself with this.

A running Elasticsearch instance is called a node, and a cluster consists of one or more nodes with the same cluster.name configuration. Together they share the data and the query load. When a node is added to or removed from the cluster, the cluster redistributes the data evenly.

So building the cluster only requires the ES cluster configuration; Logstash and Kibana just need their output settings pointed at it, which is simple.

Here we only cover the cluster configuration, which applies whether or not you use Docker (I use Docker; make sure ports 9200 and 9300 are open, since 9300 is ES's default transport/communication port). In the configs below, 114 and 115 are shorthand for the machine IPs 10.212.8.114 and 10.212.8.115.
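As a rough sketch of the Docker side, an Elasticsearch container with both ports published might be started like this; the container name, image tag, and host paths are assumptions to adjust for your environment:

# publish both the HTTP port (9200) and the transport port (9300);
# name, paths, and image tag are placeholders
docker run -d --name es-node-1 \
  -p 9200:9200 -p 9300:9300 \
  -v /data/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/es/data:/usr/share/elasticsearch/data \
  elasticsearch:7.9.3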

Configuration file

node-1 elasticsearch.yml file configuration

cluster.name: cluster-es
node.name: node-1
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 114
discovery.seed_hosts: ["114:9300","115:9300"]
cluster.initial_master_nodes: ["node-1","node-2"]
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"

node-2 elasticsearch.yml file configuration

cluster.name: cluster-es
node.name: node-2
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 115
discovery.seed_hosts: ["114:9300","115:9300"]
cluster.initial_master_nodes: ["node-1","node-2"]
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"

Note: if you reuse an ES server that previously ran on its own, you need to delete its data folder first; otherwise the node keeps the old persisted cluster state and the other cluster nodes may not be found.
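A sketch of clearing the old state for the Docker setup above (container name and data path follow the earlier sketch; note this deletes any existing indices):

# stop the node, wipe the persisted cluster state, then restart
docker stop es-node-1
rm -rf /data/es/data/*
docker start es-node-1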

Parameter Description:

(1) cluster.name
The cluster name; it must be identical on every node in the cluster.
(2) node.name
The node name; each ES node's name must be different.
(3) discovery.zen.minimum_master_nodes: 1
The minimum number of master-eligible nodes that must be present for the cluster to operate; if fewer are visible, the cluster will not start. The official recommendation is (number of master-eligible nodes / 2) + 1. Since this production environment has only two machines, it is set to 1 here; for other cluster sizes, calculate nodes/2 + 1. (Note that in Elasticsearch 7+ this zen setting is deprecated and the quorum is managed automatically.)
(4) node.master
Whether the node is eligible to be elected master. If minimum_master_nodes is set to 2, i.e. at least two master-eligible nodes are required, then at least two ES servers in the cluster must be configured with node.master: true. With only two master-eligible nodes, if the master goes down the whole cluster becomes unavailable. Therefore, with three servers, you should set node.master: true on all three: if one master-eligible node goes down, a new master is elected and two nodes remain available. As long as the number of running ES servers configured with node.master: true is not less than the configured minimum, the cluster stays available. I configured node.master: true on all three nodes here, i.e. three master-eligible nodes. The master node mainly manages cluster state and metadata, such as index creation and deletion and shard allocation; data storage and queries do not go through the master, so its pressure is lower and its JVM memory can be allocated lower.
(5) node.data
Whether the node stores index data; all nodes can be set to true.
(6) bind_host
The interface(s) Elasticsearch binds to and listens on (here network.host: 0.0.0.0 listens on all interfaces).
(7) publish_host
The address Elasticsearch advertises to the other cluster nodes for communication (network.publish_host above).
(8) discovery.seed_hosts
The IPs of the cluster nodes used for discovery; the port is 9300.
(9) cluster.initial_master_nodes
The host names or IP addresses of the master-eligible nodes used to bootstrap the cluster.
(10) http.cors.enabled: true
Switch that enables cross-origin (CORS) access.
(11) http.cors.allow-origin: "*"
Allows cross-origin access from any origin.
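To check which node was actually elected master after startup (using the 114 shorthand from above), the _cat APIs are handy:

# list the elected master, then all nodes with their roles
curl http://114:9200/_cat/master?v
curl http://114:9200/_cat/nodes?v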

Logstash's logstash.conf file configuration

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  # beats {
  #   port => 5044
  # }
  tcp {
    # host => "127.0.0.1"
    port => 5044
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["http://114:9200","http://115:9200"]
    index =>  "logstash-console-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
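A quick smoke test for this pipeline: send one JSON line to the TCP input with netcat and check that the event appears on stdout and in the logstash-console-* index (the message content is just an example):

# send a single json_lines event to the tcp input on port 5044
echo '{"message":"hello from nc","level":"INFO"}' | nc 114 5044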

Because the Logstash input here can already collect logs from all of the services, only the output needs to be configured.

kibana's kibana.yml file configuration

server.name: kibana
server.port: 5601
server.host: "0"

i18n.locale: "zh-CN"
# IPs of the corresponding elasticsearch services
elasticsearch.hosts: ["http://10.212.8.114:9200","http://10.212.8.115:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
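For completeness, a sketch of running Kibana in Docker with this file mounted; the image tag and host path are assumptions:

# mount the kibana.yml above and expose the web UI port
docker run -d --name kibana \
  -p 5601:5601 \
  -v /data/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
  kibana:7.9.3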

How to verify that the cluster formed successfully
Cluster health status in a browser: http://114:9200/_cluster/health?pretty
Cluster node status in a browser: http://114:9200/_nodes?pretty
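In the health response, the fields to check are status and number_of_nodes; for this two-node setup the output should look roughly like this (abridged):

{
  "cluster_name" : "cluster-es",
  "status" : "green",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2
}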

Kibana also has a monitoring view under its settings where you can see this information visually.

Origin: blog.csdn.net/qq_44961149/article/details/120162510