Elasticsearch cluster configuration

1 Environment preparation

1.1 Server resources

Here I am using a virtual machine environment.

Server   IP address
node1    192.168.51.4
node2    192.168.51.5
node3    192.168.51.6
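
Before configuring anything, it is worth confirming that the nodes can reach each other. A minimal check from node1, assuming the addresses above:

[root@localhost ~]# ping -c 1 192.168.51.5
[root@localhost ~]# ping -c 1 192.168.51.6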

1.2 Install and configure es

For details, please refer to the blog post: https://blog.csdn.net/qq_15769939/article/details/114249211

In a virtual machine environment, you can set up one node and then clone it twice.

If you cloned the VMs, you must clear all data in the custom es data directory. If you installed following my tutorial above, that means deleting every file under:

/usr/local/elasticsearch-7.4.2/data
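
A minimal sketch, assuming the data path above (run on each cloned node before its first start):

[root@localhost ~]# rm -rf /usr/local/elasticsearch-7.4.2/data/*

Clearing this directory removes the cloned node's identity files; otherwise the three clones would share the same node ID and fail to join one cluster.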

2 Configure the cluster

2.1 node1

Edit the configuration file:

[root@localhost config]# vi /usr/local/elasticsearch-7.4.2/config/elasticsearch.yml

Settings to change in the configuration file:

# Cluster name; it must be identical on every node so that they join the same cluster
cluster.name: auskat-es-cluster

# Node name; unique for each node
node.name: es-node1

# HTTP port (the default)
http.port: 9200

# Master-eligible node; the master manages the whole cluster, creates and deletes indices, and manages the other non-master nodes (leader)
node.master: true

# Data node; handles create/read/update/delete operations on document data
node.data: true

# Hosts used for cluster discovery
discovery.seed_hosts: ["192.168.51.4","192.168.51.5","192.168.51.6"]

# Master-eligible nodes used to bootstrap the cluster on first startup
cluster.initial_master_nodes: ["es-node1"]

Strip the comments to view the effective configuration:

[root@localhost config]# more /usr/local/elasticsearch-7.4.2/config/elasticsearch.yml | grep ^[^#]
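
The output should look roughly like the following, plus any settings carried over from the base install (for example network.host or the data and log paths):

cluster.name: auskat-es-cluster
node.name: es-node1
http.port: 9200
node.master: true
node.data: true
discovery.seed_hosts: ["192.168.51.4","192.168.51.5","192.168.51.6"]
cluster.initial_master_nodes: ["es-node1"]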

2.2 node2

Edit the configuration file:

[root@localhost config]# vi /usr/local/elasticsearch-7.4.2/config/elasticsearch.yml

Settings to change in the configuration file:

# Cluster name; it must be identical on every node so that they join the same cluster
cluster.name: auskat-es-cluster

# Node name; unique for each node
node.name: es-node2

# HTTP port (the default)
http.port: 9200

# Master-eligible node; the master manages the whole cluster, creates and deletes indices, and manages the other non-master nodes (leader)
node.master: true

# Data node; handles create/read/update/delete operations on document data
node.data: true

# Hosts used for cluster discovery
discovery.seed_hosts: ["192.168.51.4","192.168.51.5","192.168.51.6"]

# Master-eligible nodes used to bootstrap the cluster on first startup
cluster.initial_master_nodes: ["es-node1"]

Strip the comments to view the effective configuration:

[root@localhost config]# more /usr/local/elasticsearch-7.4.2/config/elasticsearch.yml | grep ^[^#]

2.3 node3

Edit the configuration file:

[root@localhost config]# vi /usr/local/elasticsearch-7.4.2/config/elasticsearch.yml

Settings to change in the configuration file:

# Cluster name; it must be identical on every node so that they join the same cluster
cluster.name: auskat-es-cluster

# Node name; unique for each node
node.name: es-node3

# HTTP port (the default)
http.port: 9200

# Master-eligible node; the master manages the whole cluster, creates and deletes indices, and manages the other non-master nodes (leader)
node.master: true

# Data node; handles create/read/update/delete operations on document data
node.data: true

# Hosts used for cluster discovery
discovery.seed_hosts: ["192.168.51.4","192.168.51.5","192.168.51.6"]

# Master-eligible nodes used to bootstrap the cluster on first startup
cluster.initial_master_nodes: ["es-node1"]

Strip the comments to view the effective configuration:

[root@localhost config]# more /usr/local/elasticsearch-7.4.2/config/elasticsearch.yml | grep ^[^#]

2.4 Start the service

Start the elasticsearch service on each of the three nodes:

[root@localhost config]# cd /usr/local/elasticsearch-7.4.2/bin
[root@localhost bin]# ./elasticsearch -d
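
Once all three nodes are up, you can verify that the cluster formed via the HTTP API. A quick check, assuming the nodes listen on the IPs above (the base install from section 1.2 is expected to have set network.host accordingly):

[root@localhost bin]# curl http://192.168.51.4:9200/_cluster/health?pretty
[root@localhost bin]# curl http://192.168.51.4:9200/_cat/nodes?v

A healthy three-node cluster reports "number_of_nodes" : 3, and _cat/nodes marks the elected master with an asterisk in the master column.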

3 Cluster split-brain problem

3.1 Introduction

If the network is interrupted or a server goes down, the cluster may split into two parts, each managed by its own master; this is split brain.

3.2 Solution

A node can become the new master only after it has been jointly elected by the master-eligible nodes.

Before ES 7, the solution was:

  • A node can become the new master only after more than half of the master-eligible nodes have voted for it
  • discovery.zen.minimum_master_nodes = (N/2) + 1
  • N is the number of master-eligible nodes in the cluster, i.e. the total number of nodes configured with node.master: true (see the worked example below)
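
For the three-node cluster in this post, all three nodes set node.master: true, so N = 3 and:

discovery.zen.minimum_master_nodes = (3 / 2) + 1 = 1 + 1 = 2

With a quorum of 2, a partition that isolates a single node can never elect its own master; only the side holding the majority can.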

ES 7.X

In the 7.x releases, the minimum_master_nodes parameter has been removed; Elasticsearch now manages the election quorum itself, which avoids the split-brain problem and also speeds up master elections.
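
To confirm which node was elected master, one option is the _cat API (any node's address works):

[root@localhost ~]# curl http://192.168.51.4:9200/_cat/master?v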

4 Related information

  • Writing blog posts is not easy; if this one helped you, please follow and like. Thank you!
