Back-end Development: Elasticsearch Series ---- ES Cluster

Clusters

Building a cluster

Suppose we have three machines: es1, es2 and es3.
Modify the elasticsearch.yml configuration file on all three machines:

# Cluster name. Every node must use the same cluster name so that
# they all belong to the same cluster
cluster.name: es-cluster

# The name of each node; it must be unique within the cluster
node.name: ${the node name you define, e.g. es-node1}

# HTTP port; the default is usually fine
http.port: 9200

# Master eligibility. The master node manages the whole cluster: it creates
# and deletes indices and manages the other, non-master nodes.
# With this property set, even if the node is not the master at first, it can
# take part in the master election once the current master goes down
node.master: true

# Data node: handles create, read, update and delete operations on document data
node.data: true

# Cluster host list; suppose es-node1 is 101, es-node2 is 102 and es-node3 is 103
discovery.seed_hosts: ["192.168.0.101","192.168.0.102","192.168.0.103"]

# The master-eligible node(s) used when the cluster starts for the first time
cluster.initial_master_nodes: ["es-node1"]

Then start Elasticsearch on all three machines. In fact, of the settings above only the node.name property differs from node to node; the rest of the configuration is essentially the same.
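
To verify that all three nodes actually joined, you can query the cluster health API on any node. A minimal sketch in Python, assuming the requests library is installed and one node is reachable at 192.168.0.101:9200 (the addresses used in the configuration above):

import requests

# Query the cluster health endpoint on any node; the address matches
# the discovery.seed_hosts list in the configuration above.
resp = requests.get("http://192.168.0.101:9200/_cluster/health")
health = resp.json()

# A correctly formed 3-node cluster should report 3 nodes and a
# status of "green" once all shards are allocated.
print(health["cluster_name"])       # expected: es-cluster
print(health["number_of_nodes"])    # expected: 3
print(health["status"])             # expected: green (or yellow)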

The cluster split-brain phenomenon

If the network is interrupted or a server goes down, the cluster may be divided into two parts, each managed by its own master. This is split brain.

The split-brain solution
A new master must be chosen through a joint election among the nodes that have the master role enabled.
The principle of the solution: a node becomes the new master only if more than half of the master-eligible nodes vote for it.

  • discovery.zen.minimum_master_nodes = (N / 2) + 1
    where N is the number of master-eligible nodes in the cluster, i.e. the number of server nodes configured with node.master: true
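
As a quick worked example, with N = 3 master-eligible nodes the quorum is (3 / 2) + 1 = 2, so no two disjoint network partitions can both reach a majority:

# Quorum needed for a master election: more than half of the
# master-eligible nodes must agree.
def minimum_master_nodes(n: int) -> int:
    return n // 2 + 1

for n in (3, 4, 5):
    print(n, "master-eligible nodes -> quorum", minimum_master_nodes(n))
# 3 -> 2, 4 -> 3, 5 -> 3: two disjoint partitions can never both
# reach the quorum, so only one side can elect a master.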

In ES 7.x, however, the minimum_master_nodes parameter was removed; Elasticsearch now manages this entirely by itself, which avoids the split-brain problem and makes elections very fast.

The principles of ES reads and writes

Suppose we now have three nodes es1, es2 and es3, and the cluster holds an index shop with three shards, each shard having one replica:
es1: P1, P2
es2: P0, R1
es3: R0, R2
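
You can see how the shards of an index are actually distributed with the _cat/shards API. A sketch, assuming the shop index and the node addresses from earlier:

import requests

# List the shard copies of the "shop" index: one line per copy,
# showing shard number, primary (p) or replica (r), state and node.
resp = requests.get(
    "http://192.168.0.101:9200/_cat/shards/shop",
    params={"v": "true", "h": "index,shard,prirep,state,node"},
)
print(resp.text)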

Write principle

When the client sends a request to the cluster, one of the three machines is picked as the coordinating node. Suppose es2 is chosen. The coordinating node calculates which shard the document should be written to (according to a routing algorithm; see the sketch below). Suppose the document routes to P2: the coordinating node forwards the request to es1, where the document is written to P2; the write is then replicated to P2's replica R2 on es3 (data synchronization); finally control returns to the coordinating node es2, which sends the response back to the client.
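
The "routing algorithm" is document routing: by default Elasticsearch computes hash(_routing) % number_of_primary_shards, where _routing defaults to the document id. Internally ES uses a Murmur3 hash; the md5 in the sketch below is only an illustrative stand-in:

import hashlib

NUM_PRIMARY_SHARDS = 3  # the "shop" index above has P0, P1, P2

def route_to_shard(doc_id: str, num_shards: int = NUM_PRIMARY_SHARDS) -> int:
    # Elasticsearch actually uses Murmur3 on the _routing value (the
    # document id by default); md5 here is just a stand-in to illustrate
    # hash(routing) % number_of_primary_shards.
    h = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
    return h % num_shards

# Every node computes the same shard for the same id, so the
# coordinating node always forwards the write to the same primary.
print(route_to_shard("order-1001"))  # deterministic value in 0..2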

Read principle

Reads follow the same principle: a coordinating node is chosen first, and it determines which shard the requested data belongs to. The data can be read either from that shard's primary copy or from a replica copy (if this request hits the primary, the next one will probably hit a replica, which acts as load balancing). The coordinating node then forwards the request to the node where the chosen shard copy lives; whichever copy serves the read, the data is first returned to the coordinating node, and the coordinating node then responds to the client.
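
From the client's point of view the read is a single request to any node, which then acts as the coordinating node. A sketch, assuming a document with id 1 exists in the shop index:

import requests

# Send the read to any node; it acts as the coordinating node, locates
# the shard that holds the document, and fetches it from either the
# primary or a replica copy.
resp = requests.get("http://192.168.0.102:9200/shop/_doc/1")
doc = resp.json()
print(doc.get("found"), doc.get("_source"))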


Origin blog.csdn.net/weixin_39702831/article/details/105011671