Elasticsearch clusters: setting up a local cluster

Outline

  In Elasticsearch, a node is a single Elasticsearch instance, and a cluster is made up of one or more nodes. Nodes with the same cluster name cooperate with one another, sharing data and load. If a node is added or removed, the cluster senses the change automatically and rebalances the data.
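
  As a quick check, you can list the nodes that have joined a cluster with the cat API (runnable in Kibana Dev Tools; the v parameter just adds a header row):

GET _cat/nodes?v   # one row per node; the master column marks the elected master with *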

  To build a highly available, scalable system there are two ways to extend it: vertical scaling (buy a better machine) and horizontal scaling (buy more machines, recommended). With horizontal scaling, if a single node goes down the others can still serve requests, which is what gives the cluster its high-availability property.

Adding nodes to the cluster

  Broadcast mode (discovery via ping). Characteristic: not controllable.

    Copy the local elasticsearch directory to a separate location, then start the .bat file in its bin directory.

http://127.0.0.1:9200/_cluster/health

  As can be seen, the number of nodes is 2 and the status is green.
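
  For reference, a trimmed response might look like the following (the values are illustrative; with no configuration changes, a broadcast cluster keeps the default cluster name):

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2
}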

    Unicast mode (the usual approach; highly controllable)

    Flow chart: (original diagram omitted)

Electing the master node

  When the nodes in the cluster change, the nodes negotiate among themselves over who becomes the master node. The minimum number of master-eligible nodes is set in the configuration file:

discovery.zen.minimum_master_nodes: 2

  If the total number of nodes is 3, this is generally configured as 3/2 + 1 = 2 to prevent split-brain.
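
  The value follows the usual quorum rule; a quick sketch in configuration-comment form:

# quorum = (number of master-eligible nodes / 2) + 1, using integer division
# 3 nodes: 3/2 + 1 = 2
# 5 nodes: 5/2 + 1 = 3
discovery.zen.minimum_master_nodes: 2   # value for a 3-node cluster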

Preventing split-brain

Split-brain usually occurs under heavy load: when the other nodes in the cluster lose communication with the master node, the cluster can break into two independent clusters, each electing its own master; this is what split-brain means. To guard against it, set the minimum number of master-eligible nodes according to the cluster's total size. For example, with a total of 5 nodes:

discovery.zen.minimum_master_nodes: 3   # 3 = 5/2 + 1

  As shown: node1 is the cluster's master node. Because of network load or similar problems, the original cluster splits into two groups, node1/node2 and node3/node4/node5. Since we set the minimum number of master-eligible nodes to 3, only node3/node4/node5 can form a new cluster; node1/node2 cannot, and once the network recovers they must ask to join the node3/node4/node5 cluster. This is where the node.master parameter comes in, controlling which nodes are eligible to become the new cluster's master.

Fault detection

After the master node is determined, an internal ping mechanism checks whether the other nodes are still alive and healthy. The timing of these pings can also be configured:

discovery.zen.fd.ping_interval: 1s   # each node sends a ping request every 1 second
discovery.zen.fd.ping_timeout: 30s   # wait up to 30 seconds for a response
discovery.zen.fd.ping_retries: 3     # retry up to 3 times; if there is still no response, the node is considered lost

Building a local unicast cluster with 3 nodes in total

  Configuring Unicast discovery

Configuring node 1

cluster.name: my_escluster     # name of the cluster
node.name: node1               # node 1
network.host: 127.0.0.1        # bind ip
http.port: 9200                # local HTTP listening port 9200
transport.tcp.port: 9300       # cluster transport port 9300
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9302", "127.0.0.1:9304"]   # ip:port list allowed to form the cluster

Configuring node 2

cluster.name: my_escluster
node.name: node2
network.host: 127.0.0.1
http.port: 9202
transport.tcp.port: 9302
node.master: true              # eligible to be elected master
node.data: true                # allowed to store data (disk read/write)
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9302", "127.0.0.1:9304"]

Configuring node 3

cluster.name: my_escluster
node.name: node3
network.host: 127.0.0.1
http.port: 9204
transport.tcp.port: 9304
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9302", "127.0.0.1:9304"]
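
  Since each node listens on its own HTTP port, once discovery succeeds any of the three addresses should report the same three-node cluster:

http://127.0.0.1:9200/_cluster/health?pretty
http://127.0.0.1:9202/_cluster/health?pretty
http://127.0.0.1:9204/_cluster/health?pretty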

Build succeeded

  As the figure above shows, there are three nodes in this cluster and the status is green. Looking at the node information, node1 has become the master node, because I started node1 first; the total number of shards is 6.

Cluster Revisited      

   When node1 is started alone as a single node and there is no index data yet, the named cluster is an empty cluster.

  Querying cluster health information

GET _cluster/health   # query in Kibana Dev Tools
http://127.0.0.1:9200/_cluster/health?pretty   # enter in the browser

  The following result is returned:

{
  "cluster_name" : "my_escluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 3,
  "active_shards" : 3,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

  From this we can see the cluster name, health status, whether the request timed out, node counts, shard counts, and other information.

Health status

colour   description
green    all primary shards and all replica shards are available
yellow   all primary shards are available, but not all replica shards are available
red      not all primary shards are available
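
  A handy variant of the health request blocks until a given status is reached (standard parameters of the cluster health API):

GET _cluster/health?wait_for_status=green&timeout=10s   # returns as soon as the cluster is green, or after 10s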

A shard is the smallest unit of work; a single shard stores only a portion of all the data in the index.

Shards are divided into primary shards and replica shards. The number of primary shards is fixed once the index is created, while the number of replica shards can be adjusted.

A replica shard is a copy of a primary shard and protects against data loss.

  Adding an index

PUT blogs
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}

  This allocates three primary shards (the default is 5) and one replica per primary shard (the default: each primary shard gets one copy).
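
  Because only the replica count can be changed after creation, a sketch of raising it on the live index looks like this:

PUT blogs/_settings
{
  "number_of_replicas": 2
}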

  Health check

{
  "cluster_name" : "my_escluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 6,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 3,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 66.66666666666666
}

  The status is yellow: the primary shards are normal, but the replica shards are not all available yet; they are in the unassigned state, not yet assigned to a node. If a replica were kept on the same node as its primary, the data would be lost if that node went down.
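
  You can see exactly which shards are unassigned with the cat shards API:

GET _cat/shards/blogs?v   # the replica rows show the state UNASSIGNED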

Adding another node removes this risk of data loss: the three replica shards then get allocated, ensuring data integrity.

(Data is written to the primary shard first and then copied concurrently to the corresponding replica shards.)
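
For instance, indexing a document (a sketch; the _doc type name assumes Elasticsearch 6.x) writes to one primary shard and is then replicated:

PUT blogs/_doc/1
{
  "title": "hello cluster"
}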

Then view the status again:

{
  "cluster_name" : "my_escluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 6,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Continuing to scale

  As more and more nodes are added, resources get reallocated across them. You can view this information with the following commands:

GET _cluster/state/master_node,nodes?pretty   # returns all the nodes
GET _cluster/state/master_node,node?pretty    # returns information about the current master node
GET _nodes                                    # returns a list of all nodes

  The case where the master node goes down

  At this point, node2 becomes the new master node.
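
  Which node currently holds the master role can be confirmed with the cat API:

GET _cat/master?v   # returns the id, host, ip and node name of the elected master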

node.master: false   # whether this node can be elected master; the default is true
node.data: true      # whether this node may store data; the default is true
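
  The two flags combine into the common role setups (a summary, in configuration-comment form):

# node.master: true,  node.data: true   -> master-eligible data node (the default)
# node.master: true,  node.data: false  -> dedicated master-eligible node
# node.master: false, node.data: true   -> dedicated data node
# node.master: false, node.data: false  -> coordinating-only node (routes requests)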

  The command to decommission a node:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "192.168.1.1"
  }
}

  All shards on the excluded node are then moved to other nodes. This setting is transient: it no longer applies after the cluster restarts.
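
  Once the shards have drained and the node has been shut down, the exclusion can be cleared by setting it back to null (a sketch):

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": null
  }
}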
