ELK learning experiment 003: Elasticsearch cluster installation

Earlier articles introduced Elasticsearch and walked through a stand-alone installation; this one builds an Elasticsearch cluster on three machines.

1 Prepare the environment

1.1 Prepare the machines

1.2 Time Synchronization

[root@node* ~]# ntpdate ntp1.aliyun.com

23 Nov 20:45:52 ntpdate[16005]: adjust time server 120.25.115.20 offset -0.015719 sec

[root@node1 ~]# crontab -l

* * * * * /usr/sbin/ntpdate    ntp1.aliyun.com

1.3 Check other configuration parameters

Kernel parameters and open-file/process limits, for example, must be checked on all three nodes.

[root@node2 ~]# sysctl -a|grep vm.max_map_count

vm.max_map_count = 655360

[root@node2 ~]# cat /etc/security/limits.conf

* soft nofile   65536
* hard nofile   131072
* soft nproc    2048
* hard nproc    4096

[root@node2 ~]# cat /etc/security/limits.d/20-nproc.conf

*          soft    nproc     4096
root       soft    nproc     unlimited
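Before moving on, the effective values can be read back on each node (and vm.max_map_count made persistent across reboots); a minimal sketch, with the sysctl.d file name as an assumption:

```shell
# Read back the effective kernel and limits settings on each node;
# Elasticsearch refuses to start if vm.max_map_count is below 262144.
sysctl -n vm.max_map_count   # should match the value configured above
ulimit -n                    # open-file limit for the current shell
ulimit -u                    # max user processes
# To persist vm.max_map_count across reboots (file name is an assumption):
#   echo 'vm.max_map_count = 655360' > /etc/sysctl.d/99-elasticsearch.conf
#   sysctl --system
```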

Download and unpack the elasticsearch software as described in the earlier articles, then move straight on to configuration.

2 Configure each node

2.1 The main configuration file

[root@node1 ~]# grep -Ev "^$|[#;]" /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: my-elktest-cluster
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.132.131","192.168.132.132","192.168.132.133"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"

[root@node2 ~]# grep -Ev "^$|[#;]" /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: my-elktest-cluster
node.name: node-2
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.132.131","192.168.132.132","192.168.132.133"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"

[root@node3 ~]# grep -Ev "^$|[#;]" /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: my-elktest-cluster
node.name: node-3
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.132.131","192.168.132.132","192.168.132.133"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
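The three files above differ only in node.name, so they can be generated from one template. A minimal sketch; the template and output paths under /tmp are assumptions, and copying each file out to its node (e.g. with scp) is left as a comment:

```shell
# Generate the three per-node config files from a single template so they
# stay identical except for node.name (paths under /tmp are assumptions).
cat > /tmp/elasticsearch.yml.tmpl <<'EOF'
cluster.name: my-elktest-cluster
node.name: __NODE_NAME__
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.132.131","192.168.132.132","192.168.132.133"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF

for i in 1 2 3; do
  sed "s/__NODE_NAME__/node-$i/" /tmp/elasticsearch.yml.tmpl \
    > /tmp/elasticsearch-node-$i.yml
  # then copy to the matching node, e.g.:
  # scp /tmp/elasticsearch-node-$i.yml root@192.168.132.13$i:/usr/local/elasticsearch/config/elasticsearch.yml
done
```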

[root@node* ~]# vi /etc/hosts

192.168.132.131    node-1
192.168.132.132    node-2
192.168.132.133    node-3

2.2 Start the service

[root@node1 ~]# systemctl restart elasticsearch

[root@node2 ~]# systemctl restart elasticsearch

[root@node3 ~]# systemctl restart elasticsearch

2.3 Check the service status

[root@node* ~]# systemctl status elasticsearch

All three nodes show the running status above, indicating that the service is up on all three.
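Independently of systemd, each node can be checked over HTTP: every node should answer on port 9200 with a JSON banner containing its node name and the cluster name. A sketch; the extract_names helper and the connect timeout are assumptions, the IPs are the cluster nodes used in this experiment:

```shell
# Liveness check: query each node's 9200 banner and pull out the node
# name and cluster name (extract_names is a small assumed helper).
extract_names() { grep -oE '"(name|cluster_name)" *: *"[^"]*"'; }

for ip in 192.168.132.131 192.168.132.132 192.168.132.133; do
  echo "== $ip =="
  curl -s --connect-timeout 2 "http://$ip:9200/" | extract_names
done
```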

2.4 Check with elasticsearch-head

(Screenshots omitted.) Create a new index in elasticsearch-head and view the result.

3 Simple cluster tests

3.1 Slave node test

Stop node-2:

[root@node2 ~]# systemctl stop elasticsearch

The data is still accessible with node-2 down. Start node-2 again:

[root@node2 ~]# systemctl start elasticsearch

Observe that the cluster returns to normal.

3.2 Test the impact of the master node going down

Stop the master node. (In the elasticsearch-head view, a five-pointed star marks the master node and dots mark the slave nodes.)

[root@node1 ~]# systemctl stop elasticsearch

Node node-1 is no longer visible; the master role has moved to node-2, and the shards that were on node-1 have been redistributed to node-2 and node-3.

Restore node-1

[root@node1 ~]# systemctl start  elasticsearch

The cluster returns to normal.

4 Viewing cluster information with curl

4.1 View the master node

[root@node1 ~]# curl http://192.168.132.131:9200/_cat/master

9qVjdVSvSAGlZ7lpB9O78g 192.168.132.132 192.168.132.132 node-2

4.2 View the data nodes

[root@node1 ~]# curl -XGET http://127.0.0.1:9200/_cat/nodes?pretty

192.168.132.133 32 95 0 0.00 0.01 0.05 dilm - node-3
192.168.132.131 35 80 0 0.00 0.01 0.05 dilm - node-1
192.168.132.132 29 96 0 0.00 0.01 0.05 dilm * node-2
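In this output the columns are ip, heap.percent, ram.percent, cpu, the three load averages, node.role (in 7.x: d = data, i = ingest, l = machine learning, m = master-eligible), the master marker (* marks the elected master, - the others), and the node name. That makes the elected master easy to extract; master_of below is an assumed helper name:

```shell
# Pull the elected master's name out of _cat/nodes: field 9 is the master
# marker (* = elected master), field 10 the node name.
master_of() { awk '$9 == "*" { print $10 }'; }

curl -s --connect-timeout 2 localhost:9200/_cat/nodes | master_of
```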

4.3 View the cluster health status

[root@node1 ~]# curl localhost:9200/_cluster/health?pretty

{
  "cluster_name" : "my-elktest-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 5,
  "active_shards" : 15,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
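In scripts, the health endpoint is handy as a gate: succeed only when the cluster reports "green". A minimal jq-free sketch, where cluster_is_green is an assumed helper name:

```shell
# Gate on cluster health: the grep checks the "status" field of the
# JSON shown above without needing jq.
cluster_is_green() { grep -q '"status" *: *"green"'; }

if curl -s --connect-timeout 2 localhost:9200/_cluster/health | cluster_is_green; then
  echo "cluster is green"
else
  echo "cluster is NOT green"
fi
```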

4.4 Other viewing commands

[root@node1 ~]# curl localhost:9200/_cat

=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates

This completes the basic experiment; further cluster experiments will follow.


Origin www.cnblogs.com/zyxnhr/p/11921675.html