Three-node ES cluster reports "no known master node" after a restart

First, the problem

I had been looking into how to monitor ES. Being a bit lazy, I did not want to pull the numbers through the API and compute them myself, so I went looking for a ready-made plugin or monitoring tool that only needed an agent installed, and found x-pack. After the plugin was installed, the ES cluster had to be restarted. Since it is a cluster, I assumed a restart would not be a problem. I overestimated it: after the restart, the whole cluster went down...
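For reference, installing the plugin on an ES 5.x node looks roughly like the sketch below; the install directory is an assumption, adjust it to your own layout, and every node needs the plugin installed before the restart.

# run on every node; the path below is assumed, use your own ES home directory
cd /usr/local/elasticsearch
bin/elasticsearch-plugin install x-pack
# the node has to be restarted afterwards for the plugin to load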
 

Second, the operation process

1. The operating system
[centos@ip-172-0-0-233 bin]$ cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 

 

2. The ES version
[centos@ip-172-0-0-233 bin]$ ./elasticsearch --version
Version: 5.0.2, Build: f6b4951/2016-11-24T10:07:18.101Z, JVM: 1.8.0_131

 

3. Killing the processes

ps -ef | grep pid    # find the process id of the ES process on each node
kill -9 pid          # force-kill it (pid is a placeholder)

That was the whole operation, and I regretted it afterwards: every node's ES service was killed this way with kill -9, and I do not know whether that rough shutdown contributed to the cluster failing to come back up.
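A gentler per-node restart, sketched below assuming the cluster is still healthy and reachable on 172.0.0.16:9200, would disable shard allocation first and send a normal SIGTERM instead of kill -9:

# 1. stop shard allocation so the cluster does not start moving shards around
curl -XPUT 'http://172.0.0.16:9200/_cluster/settings' -d '
{ "transient": { "cluster.routing.allocation.enable": "none" } }'

# 2. shut the node down cleanly (plain kill sends SIGTERM; pid is a placeholder)
kill pid

# 3. start the node again, wait for it to rejoin, then re-enable allocation
curl -XPUT 'http://172.0.0.16:9200/_cluster/settings' -d '
{ "transient": { "cluster.routing.allocation.enable": "all" } }'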

 

4. The error messages

[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-painless]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [percolator]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [reindex]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty3]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty4]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] no plugins loaded
[2019-10-17T08:43:41,612][INFO ][o.e.n.Node               ] [node-1] initialized
[2019-10-17T08:43:41,613][INFO ][o.e.n.Node               ] [node-1] starting ...
[2019-10-17T08:43:41,812][INFO ][o.e.t.TransportService   ] [node-1] publish_address {172.0.0.16:9300}, bound_addresses {172.30.36.146:9300}
[2019-10-17T08:43:41,817][INFO ][o.e.b.BootstrapCheck     ] [node-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks

[2019-10-17T08:44:11,833][WARN ][o.e.n.Node               ] [node-1] timed out while waiting for initial discovery state - timeout: 30s
[2019-10-17T08:44:11,839][INFO ][o.e.h.HttpServer         ] [node-1] publish_address {172.0.0.16:9200}, bound_addresses {172.30.36.146:9200}
[2019-10-17T08:44:11,839][INFO ][o.e.n.Node               ] [node-1] started
[2019-10-17T08:44:12,001][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,001][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,003][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,010][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,010][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,228][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,758][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,759][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,760][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,814][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,814][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,815][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,815][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,817][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,817][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,817][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,820][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,820][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,821][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,822][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,822][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,823][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,824][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,826][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,827][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,827][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,828][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,828][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,830][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,830][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:42,012][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2019-10-17T08:44:42,012][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2019-10-17T08:44:42,013][WARN ][r.suppressed             ] path: /_cluster/state/metadata, params: {metric=metadata}
org.elasticsearch.discovery.MasterNotDiscoveredException
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:214) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:350) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:240) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.service.ClusterService$NotifyTimeout.run(ClusterService.java:957) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.0.2.jar:5.0.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2019-10-17T08:44:42,013][WARN ][r.suppressed             ] path: /_cluster/state/metadata, params: {metric=metadata}
org.elasticsearch.discovery.MasterNotDiscoveredException
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:214) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:350) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:240) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.service.ClusterService$NotifyTimeout.run(ClusterService.java:957) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.0.2.jar:5.0.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2019-10-17T08:44:42,760][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2019-10-17T08:44:42,761][WARN ][r.suppressed             ] path: /_cluster/state/metadata, params: {metric=metadata}
org.elasticsearch.discovery.MasterNotDiscoveredException
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:214) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:350) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:240) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.service.ClusterService$NotifyTimeout.run(ClusterService.java:957) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.0.2.jar:5.0.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
 
 
5. The configuration file
cluster.name: lile
node.name: node-1
bootstrap.memory_lock: true
network.host: 172.0.0.16
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.0.0.16","172.0.0.17","172.0.0.18"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true 
http.cors.allow-origin: "*"
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
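Once a node is back up, a quick way to see whether a master has been elected is to query the cluster APIs (the address below matches node-1's network.host above; adjust for the other nodes):

# overall health: status, node count, and whether the cluster state is usable
curl -s 'http://172.0.0.16:9200/_cluster/health?pretty'

# the currently elected master, if any
curl -s 'http://172.0.0.16:9200/_cat/master?v'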

 

Third, the solution

No amount of restarting helped. Everything I found online just said to restart, but restart after restart did not make things any better. When I set discovery.zen.minimum_master_nodes to 1, the nodes did start, but all three elected themselves as master (a split brain). Later I came across the parameter below; after adding it on all three nodes and restarting, everything was fine.
 
 
discovery.zen.ping_timeout: 60s
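Put together, the discovery-related part of elasticsearch.yml on each of the three nodes ends up looking roughly like this (only node.name and network.host differ per node):

discovery.zen.ping.unicast.hosts: ["172.0.0.16","172.0.0.17","172.0.0.18"]
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 60s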

 

Fourth, analyzing the cause

I have not looked into it in detail yet, but my feeling is that the time the nodes were given to discover one another was too short: they could not find each other within the default ping timeout, and with minimum_master_nodes set to 2, at least two nodes have to see each other before a cluster can form.
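For completeness, the quorum rule behind minimum_master_nodes for this three-node cluster works out as follows:

# minimum_master_nodes = (master-eligible nodes / 2) + 1
# here: (3 / 2) + 1 = 1 + 1 = 2
discovery.zen.minimum_master_nodes: 2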
 


Origin www.cnblogs.com/lemon-le/p/11707138.html