Building an Elasticsearch cluster environment

Building a single-node Elasticsearch environment was briefly covered earlier; a single node acts as both master and data node. In a cluster, nodes can be assigned different roles as required. The node types are documented in detail on the official site (https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html). The roles of each node type are explained below:

Master node: manages the entire cluster. When several nodes are configured as master-eligible, the master is chosen by election, and if the current master fails a new one is elected. By default every node is master-eligible.

Data node: stores the data. By default every node is a data node.

Ingest node: pre-processes documents through ingest pipelines before they are indexed. By default every node is an ingest node.

Coordinating node: routes client requests to the right nodes and merges the results when a request has to touch several data nodes. Every node implicitly acts as a coordinating node.

By default a node is both master-eligible and a data node, and it can also pre-process documents through ingest pipelines. This is convenient for a small cluster, but as the cluster grows it is worth separating dedicated master-eligible nodes from dedicated data nodes; fully separating these roles makes the cluster noticeably more stable and efficient.
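
For reference, here is a minimal sketch of how these roles map onto elasticsearch.yml settings (assuming a pre-7.x release, which matches the node.master/node.data and discovery.zen.* settings used later in this article). Each variant is an alternative; pick one per node:

# dedicated master-eligible node
node.master: true
node.data: false
node.ingest: false

# dedicated data node
node.master: false
node.data: true
node.ingest: false

# coordinating-only node (only routes requests and merges results)
node.master: false
node.data: false
node.ingest: false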

Servers used for the installation:

83.3.211.72

83.3.211.74

83.3.211.66

83.3.211.67

83.3.211.68

83.3.211.69

Of the six servers above, I configure 83.3.211.72 and 83.3.211.74 as both master-eligible and data nodes; all the remaining nodes are data-only nodes.

Configuration for 83.3.211.72:

# Use a descriptive name for your cluster:
cluster.name: security
# Use a descriptive name for the node:
node.name: 83.3.211.72
# Add custom attributes to the node:
node.attr.rack: r1
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /data1/elasticsearch/data
# Path to log files:
#path.logs: /path/to/logs
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200
# Set the TCP port used for inter-node (cluster) communication (default 9300)
transport.tcp.port: 9300
#define node 1 as master-eligible:
node.master: true
node.data: true
#enter the private IP and port of your node:
#detail the private IPs of your nodes:
discovery.zen.ping.unicast.hosts: ["83.3.211.72","83.3.211.74"]
# CentOS 6 does not support seccomp
bootstrap.system_call_filter: false
# Enable CORS so the head plugin can access ES
http.cors.enabled: true
http.cors.allow-origin: "*"
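
One setting this configuration does not include is discovery.zen.minimum_master_nodes. On pre-7.x versions it is commonly set to (number of master-eligible nodes / 2) + 1 to guard against split-brain; with the two master-eligible nodes in this cluster that would be (an addition of mine, not part of the original configuration):

discovery.zen.minimum_master_nodes: 2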

Configuration for 83.3.211.74:

# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: security
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: 83.3.211.74
# Add custom attributes to the node:
node.attr.rack: r1
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /data1/elasticsearch/data
# Path to log files:
#path.logs: /path/to/logs
network.host: 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200
# Set the TCP port used for inter-node (cluster) communication (default 9300)
transport.tcp.port: 9300
#define node 1 as master-eligible:
node.master: true
node.data: true
#enter the private IP and port of your node:
#detail the private IPs of your nodes:
discovery.zen.ping.unicast.hosts: ["83.3.211.72","83.3.211.74"]
# CentOS 6 does not support seccomp
bootstrap.system_call_filter: false

Configuration for 83.3.211.66 (83.3.211.67, 83.3.211.68, and 83.3.211.69 are also data nodes and use the same configuration, apart from node.name):

# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: security
# Use a descriptive name for the node:
node.name: 83.3.211.66
# Add custom attributes to the node:
node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /data1/elasticsearch/data
# Path to log files:
#path.logs: /path/to/logs
network.host: 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200
# Set the TCP port used for inter-node (cluster) communication (default 9300)
transport.tcp.port: 9300
# this node is not master-eligible, it only stores data:
node.master: false
node.data: true
#enter the private IP and port of your node:
#detail the private IPs of your nodes:
discovery.zen.ping.unicast.hosts: ["83.3.211.72", "83.3.211.74"]

# CentOS 6 does not support seccomp
bootstrap.system_call_filter: false

Note that cluster.name must be exactly the same on every node of the same cluster; otherwise the nodes will be treated as belonging to different clusters.
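
Before starting anything, it is worth verifying this on every server, for example with the command below (the config path assumes a tar.gz installation under /usr/local/elasticsearch; adjust it to your environment):

grep '^cluster.name' /usr/local/elasticsearch/config/elasticsearch.yml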

Next, start the nodes one by one. How to start the Elasticsearch service was already described in the single-node section (run ./elasticsearch -d from the bin directory of the installation).
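
For example, assuming passwordless SSH access and the same installation path on every server (both are assumptions here; adjust to your environment), all six nodes can be started in one loop. Remember that Elasticsearch refuses to start as root, so connect as a non-root user:

for h in 83.3.211.72 83.3.211.74 83.3.211.66 83.3.211.67 83.3.211.68 83.3.211.69; do
    ssh "$h" "/usr/local/elasticsearch/bin/elasticsearch -d"
done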

After startup, run ss -lnp | grep 9200 to check that the service port is bound correctly.
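
The transport port can be checked the same way; both ports can be verified at once, for example:

ss -lntp | grep -E ':9200|:9300'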

Then query the node status with curl:

[/home/linxiaojie]$ curl -X GET "83.3.211.72:9200/_cat/nodes?v"
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
83.3.211.72             39         100   7    2.99    2.73     2.72 mdi       *      83.3.211.72
83.3.211.67            45         100   7    2.27    2.07     2.01 di        -      83.3.211.67
83.3.211.74            43         100   7    3.30    2.82     2.56 mdi       -      83.3.211.74
83.3.211.66            42         100   7    2.69    3.24     2.80 di        -      83.3.211.66
83.3.211.68            45          98   6    2.61    2.40     2.32 di        -      83.3.211.68
83.3.211.69             49         100   7    3.10    2.51     2.48 di        -      83.3.211.69
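
Cluster-wide health can be checked in a similar way; the _cluster/health endpoint returns the cluster name, the number of nodes that have joined, and the overall status (green/yellow/red):

curl -X GET "83.3.211.72:9200/_cluster/health?pretty"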

At this point the whole cluster is up and running and in a healthy state. In the node.role column, m stands for master-eligible, d for data and i for ingest, and the * in the master column marks the elected master. Next we will install the head plugin so that cluster status and data placement can be inspected from a web UI.


Reposted from blog.csdn.net/goodstudy168/article/details/81127853