Building an Elasticsearch cluster in a CentOS 7 environment

1. Environment preparation

  To build the Elasticsearch cluster, we have prepared three virtual machines with the IP addresses 192.168.1.8, 192.168.1.9, and 192.168.1.10, and the JDK environment, firewall configuration, etc. are already in place. For details, refer to "How to quickly build a simple ELK log analysis system", in which we successfully built a stand-alone Elasticsearch environment.

2. Cluster construction

  In fact, turning the stand-alone ES environment into an ES cluster requires only a small amount of extra configuration. Here we start again from downloading ES and walk through the whole process.

  All three machines must run the ES service, and most of the configuration is identical. We therefore configure everything on the server 192.168.1.10 first, copy it to the other two servers with scp, make the node-specific modifications there, and finally bring the cluster up.

2.1. Download

  First, download ES on the machine 192.168.1.10. The ES download page is https://www.elastic.co/cn/downloads/elasticsearch . I chose version 6.3.1 and downloaded it with wget:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.1.tar.gz
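
  Optionally, the download can be verified against its published checksum. This is a sketch that assumes the matching .sha512 file is available next to the tarball, which is the usual layout on artifacts.elastic.co:

# Fetch the checksum file and verify the tarball against it
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.1.tar.gz.sha512
sha512sum -c elasticsearch-6.3.1.tar.gz.sha512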
2.2. Unpack

  Unpack it into the current directory.

tar -zvxf elasticsearch-6.3.1.tar.gz
2.3. Modify the configuration in elasticsearch.yml

  Edit the elasticsearch.yml configuration file in the elasticsearch-6.3.1/config directory.

vim elasticsearch.yml

  The modified configuration is as follows:

# Cluster name; every node in the cluster must use the same value
cluster.name: escluster
# Node name; purely descriptive, used to tell nodes apart in the logs; must be unique
node.name: 192.168.1.10
# Directories for data and logs; here they are placed under the ES root directory
path.data: /usr/local/soft/elasticsearch-6.3.1/data
path.logs: /usr/local/soft/elasticsearch-6.3.1/logs

# IP address of the current node
network.host: 192.168.1.10
# Port on which the HTTP service is exposed
http.port: 9200

# To avoid split-brain, set this to at least (master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
# Timeout for pinging other nodes during discovery
discovery.zen.ping_timeout: 3s
# IP addresses of the cluster nodes; 9300 is the default transport port for inter-node communication
discovery.zen.ping.unicast.hosts: ["192.168.1.8:9300","192.168.1.9:9300","192.168.1.10:9300"]
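
  For this three-node cluster the quorum works out to floor(3/2) + 1 = 2, which is why discovery.zen.minimum_master_nodes is set to 2; with five master-eligible nodes the value would become floor(5/2) + 1 = 3.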

2.4. Copy the ES root directory to the other servers

  Use the scp command to copy elasticsearch-6.3.1 to the other two nodes.

scp -r elasticsearch-6.3.1 node02:$PWD
scp -r elasticsearch-6.3.1 node01:$PWD
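
  Here node01 and node02 are hostnames for the other two servers. This assumes they already resolve to 192.168.1.8 and 192.168.1.9, for example through /etc/hosts entries like the following (the exact hostname-to-IP mapping shown here is an assumption):

# Hypothetical /etc/hosts entries matching the scp commands above
192.168.1.8  node01
192.168.1.9  node02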
2.5. Modify the node-specific configuration on the other two nodes

  In elasticsearch.yml, besides the settings shared by all nodes, the node-specific settings such as node.name and network.host must be changed on each node (see the sketch below). Other settings could of course also differ between nodes, but for convenience it is best to keep the directory layout identical on every node.
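
  For example, on 192.168.1.9 only these two lines would differ from the configuration shown above (assuming we keep the convention of naming each node after its IP address):

# Node-specific settings on 192.168.1.9; all other settings stay the same
node.name: 192.168.1.9
network.host: 192.168.1.9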

2.6. Create the es user

  Because Elasticsearch cannot be started as the root user, we create a user and grant it ownership of the ES directory on all three servers.

# Create the es user
adduser es
# Give it ownership of the ES installation directory
chown -R es:es /usr/local/soft/elasticsearch-6.3.1

  The above commands can be executed on all three servers at once through a multi-pane terminal window.

2.7. Modify the system configuration files

  When building ELK earlier, we saw that several system settings must be changed before Elasticsearch can start normally. You can refer to "Common problems and solutions in building an ELK log analysis system"; here we modify the configuration directly.

  First, modify the /etc/sysctl.conf file and add the following configuration:

# Maximum number of memory map areas a process may have; Elasticsearch requires at least 262144
vm.max_map_count=262144

  Then execute the following command to make the configuration take effect.

# Apply the new kernel settings
sysctl -p
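
  You can confirm that the setting took effect:

# Should print: vm.max_map_count = 262144
sysctl vm.max_map_count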

  Next, edit the /etc/security/limits.conf file and add the following lines:

* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096

  where:

  • nofile is the maximum number of open file descriptors
  • nproc is the maximum number of user processes
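
  These limits only apply to new login sessions. After logging in again as the es user, you can verify them:

# Run in a fresh session as the es user
ulimit -n   # maximum open file descriptors; should print 65536
ulimit -u   # maximum user processes; should print 4096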
2.8. Start

  Start the three ES nodes separately. If only one node is started, it will wait for the other nodes to join (no master can be elected while fewer than minimum_master_nodes master-eligible nodes are up) and eventually fail.

# Run from the elasticsearch root directory, as the es user created above
./bin/elasticsearch
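
  To run a node in the background instead, Elasticsearch also supports daemon mode:

# -d daemonizes the process, -p records the PID so the node can be stopped later
./bin/elasticsearch -d -p pid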
2.9. Verification

  After startup succeeds, visit any node, for example http://192.168.1.10:9200/. If content like the following appears, the node started successfully.

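  The same check can be done from the command line. The exact values (node name, cluster_uuid, build hash, etc.) will vary, but a healthy 6.3.1 node returns JSON along these lines:

curl http://192.168.1.10:9200/

{
  "name" : "192.168.1.10",
  "cluster_name" : "escluster",
  "version" : {
    "number" : "6.3.1",
    ...
  },
  "tagline" : "You Know, for Search"
}
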
  If you visit http://192.168.1.10:9200/_cat/health?v, you can view the status of the cluster:
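  A sketch of this check from the command line; the sample output below is illustrative (a fresh cluster with no indices shows zero shards), and the numbers depend on the indices present:

curl 'http://192.168.1.10:9200/_cat/health?v'

epoch      timestamp cluster   status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1606981234 14:20:34  escluster green           3         3      0   0    0    0        0             0                  -                100.0%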

  where:

  • cluster: the cluster name

  • status: the cluster health. green means everything is normal; yellow means the cluster is usable and all primary shards are allocated, but at least one replica is missing (the data is still complete; this is the typical state of a single-node setup); red means some primary shards are unavailable and data may have been lost

  • node.total: the number of nodes online

  • node.data: the number of data nodes online, i.e. nodes that store data

  • shards (active_shards): the number of active shards

  • pri (active_primary_shards): the number of active primary shards; with the default of one replica per primary, the total number of shards is twice pri

  • relo (relocating_shards): the number of shards being relocated; normally 0

  • init (initializing_shards): the number of shards being initialized; normally 0

  • unassign (unassigned_shards): the number of unassigned shards; normally 0

  • pending_tasks: the number of pending cluster tasks, such as shard migrations; normally 0

  • max_task_wait_time: the longest time a pending task has been waiting

  • active_shards_percent: the percentage of active shards; 100% under normal circumstances

3. Installation and use of the Elasticsearch-head plugin

  Please refer to "Installation and Use of Elasticsearch-head Plugin".
