How to install and deploy Elasticsearch?

Step 1: Create a normal user

Note: ES cannot be started as the root user; it must be installed and started by a normal (non-root) user.

Here we use the hadoop user to install and run our ES service.
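
If the hadoop user does not exist yet, here is a minimal sketch for creating it, to be run as root on all three nodes (the password step is interactive, and the sudo group name is an assumption that varies by distribution):

useradd hadoop            # create the normal user used throughout this guide
passwd hadoop             # set a password for the new user
# optional: give the user sudo rights for the system tweaks in Step 6
usermod -aG wheel hadoop  # "wheel" on CentOS/RHEL; the group is "sudo" on Debian/Ubuntu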

Step 2: Download and upload the compressed package, then extract it

Download the ES installation package to the /opt/bigdata/soft directory of the node01 server, then use the hadoop user to execute the following commands:

[hadoop@node01 ~]$ cd /opt/bigdata/soft/
[hadoop@node01 soft]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz

[hadoop@node01 soft]$ tar -zxf elasticsearch-6.7.0.tar.gz  -C /opt/bigdata/install/

Step 3: Modify the configuration file

Modify elasticsearch.yml

On node01, use the hadoop user to modify the configuration file:

cd /opt/bigdata/install/elasticsearch-6.7.0/config/
mkdir -p /opt/bigdata/install/elasticsearch-6.7.0/logs/
mkdir -p /opt/bigdata/install/elasticsearch-6.7.0/datas
vim elasticsearch.yml
cluster.name: myes
node.name: node01
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: 192.168.52.100
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"

Modify jvm.options

Modify the jvm.options configuration file to adjust the JVM heap size. On node01, use the hadoop user to execute the following commands; adjust the heap according to the memory available on your own server:

cd /opt/bigdata/install/elasticsearch-6.7.0/config
vim jvm.options

-Xms2g
-Xmx2g
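
To decide on a heap size, first check how much memory the server actually has; a common rule of thumb is to give the ES heap no more than about half of the physical RAM (the 2g values above assume a machine with at least 4 GB):

free -h   # shows total and available memory; size -Xms/-Xmx to at most ~50% of total

Keeping -Xms and -Xmx equal avoids heap resizing at runtime, which is what Elastic recommends.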

Step 4: Distribute the installation package to other servers

On node01, use the hadoop user to distribute the installation package to the other servers:

cd /opt/bigdata/install/
scp -r elasticsearch-6.7.0/ node02:$PWD
scp -r elasticsearch-6.7.0/ node03:$PWD
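
Optionally, verify that the copies landed intact (this assumes passwordless SSH is already configured between the nodes for the hadoop user):

ssh node02 "ls /opt/bigdata/install/elasticsearch-6.7.0"
ssh node03 "ls /opt/bigdata/install/elasticsearch-6.7.0"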

Step 5: Modify the ES configuration file on node02 and node03

node02 and node03 also need their ES configuration files modified. On node02, use the hadoop user to execute the following commands:

cd /opt/bigdata/install/elasticsearch-6.7.0/config/
vim elasticsearch.yml
cluster.name: myes
node.name: node02
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: 192.168.52.110
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"

On node03, use the hadoop user to execute the following commands:

cd /opt/bigdata/install/elasticsearch-6.7.0/config/ 
vim elasticsearch.yml
cluster.name: myes
node.name: node03
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: 192.168.52.120
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
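
Note that only node.name and network.host differ between the three configuration files. As an alternative to editing each copy by hand, here is a sketch that patches the distributed file with sed (the hostname and IP below are the ones used in this guide; adjust them for your own environment):

# on node02; for node03 substitute node03 / 192.168.52.120
cd /opt/bigdata/install/elasticsearch-6.7.0/config/
sed -i 's/^node.name: .*/node.name: node02/' elasticsearch.yml
sed -i 's/^network.host: .*/network.host: 192.168.52.110/' elasticsearch.yml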

Step 6: Modify the system configuration to solve startup problems

Because the ES service is installed and run by an ordinary user, and ES demands more resources from the server than the default limits allow (open files, memory, number of threads, and so on), we need to loosen those resource constraints for ordinary users.

Startup problem 1: the limit on the maximum number of open files for ordinary users

The error message looks like this:

max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

ES creates a large number of index files and opens many system files, so we need to raise the Linux limit on the maximum number of open files; otherwise ES will throw the error above. On all three machines, use the hadoop user to execute the following command and lift the limits:

sudo vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

Startup problem 2: the kernel limits on virtual memory areas and open files are too low

Execute the following command on all three machines to raise the kernel limits:

sudo vi /etc/sysctl.conf

vm.max_map_count=655360
fs.file-max=655360

Execute the following command for the changes to take effect:

sudo sysctl -p
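
You can confirm that the new values are active; sysctl accepts several keys in one call:

sysctl vm.max_map_count fs.file-max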

Note: After making the above two changes, be sure to reconnect to Linux for them to take effect; the limits.conf settings only apply to new login sessions. Close the SecureCRT or XShell session, then reopen the tool and reconnect to Linux.

After reconnecting, execute the following commands; if the output matches the values below, you are ready to start ES:

[hadoop@node01 ~]$ ulimit -Hn
131072
[hadoop@node01 ~]$ ulimit -Sn
65536
[hadoop@node01 ~]$ ulimit -Hu
4096
[hadoop@node01 ~]$ ulimit -Su
4096

Step 7: Start the ES service

On all three machines, use the hadoop user to execute the following command to start the ES service:

nohup /opt/bigdata/install/elasticsearch-6.7.0/bin/elasticsearch 2>&1 &

After startup succeeds, jps shows the ES service process, and you can access the following page:

http://node01:9200/?pretty

You will see some basic information about the started ES node.
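
To confirm that the three nodes actually joined one cluster, query the cluster health API from any node (node01 below is just the node used in this guide):

curl "http://node01:9200/_cluster/health?pretty"   # expect "number_of_nodes" : 3 and a green or yellow status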

Note: If the service fails to start on any machine, check the error log on that machine under /opt/bigdata/install/elasticsearch-6.7.0/logs.
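
For example, to inspect the tail of the main log (in ES 6.x the log file is named after cluster.name, so with the configuration above it should be myes.log; if the name differs, just list the logs directory):

tail -100 /opt/bigdata/install/elasticsearch-6.7.0/logs/myes.log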
