Elasticsearch 7.5.0 + Kibana 7.5.0 + cerebro 0.8.5: clustered production configuration and installation, with data migration from the old cluster to the new one using the elasticsearch-migration tool

First, server preparation

There are two servers with 128 GB of RAM each, and each will run two es instances. Together with one virtual machine this gives five nodes in total, so that if either physical server goes down, only two nodes are lost and the data is unaffected.

Second, system initialization

For system initialization, see my earlier Kafka post: https://www.cnblogs.com/mkxfs/p/12030331.html

Third, installing elasticsearch 7.5.0

1. Elasticsearch runs on the JVM, so first install a JDK 1.8 environment:

yum install java-1.8.0-openjdk-devel.x86_64 -y

2. Download es 7.5.0 from the official website

cd /opt

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.0-linux-x86_64.tar.gz

tar -zxf elasticsearch-7.5.0-linux-x86_64.tar.gz

mv elasticsearch-7.5.0 elasticsearch9300

Create the es data directories:

mkdir -p /data/es9300

mkdir -p /data/es9301

3. Modify the es configuration file

vim /opt/elasticsearch9300/config/elasticsearch.yml 

Append at the end:

cluster.name: en-es
node.name: node-1
path.data: /data/es9300
path.logs: /opt/elasticsearch9300/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
cluster.routing.allocation.same_shard.host: true
cluster.initial_master_nodes: ["192.168.0.16:9300", "192.168.0.16:9301","192.168.0.17:9300", "192.168.0.17:9301","192.168.0.18:9300"]
discovery.zen.ping.unicast.hosts: ["192.168.0.16:9300", "192.168.0.16:9301","192.168.0.17:9300", "192.168.0.17:9301", "192.168.0.18:9300"]
discovery.zen.minimum_master_nodes: 3
node.max_local_storage_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-credentials: true
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
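Note that the discovery.zen.* settings above are the pre-7.x names; Elasticsearch 7.x accepts them but logs deprecation warnings, and discovery.zen.minimum_master_nodes is ignored (7.x manages the master quorum itself). A 7.x-native equivalent of the discovery lines would look like the following sketch; the node names beyond node-1 and node-2 are assumptions, matching the naming pattern used in this guide:

```yaml
# 7.x-style discovery: discovery.seed_hosts replaces discovery.zen.ping.unicast.hosts
discovery.seed_hosts: ["192.168.0.16:9300", "192.168.0.16:9301", "192.168.0.17:9300", "192.168.0.17:9301", "192.168.0.18:9300"]
# cluster.initial_master_nodes is normally a list of node.name values
cluster.initial_master_nodes: ["node-1", "node-2", "node-3", "node-4", "node-5"]
```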

4. Adjust the JVM heap size (25 GB per instance keeps two instances comfortably within 128 GB of RAM, and keeps each heap below the ~32 GB compressed-oops threshold)

vim /opt/elasticsearch9300/config/jvm.options

-Xms25g
-Xmx25g

5. Deploy a second node on the same machine

cp -r /opt/elasticsearch9300 /opt/elasticsearch9301

vim /opt/elasticsearch9301/config/elasticsearch.yml

Append at the end:

cluster.name: en-es
node.name: node-2
path.data: /data/es9301
path.logs: /opt/elasticsearch9301/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9201
transport.tcp.port: 9301
cluster.routing.allocation.same_shard.host: true
cluster.initial_master_nodes: ["192.168.0.16:9300", "192.168.0.16:9301","192.168.0.17:9300", "192.168.0.17:9301", "192.168.0.18:9300"]
discovery.zen.ping.unicast.hosts: ["192.168.0.16:9300", "192.168.0.16:9301","192.168.0.17:9300", "192.168.0.17:9301", "192.168.0.18:9300"]
discovery.zen.minimum_master_nodes: 3
node.max_local_storage_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-credentials: true
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

6. Install the other three nodes in the same way, taking care to adjust the port numbers.
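Summarizing the address lists in the configuration above, the five-node layout would look like this (node names other than node-1 and node-2 are assumed for illustration):

```
Host          Node     HTTP port  Transport port
192.168.0.16  node-1   9200       9300
192.168.0.16  node-2   9201       9301
192.168.0.17  node-3   9200       9300
192.168.0.17  node-4   9201       9301
192.168.0.18  node-5   9200       9300
```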

7. Start the elasticsearch service

Because elasticsearch refuses to run as root, add an es account:

groupadd es
useradd es -g es
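Since the configuration sets bootstrap.memory_lock: true, the es user must also be allowed to lock memory, or the nodes will fail the memory-lock bootstrap check at startup. One common way is to add limits for the es user in /etc/security/limits.conf:

```
es soft memlock unlimited
es hard memlock unlimited
```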

Grant the es user ownership of the es directories:

chown -R es:es /opt/elasticsearch9300

chown -R es:es /opt/elasticsearch9301

chown -R es:es /data/es9300

chown -R es:es /data/es9301

Start the es services:

su - es -c "/opt/elasticsearch9300/bin/elasticsearch -d"

su - es -c "/opt/elasticsearch9301/bin/elasticsearch -d"

Check the es logs and listening ports; if there are no errors, the nodes have started successfully.
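A quick way to confirm the cluster has formed is to query each node's HTTP port; a minimal sketch, assuming the five addresses used throughout this guide:

```shell
# Poll every node's HTTP endpoint and print cluster health.
# Adjust the address list to your own environment.
NODES="192.168.0.16:9200 192.168.0.16:9201 192.168.0.17:9200 192.168.0.17:9201 192.168.0.18:9200"
for n in $NODES; do
  echo "== $n =="
  # _cat/health shows the cluster status (green/yellow/red) and node/shard counts
  curl -s --max-time 5 "http://$n/_cat/health?v" || echo "unreachable"
done
```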

Fourth, installing kibana-7.5.0

1. Download kibana-7.5.0 from the official website

cd /opt

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.5.0-linux-x86_64.tar.gz

tar -zxf kibana-7.5.0-linux-x86_64.tar.gz

2. Modify the kibana configuration file

vim /opt/kibana-7.5.0-linux-x86_64/config/kibana.yml

server.host: "192.168.0.16"

3. Start kibana

Create a kibana log directory:

mkdir /opt/kibana-7.5.0-linux-x86_64/logs

Kibana also needs to run as the es user, so grant the es user ownership of the kibana directory:

chown -R es:es /opt/kibana-7.5.0-linux-x86_64

Start it:

su - es -c "nohup /opt/kibana-7.5.0-linux-x86_64/bin/kibana &>>/opt/kibana-7.5.0-linux-x86_64/logs/kibana.log &"

4. Access kibana in a browser:

http://192.168.0.16:5601
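Besides the browser, Kibana's status endpoint can be checked from the command line; a minimal sketch, assuming the server.host configured above:

```shell
# Kibana exposes a status API at /api/status once it is up.
KIBANA_URL="http://192.168.0.16:5601"
curl -s --max-time 5 "$KIBANA_URL/api/status" | head -c 300 || echo "kibana not reachable"
echo
```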

Fifth, installing the es monitoring and management tool cerebro-0.8.5

1. Download the cerebro-0.8.5 release

cd /opt

wget https://github.com/lmenezes/cerebro/releases/download/v0.8.5/cerebro-0.8.5.tgz

tar -zxf cerebro-0.8.5.tgz

2. Modify the cerebro configuration

vim /opt/cerebro-0.8.5/conf/application.conf

hosts = [
  {
    host = "http://192.168.0.16:9200"
    name = "en-es"
    headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
  }
]

3. Start cerebro

nohup /opt/cerebro-0.8.5/bin/cerebro -Dhttp.port=9000 -Dhttp.address=192.168.0.16 &>/dev/null &

4. Access cerebro in a browser (on the address and port passed to it above):

http://192.168.0.16:9000

Sixth, migrating data from the old cluster to the new cluster with elasticsearch-migration

1. Install elasticsearch-migration on 192.168.0.16

cd /opt

wget https://github.com/medcl/esm-v1/releases/download/v0.4.3/linux64.tar.gz

tar -zxf linux64.tar.gz

mv linux64 elasticsearch-migration

2. Stop all writes to the old cluster, then start the migration (-s is the source cluster, -d the destination, -x the index name, -w the number of workers, -b the bulk size in MB, -c the scroll batch size):

/opt/elasticsearch-migration/esm -s http://192.168.0.66:9200 -d http://192.168.0.16:9200 -x indexname -w=5 -b=10 -c 10000 >/dev/null

3. Wait for the migration to finish. It moves roughly 70 million documents (about 40 GB) per hour; migrating at most two indices at the same time is recommended.
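When the migration finishes, it is worth comparing document counts between the two clusters; a minimal sketch, keeping the indexname placeholder from the esm command above:

```shell
# Compare document counts on the old (source) and new (destination) clusters.
OLD="http://192.168.0.66:9200"
NEW="http://192.168.0.16:9200"
for cluster in "$OLD" "$NEW"; do
  echo "== $cluster =="
  # _cat/count/<index> prints the document count for a single index
  curl -s --max-time 5 "$cluster/_cat/count/indexname?v" || echo "unreachable"
done
```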


Origin www.cnblogs.com/mkxfs/p/12072536.html