The addresses of my three Alibaba Cloud servers:
101.100.0.1
101.100.0.2
101.100.0.3
1. The three servers are in the same region but in different availability zones. Everything runs in Docker on a Docker overlay network (not host networking), so the overlay network has to be created first. See:
[Docker series] Docker Swarm cluster: setting up cross-host network communication
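Before the containers can reach each other across hosts, the attachable overlay network referenced below as `cluster-overlay-software` must exist on the Swarm. A minimal sketch, run once on the Swarm manager node; the `18.0.0.0/16` subnet is an assumption inferred from the static `--ip 18.0.0.24x` addresses used later:

```shell
# On the Swarm manager: create an attachable overlay network so plain
# `docker run` containers (not only Swarm services) can join it.
docker network create \
  --driver overlay \
  --attachable \
  --subnet 18.0.0.0/16 \
  cluster-overlay-software

# Confirm it exists (it shows up on workers once a container attaches).
docker network ls --filter name=cluster-overlay-software
```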
2. Start an elasticsearch Docker container on each of the three servers.
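Elasticsearch also requires the kernel's `vm.max_map_count` to be at least 262144, otherwise the container fails its bootstrap checks. Run this on each of the three hosts first:

```shell
# Raise the mmap count for Elasticsearch's bootstrap check (effective immediately).
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots.
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```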
---------------------------------- Install master_elasticsearch ---------------------------
### Start a throwaway container first, copy its files out to the host, then delete the container
sudo docker run -tid \
--hostname=keda_elasticsearch \
--name=keda-elasticsearch \
-p 19200:9200 \
-p 19300:9300 \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.4.2
docker cp keda-elasticsearch:/usr/share/elasticsearch/config/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/data/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/logs/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/plugins/ /usr/docker/software/elasticsearch/
docker stop keda-elasticsearch
docker rm keda-elasticsearch
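The official image runs Elasticsearch as the `elasticsearch` user (uid/gid 1000), so after copying the directories out it is worth making sure the bind-mounted paths stay writable by that uid; a sketch, assuming the host paths used above:

```shell
# The elasticsearch user inside the container is uid/gid 1000;
# give it ownership of the directories that will be bind-mounted.
sudo chown -R 1000:1000 /usr/docker/software/elasticsearch/
```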
## Edit the config files now if changes are needed, then start the permanent container
sudo docker run -tid \
-m 4096M --memory-swap -1 \
--net cluster-overlay-software \
--ip 18.0.0.244 \
--restart=always \
--privileged=true \
--hostname=master_elasticsearch \
--name=keda6-master-elasticsearch \
-p 19200:9200 \
-p 19300:9300 \
-v /usr/docker/software/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /usr/docker/software/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /usr/docker/software/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /usr/docker/software/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.4.2
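Once the container is up, the node can be smoke-tested through the published HTTP port (19200); the IP below is the first server's address from the list at the top:

```shell
# Basic node info: should report the cluster name and version 7.4.2.
curl -s http://101.100.0.1:19200/

# Cluster health: with only this node started, expect a single-node cluster.
curl -s 'http://101.100.0.1:19200/_cluster/health?pretty'
```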
---------------------------------- Install slave1_elasticsearch ---------------------------
### Start a throwaway container first, copy its files out to the host, then delete the container
sudo docker run -tid \
--hostname=keda_elasticsearch \
--name=keda-elasticsearch \
-p 19200:9200 \
-p 19300:9300 \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.4.2
docker cp keda-elasticsearch:/usr/share/elasticsearch/config/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/data/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/logs/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/plugins/ /usr/docker/software/elasticsearch/
docker stop keda-elasticsearch
docker rm keda-elasticsearch
## Edit the config files now if changes are needed, then start the permanent container
sudo docker run -tid \
-m 4096M --memory-swap -1 \
--net cluster-overlay-software \
--ip 18.0.0.242 \
--restart=always \
--privileged=true \
--hostname=slave_elasticsearch \
--name=keda6-slave1-elasticsearch \
-p 19200:9200 \
-p 19300:9300 \
-v /usr/docker/software/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /usr/docker/software/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /usr/docker/software/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /usr/docker/software/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.4.2
---------------------------------- Install slave2_elasticsearch ---------------------------
### Start a throwaway container first, copy its files out to the host, then delete the container
sudo docker run -tid \
--hostname=keda_elasticsearch \
--name=keda-elasticsearch \
-p 19200:9200 \
-p 19300:9300 \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.4.2
docker cp keda-elasticsearch:/usr/share/elasticsearch/config/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/data/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/logs/ /usr/docker/software/elasticsearch/
docker cp keda-elasticsearch:/usr/share/elasticsearch/plugins/ /usr/docker/software/elasticsearch/
docker stop keda-elasticsearch
docker rm keda-elasticsearch
## Edit the config files now if changes are needed, then start the permanent container
sudo docker run -tid \
-m 4096M --memory-swap -1 \
--net cluster-overlay-software \
--ip 18.0.0.243 \
--restart=always \
--privileged=true \
--hostname=slave_elasticsearch \
--name=keda6-slave2-elasticsearch \
-p 19200:9200 \
-p 19300:9300 \
-v /usr/docker/software/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /usr/docker/software/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /usr/docker/software/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /usr/docker/software/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.4.2
The setup is identical on all three servers; only the container name and overlay IP differ. Within one overlay network (and one cluster), container names and IPs must be unique.
3. Edit the configuration file (elasticsearch.yml under the mounted config directory)
# Cluster name (default "elasticsearch"). ES auto-discovers nodes on the same
# network, and a node only joins a cluster whose name matches, so this also
# separates multiple clusters sharing one network segment.
cluster.name: "docker-cluster"
# Node name. Since 5.x this defaults to the hostname; each of the three nodes
# must be given a unique value here (e.g. node-1 / node-2 / node-3).
node.name: "node-1"
# Whether this node is eligible to be elected master (default true). If the
# current master dies, a new master is elected from the eligible nodes.
node.master: true
# Whether this node stores index data (default true).
node.data: true
# Set to true to lock the JVM heap in RAM. ES slows down badly once the JVM
# starts swapping, so in production set Xms and Xmx to the same value and
# allow the process to lock memory (ulimit -l unlimited on Linux).
bootstrap.memory_lock: false
# Sets both bind_host and publish_host at once.
network.host: 0.0.0.0
# TCP port for inter-node transport (default 9300).
transport.tcp.port: 9300
# HTTP port for client requests (default 9200).
http.port: 9200
# Legacy Zen quorum setting; ignored since 7.x, where master election is
# managed automatically. Kept here for reference only.
discovery.zen.minimum_master_nodes: 1
# Seed hosts used to discover the rest of the cluster. This Zen key still
# works in 7.4 but is deprecated; discovery.seed_hosts is the preferred name.
discovery.zen.ping.unicast.hosts: ["keda6-master-elasticsearch","keda6-slave1-elasticsearch","keda6-slave2-elasticsearch"]
# Bootstrap the very first master election (7.x): list the node.name values of
# the initial master-eligible nodes, i.e. all three names if all are eligible.
cluster.initial_master_nodes: ["node-1"]
# Enable CORS so elasticsearch-head can reach the cluster.
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
# Security is left off for this test cluster.
xpack.security.enabled: false
xpack.security.transport.ssl.enabled: false
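With all three containers running and the config in place, cluster membership can be verified from any of the hosts; a sketch using the master's published port:

```shell
# All three nodes should be listed, one of them marked '*' as elected master.
curl -s 'http://101.100.0.1:19200/_cat/nodes?v'

# Cluster health should report "number_of_nodes" : 3.
curl -s 'http://101.100.0.1:19200/_cluster/health?pretty'
```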