[Split Version] Building an Elasticsearch 7.1.0 Cluster with Docker Compose

Foreword

For the past two or three days a problem has been bothering me: how do I correctly expose an ES cluster running in containers to the host? The nodes kept reporting ClusterFormationFailureHelper: master not discovered or elected yet, because the ES node containers were not properly mapped to the host and container IPs keep changing. So how should this be configured?

After some head-scratching I finally settled on a way to sidestep the changing container IPs: use network_mode: host.

This is where host mode shines: the container behaves like a service running directly on the host and simply takes whatever ports it needs, with no port mappings to declare. As long as the host IP stays the same, it no longer matters what the container IP is.

Cluster layout

Explanation:

The Master node acts as both master and coordinating node; to avoid split-brain and reduce load, it holds no data.

Node1 ~ Node3 are data nodes and do not take part in master election.

The Tribe node holds no data and does not take part in master election.

Preparing the environment

  • GNU/Debian Stretch 9.9, linux-4.19
  • Docker 18.09.6
  • Docker-Compose 1.17.1
  • elasticsearch:7.1.0

My configuration scripts are on GitHub: <https://github.com/hellxz/docker-es-cluster.git>
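To follow along, you can clone the repository (the directory name below is just git's default):

git clone https://github.com/hellxz/docker-es-cluster.git
cd docker-es-cluster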

Directory Structure

.
├── docker-es-data01
│   ├── data01
│   ├── data01-logs
│   ├── docker-compose.yml
│   ├── .env
│   └── es-config
│       └── elasticsearch.yml
├── docker-es-data02
│   ├── data02
│   ├── data02-logs
│   ├── docker-compose.yml
│   ├── .env
│   └── es-config
│       └── elasticsearch.yml
├── docker-es-data03
│   ├── data03
│   ├── data03-logs
│   ├── docker-compose.yml
│   ├── .env
│   └── es-config
│       └── elasticsearch.yml
├── docker-es-master
│   ├── docker-compose.yml
│   ├── .env
│   ├── es-config
│   │   └── elasticsearch.yml
│   ├── master-data
│   └── master-logs
└── docker-es-tribe
    ├── docker-compose.yml
    ├── .env
    ├── es-config
    │   └── elasticsearch.yml
    ├── tribe-data
    └── tribe-logs

Each directory corresponds to one node; the node names and ports are listed below.

Node directory     Node name   Coordination port   Query port   Explanation
docker-es-data01   data01      9301                9201         Data node 1, non-master
docker-es-data02   data02      9302                9202         Data node 2, non-master
docker-es-data03   data03      9303                9203         Data node 3, non-master
docker-es-master   master      9300                9200         Master node, non-data
docker-es-tribe    tribe       9304                9204         Coordinating node, non-master, non-data

If you just want to try these nodes out, all you need to do is edit es-config/elasticsearch.yml under each node directory and replace the IP addresses with your own.
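For example, assuming the configs still carry the sample address 10.2.114.110 used later in this article and your host is reachable at 192.168.1.100 (a placeholder, substitute your own address), a one-liner like this rewrites every node's config at once:

# Replace the sample IP with your host's IP in every node's elasticsearch.yml
find . -name elasticsearch.yml -exec sed -i 's/10.2.114.110/192.168.1.100/g' {} +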

What each file does

Since the nodes involve a lot of repetition, only the master node is used as the example here; for the remaining files see GitHub.

.env: provides default parameters for docker-compose.yml so they are easy to change

# the default environment for es-master
# set es node jvm args
ES_JVM_OPTS=-Xms256m -Xmx256m
# set master node data folder
MASTER_DATA_DIR=./master-data
# set master node logs folder
MASTER_LOGS_DIR=./master-logs
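If you want to double-check that Compose actually picks these values up, you can render the effective configuration; docker-compose reads .env automatically when run from the node's directory:

cd docker-es-master
# Print docker-compose.yml with the .env variables substituted
docker-compose config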

docker-compose.yml: the Docker Compose configuration file

version: "3"
services:
    es-master:
        image: elasticsearch:7.1.0
        container_name: es-master
        environment: # setting container env
            - ES_JAVA_OPTS=${ES_JVM_OPTS}   # set es bootstrap jvm args
        restart: always
        volumes:
            - ./es-config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
            - ${MASTER_DATA_DIR}:/usr/share/elasticsearch/data:rw
            - ${MASTER_LOGS_DIR}:/usr/share/elasticsearch/logs:rw
        network_mode: "host"

In short: the compose file pulls the image, fills in the variables from the .env file, mounts the data and log directories, and finally uses host network mode so the node's service occupies ports directly on the physical machine.
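One host-side detail that is not part of the files above but commonly bites Elasticsearch-in-Docker setups: with network.host bound to a real address, ES enforces its bootstrap checks, and a too-low vm.max_map_count will stop the node from starting. If you hit that error, raising the limit on the host usually fixes it:

# Raise the mmap limit Elasticsearch's bootstrap check expects
sudo sysctl -w vm.max_map_count=262144
# Make the change survive reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf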

elasticsearch.yml: the Elasticsearch configuration file, the most critical piece of the cluster setup

# ======================== Elasticsearch Configuration =========================
cluster.name: es-cluster
node.name: master 
node.master: true
node.data: false
node.attr.rack: r1 
bootstrap.memory_lock: true 
http.port: 9200
network.host: 10.2.114.110
transport.tcp.port: 9300
discovery.seed_hosts: ["10.2.114.110:9301","10.2.114.110:9302","10.2.114.110:9303","10.2.114.110:9304"] 
cluster.initial_master_nodes: ["master"] 
gateway.recover_after_nodes: 2

If you have followed the previous articles, some of these parameters may still be unfamiliar, so here is a brief explanation of a few of the more important ones:

  • transport.tcp.port: the port this node uses for inter-node coordination (the transport port)
  • discovery.seed_hosts: the addresses the node probes after startup to discover the other coordinating nodes; a node does not need to list itself. Using the ip:port form is recommended so the cluster forms quickly.
  • cluster.initial_master_nodes: the names of the cluster nodes eligible to become master; only one is specified here to prevent split-brain
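Once all five containers are up, a quick way to confirm that the cluster actually formed is to query the HTTP port of any node (the IP below matches the sample config above; adjust it to your host):

# List the nodes that have joined; all five should appear
curl -s 'http://10.2.114.110:9200/_cat/nodes?v'
# Overall cluster health; the status should settle at green
curl -s 'http://10.2.114.110:9200/_cluster/health?pretty'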

Instructions for use

  1. To use these scripts in production, edit each node's .env file and make sure the mounted data and log directories are writable by the user that starts the ES cluster; sudo chmod -R 777 <directory> or sudo chown -R <user>:<group> <directory> will fix the permissions of the mounted directories.
  2. Review the JVM parameters in .env and increase the heap; setting the minimum and maximum to the same value is recommended to reduce GC frequency and improve efficiency.
  3. In every node's es-config/elasticsearch.yml, set network.host to the IP of the host that node runs on, and list in discovery.seed_hosts the ip:port of every other node that should be discovered, so the cluster can form.
  4. Make sure the ports are free on each host. If a port is taken, check whether the occupying process is needed; kill it if not, otherwise change http.port or transport.tcp.port for that node and remember to update discovery.seed_hosts on the other nodes with the new port (a quick port check is sketched below this list).
  5. If all nodes run on the same host, you can use the simple shell scripts at the end of this article.
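A quick way to run the port check from step 4, assuming the default port layout from the table above:

# Show any process already listening on the ES HTTP or transport ports
sudo ss -lntp | grep -E ':(92|93)0[0-4]'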

Per-node operation commands

Start each node in the background with docker-compose up -d

Shutdown commands:

  • docker-compose down: stops and removes the container, and also cleans up the leftover virtual network interfaces
  • docker stop container_name: stops the container by its name without removing it
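If a node refuses to join the cluster (for example the ClusterFormationFailureHelper message mentioned at the beginning), following that container's log is the quickest way to see why:

# Follow a node's log; replace es-master with any node's container name
docker logs -f es-master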

Simple Shell Scripts

docker-es-cluster-up.sh

#!/bin/bash
# Put this script in the directory that contains the node folders.
# Starts the docker-es-cluster when all nodes run on a single Linux server.
cd docker-es-master && docker-compose up -d && \
cd ../docker-es-data01 && docker-compose up -d && \
cd ../docker-es-data02 && docker-compose up -d && \
cd ../docker-es-data03 && docker-compose up -d && \
cd ../docker-es-tribe && docker-compose up -d && \
cd ..

docker-es-cluster-down.sh

#!/bin/bash
# Put this script in the directory that contains the node folders.
# Removes the docker-es-cluster's containers and networks when all nodes run on a single Linux server.
cd docker-es-tribe && docker-compose down && \
cd ../docker-es-data03 && docker-compose down && \
cd ../docker-es-data02 && docker-compose down && \
cd ../docker-es-data01 && docker-compose down && \
cd ../docker-es-master && docker-compose down && \
cd ..

docker-es-cluster-stop.sh

#!/bin/bash
# Put this script in the directory that contains the node folders.
# Stops the docker-es-cluster's containers when all nodes run on a single Linux server.
docker stop es-tribe es-data03 es-data02 es-data01 es-master

If you want these scripts to be executable, run sudo chmod +x *.sh

These scripts do not use sudo; if sudo is currently required to run docker, add the current user to the docker group instead.
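The usual way to do that (log out and back in afterwards so the group change takes effect):

# Add the current user to the docker group so docker runs without sudo
sudo usermod -aG docker $USER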

Enjoy.

This is an original article; reproduction is prohibited.

Origin: www.cnblogs.com/hellxz/p/docker_es_cluster.html