Elasticsearch: Deploy Elastic Stack 8.x with one click using Docker Compose

In my previous article "Elasticsearch: Creating a Multi-Node Cluster - Elastic Stack 8.0", I described in detail how to create a multi-node cluster by typing different commands into different terminals on the command line. Some developers may not like typing so many commands across terminals. A more convenient alternative is to install with Docker Compose. For Elastic Stack 7.x, you can refer to my previous article "Elastic: Deploying the Elastic Stack with Docker".

The Docker Compose approach provides an easy way to spin up a secure cluster that you can use for development before building a distributed deployment across multiple hosts. For a multi-host distributed deployment, refer to my other article "Elasticsearch: Creating a Multi-Node Elasticsearch Cluster on Multiple Machines - Elastic Stack 8.0".

Prerequisites

Install the appropriate Docker application for your operating system. If you are on Linux, you also need to install Docker Compose.

Note: Make sure to allocate at least 4 GB of memory to Docker. In Docker Desktop, you can configure resource usage on the Advanced tab in Preferences (macOS) or Settings (Windows).

Create the following configuration files in a new, empty directory. These files are also available from the elastic/elasticsearch repository on GitHub.

.env

The .env file sets the environment variables used when running the docker-compose.yml configuration file. Make sure to use the ELASTIC_PASSWORD and KIBANA_PASSWORD variables to specify passwords for the elastic and kibana_system users. These variables are referenced by the docker-compose.yml file.

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=password

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=password

# Version of Elastic products
STACK_VERSION=8.1.2

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

For the convenience of testing, we set both ELASTIC_PASSWORD and KIBANA_PASSWORD to password.
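The comments in .env note that both passwords must be at least 6 characters; a quick shell sanity check (using our test value from above) can catch a too-short password before the stack is started:

```shell
# Both the 'elastic' and 'kibana_system' passwords must be >= 6 characters.
ELASTIC_PASSWORD=password
if [ ${#ELASTIC_PASSWORD} -lt 6 ]; then
  echo "password too short"
else
  echo "password length ok"
fi
```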

docker-compose.yml

This docker-compose.yml file creates a three-node secure Elasticsearch cluster with authentication and network encryption enabled, and a Kibana instance securely connected to it.

Exposing ports: This configuration exposes port 9200 on all network interfaces. Because of the way Docker handles ports, a port that is not bound to localhost makes your Elasticsearch cluster publicly accessible, potentially bypassing any firewall settings. If you don't want to expose port 9200 to external hosts, set ES_PORT in the .env file to something like 127.0.0.1:9200, so that Elasticsearch is only accessible from the host itself.

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local

The command section of the setup service above contains some scripting. It first checks that the ELASTIC_PASSWORD and KIBANA_PASSWORD variables are set, and exits if they are not. The next step generates the required certificates. The steps here are a little tedious; if you want to understand what they do in more detail, you can read my previous article "Security: How to Install Elastic SIEM and EDR", which walks through almost exactly the same steps.
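For instance, the `[ x${VAR} == x ]` test at the top of that script detects an empty or unset variable: if the variable expands to nothing, the expression collapses to `[ x == x ]`, which is true. Extracted and run standalone (with the variable deliberately left empty; the script itself uses bash's `==`, while POSIX `=` is equivalent):

```shell
# The setup service's guard clause, extracted for illustration.
ELASTIC_PASSWORD=""
if [ x${ELASTIC_PASSWORD} = x ]; then
  echo "Set the ELASTIC_PASSWORD environment variable in the .env file"
fi
```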

Start the cluster with security enabled and properly configured

Above, we have created the .env and docker-compose.yml files:

$ pwd
/Users/liuxg/data/elastic8
$ ls -al
total 24
drwxr-xr-x    4 liuxg  staff   128 Apr  4 19:58 .
drwxr-xr-x  153 liuxg  staff  4896 Feb 18 09:30 ..
-rw-r--r--    1 liuxg  staff   728 Apr  4 19:57 .env
-rw-r--r--    1 liuxg  staff  8095 Apr  4 19:58 docker-compose.yml

We proceed with the following steps:

1) Modify the .env file and enter password values for the ELASTIC_PASSWORD and KIBANA_PASSWORD variables. As shown above, for convenience of testing in this exercise, we set both of them to password. We also set STACK_VERSION to the version we prefer.

Note: you must use the ELASTIC_PASSWORD value for further interaction with the cluster. The KIBANA_PASSWORD value is used only internally when configuring Kibana.
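Under the hood, curl's `-u elastic:${ELASTIC_PASSWORD}` option (used in the setup script, and in any further interaction with the cluster) just sends an HTTP Basic `Authorization` header, which is base64 of "user:password". A quick sketch of what gets sent for our test credentials:

```shell
# HTTP Basic auth: the Authorization header carries base64("user:password").
printf 'Authorization: Basic %s\n' "$(printf 'elastic:password' | base64)"

# Against the running cluster you would use it like this (requires the
# generated CA certificate, copied out of the certs volume):
#   curl --cacert ca.crt -u elastic:password https://localhost:9200
```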

2) Create and start the three-node Elasticsearch cluster and the Kibana instance:

docker-compose up

or

docker-compose up -d

if you want docker-compose to run in the background.

We start it with the following command:

docker-compose up

Once the command above is up and running, we can check with the following command:

$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED              STATUS                        PORTS                              NAMES
c8ea230a80b6   docker.elastic.co/kibana/kibana:8.1.2                 "/bin/tini -- /usr/l…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:5601->5601/tcp             elastic8_kibana_1
5d69e51a6364   docker.elastic.co/elasticsearch/elasticsearch:8.1.2   "/bin/tini -- /usr/l…"   About a minute ago   Up About a minute (healthy)   9200/tcp, 9300/tcp                 elastic8_es03_1
be786e92fd1f   docker.elastic.co/elasticsearch/elasticsearch:8.1.2   "/bin/tini -- /usr/l…"   About a minute ago   Up About a minute (healthy)   9200/tcp, 9300/tcp                 elastic8_es02_1
e1dafbdecec5   docker.elastic.co/elasticsearch/elasticsearch:8.1.2   "/bin/tini -- /usr/l…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:9200->9200/tcp, 9300/tcp   elastic8_es01_1

We can see that four containers are now running.

We can log in to a container with the following command, for example elastic8_kibana_1:

docker exec -it elastic8_kibana_1 /bin/bash

We can copy the kibana.yml file out of the container like this:

docker cp elastic8_kibana_1:/usr/share/kibana/config/kibana.yml .

We can take a look at the contents of kibana.yml:

$ pwd
/Users/liuxg/data/elastic8
$ ls -al
total 32
drwxr-xr-x    5 liuxg  staff   160 Apr  4 20:58 .
drwxr-xr-x  153 liuxg  staff  4896 Feb 18 09:30 ..
-rw-r--r--    1 liuxg  staff   728 Apr  4 19:57 .env
-rw-r--r--    1 liuxg  staff  8152 Apr  4 20:54 docker-compose.yml
-rw-rw-r--    1 liuxg  staff   271 Apr  4 21:00 kibana.yml
$ cat kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
$

We open the address http://localhost:5601 in the browser.

Remember the password we set earlier for the elastic user? We enter it and log in:


This brings us into Kibana. In Kibana, we run the following command to check the number of nodes:

GET _cat/nodes

The output shows that we have three nodes; this is a three-node cluster.
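Since `_cat/nodes` prints one line per node, the node count can also be checked mechanically. The IP addresses and load figures below are made-up placeholders; only the shape of the output matters:

```shell
# Hypothetical _cat/nodes response for our cluster: one line per node,
# with `*` marking the elected master. Counting lines counts nodes.
response='172.18.0.2 35 84 2 0.51 0.62 0.70 cdfhilmrstw - es02
172.18.0.3 42 84 3 0.51 0.62 0.70 cdfhilmrstw * es01
172.18.0.4 28 84 1 0.51 0.62 0.70 cdfhilmrstw - es03'
echo "$response" | wc -l | tr -d ' '
```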

Stop and remove the deployment

When you are done experimenting, you can remove the network, containers, and volumes:

docker-compose down -v

Load settings from a file

Specifying Elasticsearch and Kibana settings directly in the Docker Compose file is a convenient way to get started, but once you are past the experimentation stage, it is better to load settings from a file.

For example, to use a custom es01.yml as the configuration file for the es01 Elasticsearch node, you can create a bind mount in the volumes section of the es01 service.

volumes:
  - ./es01.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  - ...

Likewise, to load Kibana settings from a file, you can add the following mounts to the volumes section of the kibana service.

volumes:
  - ./kibana.yml:/usr/share/kibana/config/kibana.yml
  - ...

To illustrate, let's set the Kibana interface language to Chinese. We add the following line to the kibana.yml file we copied out earlier:

i18n.locale: "zh-CN"
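Rather than editing by hand, the line can be appended from the shell (this assumes you are in the directory holding the kibana.yml copied out of the container earlier):

```shell
# Append the locale setting to the copied kibana.yml, then confirm it.
echo 'i18n.locale: "zh-CN"' >> kibana.yml
grep 'i18n.locale' kibana.yml
```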

kibana.yml

$ pwd
/Users/liuxg/data/elastic8
$ ls -al
total 32
drwxr-xr-x    5 liuxg  staff   160 Apr  4 20:58 .
drwxr-xr-x  153 liuxg  staff  4896 Feb 18 09:30 ..
-rw-r--r--    1 liuxg  staff   728 Apr  4 19:57 .env
-rw-r--r--    1 liuxg  staff  8152 Apr  4 20:54 docker-compose.yml
-rw-rw-r--    1 liuxg  staff   271 Apr  4 21:00 kibana.yml
$ cat kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"

The kibana.yml file is very simple; our change is just the single locale line. We also modify docker-compose.yml accordingly, adding the kibana.yml bind mount to the kibana service's volumes section as shown in the previous section.
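Concretely, the kibana service's volumes section ends up looking like this (the two named-volume mounts are from our docker-compose.yml; only the first bind-mount line is new):

```yaml
  kibana:
    volumes:
      # Mount the local kibana.yml over the auto-generated one in the container.
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
```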

Then we restart by following the steps above:

docker-compose down -v
docker-compose up

After all the containers are up, we open http://localhost:5601 in the browser again:

The interface is now displayed in Chinese.


Origin juejin.im/post/7082735047824015397