Elasticsearch: Verify the Elasticsearch Docker image and install Elasticsearch

Elasticsearch is available as a Docker image. A list of all published Docker images and tags is available at www.docker.elastic.co. The source files are on GitHub. This package contains both free and subscription features. Start a 30-day trial to try out all of the features.

Starting with Elasticsearch 8.0, security is enabled by default. When security is enabled, Elastic Stack security features require TLS encryption on the transport network layer, otherwise your cluster will fail to start.

Install Docker Desktop or Docker Engine

Install the appropriate Docker application for your operating system.

Note : Make sure to allocate at least 4GiB of memory to Docker. In Docker Desktop, you can configure resource usage on the Advanced tab of Preferences (macOS) or Settings (Windows).

Pull the Elasticsearch Docker image

Getting Elasticsearch for Docker is as easy as issuing a docker pull command against the Elastic Docker registry:

docker pull docker.elastic.co/elasticsearch/elasticsearch:8.8.0
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:8.8.0
8.8.0: Pulling from elasticsearch/elasticsearch
16c1e5ae78fc: Already exists 
2a63fecd431d: Pull complete 
c709fa6210ed: Pull complete 
89732bc75041: Pull complete 
a47052f8e9bf: Pull complete 
7f91ecd93209: Pull complete 
af03b547f578: Pull complete 
8931370f2a7b: Pull complete 
ab2e468d9ee0: Pull complete 
5a44f6d27aab: Pull complete 
Digest: sha256:9aaa38551b4d9e655c54d9dc6a1dad24ee568c41952dc8cf1d4808513cfb5f65
Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:8.8.0
docker.elastic.co/elasticsearch/elasticsearch:8.8.0
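
You can do a quick sanity check that the image is now available locally:

docker images docker.elastic.co/elasticsearch/elasticsearch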

Optional: Verify the Elasticsearch Docker image signature

Although this is optional, we strongly recommend that you verify the signature of the downloaded Docker image to ensure the image is valid.

Elastic images are signed with Cosign, which is part of the Sigstore project. Cosign supports container signing, verification, and storage in an OCI registry.

Install the appropriate Cosign application for your operating system.

To verify the container image signature of Elasticsearch v8.8.0, download Elastic's public key and run cosign:

wget https://artifacts.elastic.co/cosign.pub 
cosign verify --key cosign.pub docker.elastic.co/elasticsearch/elasticsearch:8.8.0 

This command prints the check result and signature payload in JSON format:

Verification for docker.elastic.co/elasticsearch/elasticsearch:{version} --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - Existence of the claims in the transparency log was verified offline
  - The signatures were verified against the specified public key

Now that you have verified the Elasticsearch Docker image signature, you can start a single-node or multi-node cluster.

Start a single-node cluster with Docker

If you start a single-node Elasticsearch cluster in a Docker container, security is automatically enabled and configured for you. When you start Elasticsearch for the first time, the following security configuration happens automatically:

  • Certificates and keys are generated for the transport and HTTP layers.
  • Transport Layer Security (TLS) configuration settings are written to elasticsearch.yml.
  • A password is generated for the elastic user.
  • An enrollment token is generated for Kibana.

You can then start Kibana and enter the enrollment token, which is valid for 30 minutes. The token automatically applies the security settings of the Elasticsearch cluster, authenticates to Elasticsearch with the kibana_system user, and writes the security configuration to kibana.yml.

The following command starts a single-node Elasticsearch cluster for development or testing.

1) Create a new Docker network for Elasticsearch and Kibana:

docker network create elastic

Tip: If you already have a network with that name, you can delete it first with docker network rm <network_name>.

$ docker network create elastic
3f510dc3fb931d808c98365b5126dbdda0e8aede4bb503f1f5aeeb8802bdfeb9

2) Start Elasticsearch in Docker. A password is generated for the elastic user and printed to the terminal, along with an enrollment token for Kibana.

docker run -e ES_JAVA_OPTS="-Xms1g -Xmx1g" --name es01 --net elastic -p 9200:9200 -p 9300:9300 -it docker.elastic.co/elasticsearch/elasticsearch:8.8.0

Tip: You may need to scroll back a bit in the terminal output to see the password and enrollment token.

3) Copy the generated password and enrollment token and save them in a safe place. These values are only shown the first time you start Elasticsearch.

Note: If you need to reset the password of the elastic user or other built-in users, run the elasticsearch-reset-password tool. The tool is located in the /usr/share/elasticsearch/bin directory of the Docker container. For example:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

4) Copy the http_ca.crt security certificate from the Docker container to your local machine.

docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
$ pwd
/Users/liuxg/tmp/certs
$ docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
$ ls
http_ca.crt

5) Open a new terminal and, using the http_ca.crt file copied from the Docker container, make an authenticated call to verify that you can connect to the Elasticsearch cluster. Enter the password of the elastic user when prompted.

curl --cacert http_ca.crt -u elastic https://localhost:9200
$ curl --cacert http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
{
  "name" : "2ea501480d1a",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "xL5JbUBqTN6_-7dUYSSazw",
  "version" : {
    "number" : "8.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "c01029875a091076ed42cdb3a41c10b1a9a5a20f",
    "build_date" : "2023-05-23T17:16:07.179039820Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

We copied the password of the elastic superuser from the Elasticsearch startup output above. If you can see the above output, congratulations: you have successfully run Elasticsearch on your computer!
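
With the cluster running, you can also bring up Kibana on the same Docker network and enroll it with the token you saved earlier. The following is a minimal sketch (the container name kib01 is an arbitrary choice; the image version matches the Elasticsearch version used above):

docker run --name kib01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.8.0

Once Kibana starts, open http://localhost:5601 in a browser and paste the enrollment token when prompted.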

Add extra nodes

When you start Elasticsearch for the first time, the installation process configures a single-node cluster by default. The process also generates an enrollment token and prints it to your terminal. If you want a new node to join the existing cluster, start the new node with the generated enrollment token.

Generate an enrollment token:

Enrollment tokens are valid for 30 minutes. If you need to generate a new enrollment token, run the elasticsearch-create-enrollment-token tool on an existing node. The tool is located in the Elasticsearch bin directory of the Docker container.

For example, run the following command on the existing es01 node to generate an enrollment token for a new Elasticsearch node:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
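
The same tool can also mint a token for enrolling a Kibana instance; as a sketch, pass the kibana scope instead of node:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana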

1) In the terminal where the first node was started, copy the enrollment token generated for adding a new Elasticsearch node.

2) On your new node, start Elasticsearch and include the generated registration token.

docker run -e ENROLLMENT_TOKEN="<token>" -e ES_JAVA_OPTS="-Xms1g -Xmx1g" --name es02 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.8.0

In the terminal of the first node, we can see messages showing that the new node has been enrolled; Elasticsearch is now configured to join it to the existing cluster. We can check the existing nodes with the following command:

curl --cacert http_ca.crt -u elastic https://localhost:9200/_cat/nodes
$ pwd
/Users/liuxg/tmp/certs
$ ls
http_ca.crt
$ curl --cacert http_ca.crt -u elastic https://localhost:9200/_cat/nodes
Enter host password for user 'elastic':
172.18.0.2 13 16 0 0.07 0.13 0.15 cdfhilmrstw * 8904985c21d9
172.18.0.3 48 16 0 0.07 0.13 0.15 cdfhilmrstw - f8a062a42e48

From the above output, we can see that there are two nodes in the cluster; the asterisk (*) marks the elected master node.
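
As a readability tip, the _cat APIs accept a v query parameter that adds column headers to the output:

curl --cacert http_ca.crt -u elastic "https://localhost:9200/_cat/nodes?v"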

Set the JVM heap size

If you experience an issue where the container of the first node exits when the second node starts, explicitly set a value for the JVM heap size. To configure the heap size manually, include the ES_JAVA_OPTS variable and set values for -Xms and -Xmx when starting each node. For example, the following command starts node es02 and sets the minimum and maximum JVM heap size to 1 GB:

docker run -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e ENROLLMENT_TOKEN="<token>" --name es02 -p 9201:9200 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.8.0

We did this in the example above. When I first ran the exercise without this parameter, the first container exited as soon as the second node started.

Next step

You have now set up a test Elasticsearch environment. Before starting serious development or using Elasticsearch in production, review the requirements and recommendations below to apply when running Elasticsearch in Docker in a production environment.

Security certificate and key

When Elasticsearch is installed, the following certificates and keys are generated in the Elasticsearch configuration directory and are used to connect Kibana instances to your secure Elasticsearch cluster and encrypt inter-node communication. These files are listed here for reference.

http_ca.crt

The CA certificate used to sign certificates for this Elasticsearch cluster's HTTP layer.

http.p12

A keystore containing keys and certificates for this node's HTTP layer.

transport.p12

A keystore containing transport layer keys and certificates for all nodes in the cluster.

http.p12 and transport.p12 are password-protected PKCS#12 keystores. Elasticsearch stores the passwords for these keystores as secure settings. To retrieve the passwords so you can inspect or change the keystore contents, use the bin/elasticsearch-keystore tool.

Retrieve the password for http.p12 with the following command:

bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password

Retrieve the password for transport.p12 with the following command:

bin/elasticsearch-keystore show xpack.security.transport.ssl.keystore.secure_password
$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED          STATUS          PORTS                                            NAMES
f8a062a42e48   docker.elastic.co/elasticsearch/elasticsearch:8.8.0   "/bin/tini -- /usr/l…"   50 minutes ago   Up 50 minutes   9200/tcp, 9300/tcp                               es02
8904985c21d9   docker.elastic.co/elasticsearch/elasticsearch:8.8.0   "/bin/tini -- /usr/l…"   52 minutes ago   Up 52 minutes   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es01
$ docker exec -it es01 /bin/bash
elasticsearch@8904985c21d9:~$ ls
LICENSE.txt  NOTICE.txt  README.asciidoc  bin  config  data  jdk  lib  logs  modules  plugins
elasticsearch@8904985c21d9:~$ cd bin/
elasticsearch@8904985c21d9:~/bin$ cd ..
elasticsearch@8904985c21d9:~$ bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
pKee5zKnR6mq32YCFGsBew
elasticsearch@8904985c21d9:~$ bin/elasticsearch-keystore show xpack.security.transport.ssl.keystore.secure_password
Ca7GGTq9Qo2PgGWgPbc6Dg
elasticsearch@8904985c21d9:~$ 
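
With the keystore password in hand, you can inspect the PKCS#12 file itself. The following is a sketch that assumes you copy http.p12 out of the container and have openssl available on the host; the password is the one retrieved above:

docker cp es01:/usr/share/elasticsearch/config/certs/http.p12 .
openssl pkcs12 -info -in http.p12 -nodes -passin pass:pKee5zKnR6mq32YCFGsBew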

Start a multi-node cluster with Docker Compose

To get a multi-node Elasticsearch cluster and Kibana up and running in Docker with security enabled, you can use Docker Compose.

This configuration provides an easy way to spin up a secure cluster that you can use for development before building a distributed deployment with multiple hosts.

Prerequisites

Install the appropriate Docker application for your operating system.

If you're running on Linux, install Docker Compose.

Note: Make sure to allocate at least 4GB of memory to Docker. In Docker Desktop, you can configure resource usage on the Advanced tab of Preferences (macOS) or Settings (Windows).

Prepare the environment

Create the following configuration files in a new empty directory. These files are also available from the elasticsearch repository on GitHub.

.env

The .env file sets the environment variables used when running the docker-compose.yml configuration file. Make sure to specify strong passwords for the elastic and kibana_system users using the ELASTIC_PASSWORD and KIBANA_PASSWORD variables. These variables are referenced by the docker-compose.yml file.

Important: Your password must be alphanumeric and cannot contain special characters such as ! or @. The bash script included in the docker-compose.yml file only works with alphanumeric characters.

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=

# Version of Elastic products
STACK_VERSION=8.8.0

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

docker-compose.yml

This docker-compose.yml file creates a three-node secure Elasticsearch cluster with authentication and network encryption enabled, and a Kibana instance securely connected to it.

Exposed ports: This configuration exposes port 9200 on all network interfaces. Because of the way Docker handles ports, a port that is not bound to localhost leaves your Elasticsearch cluster publicly accessible, potentially ignoring any firewall settings. If you don't want to expose port 9200 to external hosts, set the ES_PORT value in the .env file to something like 127.0.0.1:9200. Elasticsearch will then be accessible only from the host machine itself.

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local

Start the cluster with security enabled and configured

1) Modify the .env file and enter strong password values for the ELASTIC_PASSWORD and KIBANA_PASSWORD variables.

NOTE : You must use the ELASTIC_PASSWORD value to further interact with the cluster. The KIBANA_PASSWORD value is only used internally when configuring Kibana.

2) Create and start a three-node Elasticsearch cluster and Kibana instance:

docker-compose up -d

3) Once the deployment has started, open a browser and navigate to http://localhost:5601 to access Kibana where you can load sample data and interact with the cluster.
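
You can also check that all services report a healthy status (a quick sanity check):

docker-compose ps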

Stop and delete the deployment

To stop the cluster, run docker-compose down. When you restart the cluster with docker-compose up, the data in the Docker volume will be preserved and loaded.

docker-compose down

To delete the network, containers, and volumes when stopping the cluster, specify the -v option:

docker-compose down -v

Next step

You have now set up a test Elasticsearch environment. Before starting serious development or using Elasticsearch in production, review the requirements and recommendations below to apply when running Elasticsearch in Docker in a production environment.

Using Docker images in production

The following requirements and recommendations apply when running Elasticsearch in Docker in production.

Set vm.max_map_count to at least 262144

The vm.max_map_count kernel setting must be set to at least 262144 for production use.

How to set vm.max_map_count depends on your platform.

Linux

To see the current value of the vm.max_map_count setting, run:

grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144

To apply the settings on a live system, run:

sysctl -w vm.max_map_count=262144

To permanently change the value of the vm.max_map_count setting, update the value in /etc/sysctl.conf.
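
For example, a minimal sketch that appends the setting and reloads it (assumes sudo access):

echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p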

macOS and Docker for Mac

The vm.max_map_count setting must be set in the xhyve virtual machine:

1) Run from the command line:

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

2) Press enter and use sysctl to configure vm.max_map_count:

sysctl -w vm.max_map_count=262144

3) To exit the screen session, press Ctrl-a d.

Windows and macOS with Docker Desktop

The vm.max_map_count setting must be set via docker-machine:

docker-machine ssh
sudo sysctl -w vm.max_map_count=262144

Windows with Docker Desktop WSL 2 backend

The vm.max_map_count setting must be set in the "docker-desktop" WSL instance before the Elasticsearch container can start properly. There are various ways to do this, depending on your version of Windows and version of WSL.

If you're on Windows 10 before version 22H2, or on Windows 10 version 22H2 using the built-in version of WSL, you must either set it manually every time you restart Docker before starting the Elasticsearch container, or (if you don't want to do that on every restart) globally change every WSL2 instance to set vm.max_map_count. This is because these versions of WSL do not process the /etc/sysctl.conf file correctly.

To set it manually on every restart, run the following commands in a Command Prompt or PowerShell window each time you restart Docker:

wsl -d docker-desktop -u root
sysctl -w vm.max_map_count=262144

If you are using these versions of WSL and don't want to run these commands every time you restart Docker, you can globally apply this setting to every WSL distribution by modifying your %USERPROFILE%\.wslconfig as follows:

[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"

This will cause all WSL2 VMs to be assigned that setting on startup.

If you are using Windows 11 or Windows 10 version 22H2 and installed the Microsoft Store version of WSL, you can modify /etc/sysctl.conf in the "docker-desktop" WSL distribution, possibly with a command like this:

wsl -d docker-desktop -u root
vi /etc/sysctl.conf

and append the following line:

vm.max_map_count = 262144

Configuration files must be readable by the elasticsearch user

By default, Elasticsearch runs inside the container as user elasticsearch with uid:gid 1000:0.

Important: One exception is Openshift, which runs containers using an arbitrarily assigned user ID. Openshift provides persistent volumes with the gid set to 0, which works without any adjustments.

If you are bind-mounting a local directory or file, it must be readable by the elasticsearch user. In addition, this user must have write access to the config, data, and log directories (Elasticsearch needs write access to the config directory so that it can generate a keystore). A good strategy is to grant group access to gid 0 for the local directory.

For example, to prepare a local directory for storing data through a bind mount:

mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir

You can also run an Elasticsearch container with a custom UID and GID. You must ensure that file permissions do not prevent Elasticsearch from executing. You can use one of two options:

  • Bind-mount the config, data and logs directories. If you intend to install plugins and don't want to create a custom Docker image, you must also bind-mount the plugins directory.
  • Pass the --group-add 0 command line option to docker run. This ensures that the user running Elasticsearch is also a member of the root (GID 0) group inside the container. See the sketch after this list.
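
The following is a sketch that combines the two ideas, bind-mounting the data directory prepared above and running with an arbitrary example UID of 1234 while adding the root group:

docker run --name es01 --net elastic -p 9200:9200 -it \
  --user 1234:1234 --group-add 0 \
  -v "$(pwd)/esdatadir:/usr/share/elasticsearch/data" \
  docker.elastic.co/elasticsearch/elasticsearch:8.8.0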

Increase the ulimits of nofile and nproc

Increased nofile and nproc ulimits must be available for the Elasticsearch containers. Verify that the init system for the Docker daemon sets them to acceptable values.

To check the Docker daemon defaults for ulimits, run:

docker run --rm docker.elastic.co/elasticsearch/elasticsearch:{version} /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'

Adjust them in the daemon or override them per container if needed. For example, when using docker run, set:

--ulimit nofile=65535:65535
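
For example, a sketch that overrides both limits for a one-off container and prints the resulting values (the nproc value of 4096 is an arbitrary illustration):

docker run --rm --ulimit nofile=65535:65535 --ulimit nproc=4096:4096 \
  docker.elastic.co/elasticsearch/elasticsearch:8.8.0 \
  /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'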

Disable swapping

Swapping needs to be disabled for performance and node stability. For information about ways to do this, see Disable swapping.

If you opt for the bootstrap.memory_lock: true approach, you also need to define the memlock: true ulimit in the Docker daemon, or set it explicitly for the container, as shown in the sample compose file above. When using docker run, you can specify:

-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1

Randomize published ports

The image exposes TCP ports 9200 and 9300. For production clusters, randomizing the published ports with --publish-all is recommended, unless you are pinning one container per host.

Manually set the heap size

By default, Elasticsearch automatically sizes the JVM heap based on a node's roles and the total memory available to the node's container. We recommend this default sizing for most production environments. If needed, you can override the default sizing by manually setting the JVM heap size.

To manually set the heap size in production, bundle a JVM options file under /usr/share/elasticsearch/config/jvm.options.d that contains your desired heap size setting.
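
Instead of baking the options file into a custom image, a bind mount also works; a sketch (the file name heap.options and the 4g value are arbitrary):

mkdir -p jvm.options.d
printf '%s\n' '-Xms4g' '-Xmx4g' > jvm.options.d/heap.options

docker run --name es01 --net elastic -p 9200:9200 -it \
  -v "$(pwd)/jvm.options.d:/usr/share/elasticsearch/config/jvm.options.d" \
  docker.elastic.co/elasticsearch/elasticsearch:8.8.0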

For testing, you can also manually set the heap size using the ES_JAVA_OPTS environment variable. For example, to use 16GB, specify -e ES_JAVA_OPTS="-Xms16g -Xmx16g" with docker run. The ES_JAVA_OPTS variable overrides all other JVM options. We do not recommend using ES_JAVA_OPTS in production. Note that the docker-compose.yml file above does not set ES_JAVA_OPTS; it caps each container's total memory with the MEM_LIMIT variable instead.

Pin a deployment to a specific image version

Pin your deployments to a specific version of the Elasticsearch Docker image, for example docker.elastic.co/elasticsearch/elasticsearch:8.8.0.

Always bind data volumes

You should use a volume bound to /usr/share/elasticsearch/data for the following reasons:

  1. The data of your Elasticsearch node won't be lost if the container is killed
  2. Elasticsearch is I/O sensitive, and the Docker storage driver is not ideal for fast I/O
  3. It allows the use of advanced Docker volume plugins

Avoid using loop-lvm mode

If you are using the devicemapper storage driver, do not use the default loop-lvm mode. Configure docker-engine to use direct-lvm instead.

Centralize your logs

Consider centralizing your logs by using a different logging driver. Also note that the default json-file logging driver is not ideally suited for production use.

Configure Elasticsearch with Docker

When you run in Docker, Elasticsearch configuration files are loaded from /usr/share/elasticsearch/config/.

To use a custom configuration file, you can bind-mount the file on top of the configuration file in the image.

You can set individual Elasticsearch configuration parameters using Docker environment variables. The docker-compose.yml and single-node examples above use this approach. You can use the setting name directly as the environment variable name. If you cannot do this, for example because your orchestration platform forbids periods in environment variable names, then you can use an alternative style by converting the setting name as follows:

  1. Change the setting name to uppercase
  2. Prefix it with ES_SETTING_
  3. Escape any underscores (_) by duplicating them
  4. Convert all periods (.) to underscores (_)

For example, -e bootstrap.memory_lock=true becomes -e ES_SETTING_BOOTSTRAP_MEMORY__LOCK=true.

You can set the value of the ELASTIC_PASSWORD or KEYSTORE_PASSWORD environment variables from the contents of a file by appending _FILE to the environment variable name. This is useful for passing secrets, such as passwords, to Elasticsearch without specifying them directly.

For example, to set the Elasticsearch bootstrap password from a file, you can bind-mount the file and set the ELASTIC_PASSWORD_FILE environment variable to the mount location. If you mount the password file to /run/secrets/bootstrapPassword.txt, specify:

-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt
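
Putting this together, a sketch (the password value is a placeholder; choose your own):

echo 'changeme-bootstrap' > bootstrapPassword.txt

docker run --name es01 --net elastic -p 9200:9200 -it \
  -v "$(pwd)/bootstrapPassword.txt:/run/secrets/bootstrapPassword.txt" \
  -e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt \
  docker.elastic.co/elasticsearch/elasticsearch:8.8.0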

You can override the image's default command to pass Elasticsearch configuration parameters as command-line options. For example:

docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername

While bind mounting configuration files is generally the preferred method in production, you can also create custom Docker images that include your configuration.

Mount the Elasticsearch configuration file

Create custom configuration files and bind-mount them over the corresponding files in the Docker image. For example, to bind-mount custom_elasticsearch.yml with docker run, specify:

-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

If you bind-mount a custom elasticsearch.yml file, make sure it includes the network.host: 0.0.0.0 setting. This setting ensures that the node is reachable for HTTP and transport traffic, provided its ports are exposed. The Docker image's built-in elasticsearch.yml file includes this setting by default.
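
For reference, a minimal custom_elasticsearch.yml might look like this (the cluster.name value is an arbitrary example):

network.host: 0.0.0.0
cluster.name: my-docker-cluster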

Important: The container runs Elasticsearch as user elasticsearch with uid:gid 1000:0. Bind-mounted host directories and files must be accessible by this user, and data and log directories must be writable by this user.

Create an encrypted Elasticsearch keystore

By default, Elasticsearch automatically generates a keystore file for secure settings. This file is obfuscated but not encrypted.

To encrypt your secure settings with a password and keep them outside the container, use a docker run command to manually create the keystore instead. The command must:

  • Bind-mount the config directory. The command will create an elasticsearch.keystore file in this directory. To avoid errors, do not bind-mount the elasticsearch.keystore file directly.
  • Use the elasticsearch-keystore tool with the create -p option. You will be prompted to enter a password for the keystore.

For example:

docker run -it --rm \
-v full_path_to/config:/usr/share/elasticsearch/config \
docker.elastic.co/elasticsearch/elasticsearch:8.8.0 \
bin/elasticsearch-keystore create -p

You can also use a docker run command to add or update secure settings in the keystore. You will be prompted to enter a value for each setting. If the keystore is encrypted, you will also be prompted for the keystore password.

docker run -it --rm \
-v full_path_to/config:/usr/share/elasticsearch/config \
docker.elastic.co/elasticsearch/elasticsearch:8.8.0 \
bin/elasticsearch-keystore \
add my.secure.setting \
my.other.secure.setting

If you have already created the keystore and don't need to update it, you can bind-mount the elasticsearch.keystore file directly. You can provide the keystore password to the container at startup with the KEYSTORE_PASSWORD environment variable. For example, a docker run command might have the following options:

-v full_path_to/config/elasticsearch.keystore:/usr/share/elasticsearch/config/elasticsearch.keystore
-e KEYSTORE_PASSWORD=mypassword

Use a custom Docker image

In some environments, it might make more sense to prepare a custom image that includes your configuration. A Dockerfile to achieve this could be as simple as:

FROM docker.elastic.co/elasticsearch/elasticsearch:8.8.0
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/

Then you can build and run the image:

docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom

Some plugins require additional security permissions. You must explicitly accept them, either by:

  • Attaching a tty when you run the Docker image and allowing the permissions when prompted, or
  • Inspecting the security permissions and accepting them (if appropriate) by adding the --batch flag to the plugin install command; see the sketch below.

For more information, see Plugin Management .
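
For example, a sketch of a custom image that accepts a plugin's permissions non-interactively (analysis-icu is a standard plugin, used here purely for illustration):

FROM docker.elastic.co/elasticsearch/elasticsearch:8.8.0
RUN bin/elasticsearch-plugin install --batch analysis-icu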

Troubleshooting Docker errors for Elasticsearch

Here's how to fix common errors when running Elasticsearch with Docker.

elasticsearch.keystore is a directory

Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.io.IOException: Is a directory: SimpleFSIndexInput(path="/usr/share/elasticsearch/config/elasticsearch.keystore") Likely root cause: java.io.IOException: Is a directory

A docker run command related to the keystore attempted to directly bind-mount an elasticsearch.keystore file that does not exist. If you use the -v or --volume flag to mount a file that does not exist, Docker instead creates a directory with the same name.

To resolve this error:

  1. Delete the elasticsearch.keystore directory under the config directory.
  2. Update the -v or --volume flag to point to the configuration directory path instead of the path to the keystore file. For an example, see Creating an Encrypted Elasticsearch Keystore .
  3. Retry the command.

elasticsearch.keystore: Device or resource busy

Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy

A docker run command attempted to update the keystore while the elasticsearch.keystore file was directly bind-mounted. To update the keystore, the container requires access to other files in the config directory, such as keystore.tmp.

To resolve this error:

  1. Update the -v or --volume flag to point to the config directory path rather than the keystore file's path. For an example, see Create an encrypted Elasticsearch keystore.
  2. Retry the command.

Read more:  Kibana: Install Kibana using Docker - 8.x

Origin blog.csdn.net/UbuntuTouch/article/details/130909246