Pitfalls of deploying Elasticsearch and Kibana with Docker

While learning Elasticsearch, I followed the Dark Horse Programmer (黑马程序员) course and deployed ES in Docker on my server.

The ES deployment command is as follows:

docker run -d \
    --name es \
    -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
    -e "discovery.type=single-node" \
    -v es-data:/usr/share/elasticsearch/data \
    -v es-plugins:/usr/share/elasticsearch/plugins \
    --privileged \
    --network es-net \
    -p 9200:9200 \
    -p 9300:9300 \
    elasticsearch:7.12.1
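Note that this command attaches the container to the es-net Docker network, which must already exist before the container starts (Kibana joins the same network below, which is what lets it resolve the host name "es"). A minimal sketch of creating it, assuming the default bridge driver is sufficient:

```shell
# Create the user-defined bridge network that both the es and
# kibana containers join; containers on it can reach each other
# by container name (e.g. http://es:9200).
docker network create es-net
```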

The Kibana deployment command is as follows:

docker run -d \
    --name kibana \
    -e ELASTICSEARCH_HOSTS=http://es:9200 \
    --network=es-net \
    -p 5601:5601 \
    kibana:7.12.1

After running these two commands on the server, docker ps showed both containers running, but the server crashed soon afterwards: Xshell froze, and only returned to normal after about an hour of being stuck. When I got back onto the server and ran docker ps again, the containers had exited.
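When a container exits unexpectedly like this, a few standard Docker CLI commands help confirm whether memory was the culprit before digging further (a sketch; the container name "es" matches the command above):

```shell
# Did Docker record an out-of-memory kill, and what was the exit code?
docker inspect es --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'

# Tail the container log for JVM errors such as OutOfMemoryError
docker logs --tail 50 es

# One-shot snapshot of memory usage of all running containers
docker stats --no-stream
```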

After some troubleshooting, the cause of the error turned out to be memory. ES is implemented in Java, so you need to specify JVM memory parameters when starting it; in the Dark Horse Programmer tutorial, the heap given to ES is 512m.

My server has 2 GB of RAM, which should be enough to run ES. In practice, though, the 512m setting only limits the JVM heap inside the container; the memory the Linux system actually lets the container use is not capped at 512m, and the ES process needs that 512m heap plus additional overhead on top of it. The total exceeded what the host could spare, leading to insufficient memory and an OOM (out of memory) error.

Solution: when starting the ES container, pass the -m 1000m flag to docker run, manually setting an explicit memory allocation for the container.
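A sketch of the corrected ES command under that approach, identical to the one above except for the added memory flag (the 1000m value follows the suggestion here; tune it to your host):

```shell
docker run -d \
    --name es \
    -m 1000m \
    -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
    -e "discovery.type=single-node" \
    -v es-data:/usr/share/elasticsearch/data \
    -v es-plugins:/usr/share/elasticsearch/plugins \
    --privileged \
    --network es-net \
    -p 9200:9200 \
    -p 9300:9300 \
    elasticsearch:7.12.1
```

Keeping the JVM heap (512m) comfortably below the container limit (1000m) leaves room for ES's off-heap memory.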

Or deploy to a server with more memory.


Origin blog.csdn.net/qq_45171957/article/details/123266038