Reason for k8s pod automatic restart (jvm memory setting)

In a k8s cluster, a Spring Boot project runs in a Docker container started from an image. If the image does not configure the JVM's memory, the JVM falls back to its default heap sizing, which is derived from the physical host's memory: by default the maximum heap is 1/4 of physical memory and the initial heap is 1/64 of physical memory, so the larger the host's memory, the larger the default heap. The JVM does not know it is running in a Docker container; it sees the physical host's memory, not the memory k8s allocated to the pod or the Docker container's limit.
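The default sizing is easy to verify. A minimal Java sketch (the class name is my own, not from the original post) that prints the heap ceiling the JVM actually chose; run it on the host and again inside the pod to compare:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling the JVM settled on:
        // roughly -Xmx if set, otherwise the 1/4-of-physical-RAM default.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

On a 64 GB host with no -Xmx, this prints a value in the neighborhood of 16 GB, which easily exceeds a typical pod memory limit.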

Therefore, when the JVM's memory size is not specified and the machine's physical memory is large, the default initial heap (-Xms) taken by the JVM can already exceed the memory k8s allocated to the pod, the pod runs out of memory, and k8s keeps restarting it. Alternatively, the JVM may keep requesting memory at runtime up to the maximum heap (-Xmx); if -Xmx exceeds the memory k8s allocated to the pod, k8s likewise restarts the pod.

Solution: explicitly declare the JVM memory parameters -Xms and -Xmx in the startup script, for example: java -Xms1024m -Xmx1024m -jar app.jar
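On the k8s side, the heap setting should be aligned with the pod's memory limit. A minimal Deployment container sketch, assuming the startup script expands a JAVA_OPTS variable (image name, sizes, and the JAVA_OPTS convention are illustrative, not from the original post):

```yaml
containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        memory: "1536Mi"
      limits:
        memory: "1536Mi"    # pod limit: keep -Xmx comfortably below this
    env:
      - name: JAVA_OPTS     # consumed by e.g. `java $JAVA_OPTS -jar app.jar`
        value: "-Xms1024m -Xmx1024m"
```

Leaving a gap between -Xmx and the pod limit matters because the JVM also uses native memory outside the heap.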

It may also be that a memory limit is set on the Docker container while the image leaves the JVM unconfigured, so the JVM again uses its default heap sizing. When the memory occupied by the JVM exceeds the container's limit, the container is killed by Docker. Solution: the same, set the -Xms and -Xmx parameters, taking care that they stay below the Docker container's memory limit.
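The same idea in Dockerfile form, assuming a container started with a limit such as `docker run -m 768m` (base image, jar path, and sizes are placeholders):

```dockerfile
FROM eclipse-temurin:17-jre
COPY app.jar /app/app.jar
# Keep the heap well under the container limit: the JVM also needs
# native memory for metaspace, thread stacks, and buffers.
ENTRYPOINT ["java", "-Xms512m", "-Xmx512m", "-jar", "/app/app.jar"]
```

As a side note, newer JVMs (8u191+ and JDK 10+) are container-aware and can size the heap from the container limit itself, e.g. via -XX:MaxRAMPercentage, but explicitly setting -Xms/-Xmx as above works on any version.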


Origin blog.csdn.net/yucaifu1989/article/details/107785492