Docker JVM configuration automatically senses allocated memory

Imagine you have a host with 32GB of RAM and you want to use Docker to run a Java application limited to 1GB of memory. If the -Xmx parameter is not provided, the JVM falls back to its default configuration:

  • The JVM checks the total available memory. Because the JVM is not aware of the Linux container (in particular, of the control group that limits its memory), it believes it is running on the host and has access to the full 32GB of memory.
  • By default, the JVM sets the maximum heap size to MaxRAM/4, in this case 8GB (32GB/4).
  • As the heap grows past 1GB, the container exceeds its memory limit and is killed by Docker.
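The behavior described above can be observed from inside the JVM itself. A minimal sketch (the 32GB figure is just the hypothetical host from the example, and the class name is mine):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // What the JVM believes its maximum heap is. On an old,
        // container-unaware JVM run without -Xmx, this reflects the
        // HOST's RAM, not the Docker memory limit.
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap the JVM will use: %d MB%n",
                maxHeap / (1024 * 1024));

        // The default ergonomics described above: max heap = total RAM / 4.
        long hostRam = 32L * 1024 * 1024 * 1024; // the 32GB host from the example
        long defaultHeap = hostRam / 4;          // 8GB
        System.out.printf("Default heap on a 32GB host: %d GB%n",
                defaultHeap / (1024L * 1024 * 1024));
    }
}
```

Running something like this in a container started with -m 1g on an old JVM prints a number far larger than 1GB, which is exactly why the container gets killed.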

Early adopters of Java on Docker spent a long time trying to understand why their JVMs crashed without any error message. To find out what happened, you have to inspect the killed container (for example with docker inspect), where you will see that it was OOMKilled (Out Of Memory).

Of course, the obvious solution is to fix the JVM's heap size with the -Xmx parameter, but that means controlling memory twice: once in Docker and once in the JVM. Whenever you want to make a change, you must do it in both places. Not ideal.

The first fix for this problem arrived with Java 8u131 and Java 9. I say "fix" with some hesitation, because it requires the beloved -XX:+UnlockExperimentalVMOptions flag. If you work in financial services, I am sure you will be happy to explain to your customers or your boss why this is a wise move.

Then you add -XX:+UseCGroupMemoryLimitForHeap, which tells the JVM to check the control group's memory limit when setting the maximum heap size.

Finally, you use -XX:MaxRAMFraction to determine how much of that memory can be allocated to the JVM. Unfortunately, this parameter is a natural number (a divisor), not a percentage. For example, with a Docker memory limit of 1GB you get the following:

-XX:MaxRAMFraction=1: the maximum heap size is 1GB. This is risky, because you cannot actually give the heap 100% of the allowed memory; the JVM needs memory outside the heap (metaspace, thread stacks, and so on), so the container could still be killed.

-XX:MaxRAMFraction=2: the maximum heap size is 500MB. That's better, but now it seems we waste a lot of memory.

-XX:MaxRAMFraction=3: the maximum heap size is about 333MB. You are paying for 1GB of RAM, and your Java application can use 333MB. This is ridiculous.

-XX:MaxRAMFraction=4: about 250MB, which is simply too small.

Basically, the JVM flag that controls the maximum usable RAM takes a fraction instead of a percentage, which makes it hard to pick a value that uses the allowed RAM effectively.
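The granularity problem is easy to see with integer division. A small sketch (class name mine, decimal megabytes to match the figures above) of the heap you get for each MaxRAMFraction value against a 1GB limit:

```java
public class FractionDemo {
    public static void main(String[] args) {
        long limitMb = 1000; // Docker memory limit: 1GB (decimal MB)
        // MaxRAMFraction is a divisor: max heap = limit / fraction.
        for (int fraction = 1; fraction <= 4; fraction++) {
            System.out.printf("-XX:MaxRAMFraction=%d -> max heap %d MB%n",
                    fraction, limitMb / fraction);
        }
        // There is no setting between 1 (too risky) and 2 (half wasted):
        // you can never ask for, say, 75% of the limit.
    }
}
```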

We have focused on memory, but the same applies to the CPU. You need parameters like

-Djava.util.concurrent.ForkJoinPool.common.parallelism=2

to control the sizes of the various thread pools in the application. Here 2 means two threads (the maximum is limited by the number of hyperthreads available on the host).
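You can check what the common ForkJoinPool actually got with a couple of standard calls. On a container-unaware JVM these numbers reflect the host's CPUs rather than the container's CPU quota (class name mine):

```java
import java.util.concurrent.ForkJoinPool;

public class PoolCheck {
    public static void main(String[] args) {
        // Without the system property above, the common pool sizes itself
        // from availableProcessors(), which an unaware JVM reads from the
        // host, not from the container's cgroup CPU limit.
        System.out.println("CPUs the JVM sees: "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("Common pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());
    }
}
```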

All in all, with Java 8u131 or Java 9 you end up with a configuration like this:

-XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap
-XX:MaxRAMFraction=2
-Djava.util.concurrent.ForkJoinPool.common.parallelism=2

Fortunately, Java 10 comes to the rescue. First of all, you no longer need the scary experimental flag. If you run a Java application in a Linux container, the JVM automatically detects the control group's memory limit. If you ever want to turn this behavior off, you can add -XX:-UseContainerSupport.

Then, you can control the memory with -XX:InitialRAMPercentage, -XX:MaxRAMPercentage, and -XX:MinRAMPercentage. For example:

  • Docker memory limit: 1GB
  • -XX:InitialRAMPercentage=50
  • -XX:MaxRAMPercentage=70

Your JVM will start with a 500MB (50%) heap and can grow it up to 700MB (70%), out of the 1GB available in the container.
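The percentage arithmetic is straightforward. A quick sketch of the same example (class name mine, decimal megabytes to match the figures above):

```java
public class PercentageDemo {
    public static void main(String[] args) {
        long limitMb = 1000;  // Docker memory limit: 1GB (decimal MB)
        long initialPct = 50; // -XX:InitialRAMPercentage=50
        long maxPct = 70;     // -XX:MaxRAMPercentage=70

        long initialHeap = limitMb * initialPct / 100; // 500 MB starting heap
        long maxHeap = limitMb * maxPct / 100;         // 700 MB ceiling
        System.out.printf("Heap starts at %d MB, may grow to %d MB%n",
                initialHeap, maxHeap);
    }
}
```

Unlike MaxRAMFraction, any percentage is expressible, so you can size the heap to leave just enough headroom for the JVM's non-heap memory.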


Origin blog.csdn.net/yucaifu1989/article/details/108106792