The JIT (Just-In-Time) Compiler

1. Interpreter and Compiler

When a Java program runs, it primarily executes bytecode instructions. By default these instructions are read and executed one at a time by the interpreter (Interpreter); this is interpreted execution.

When the virtual machine finds that a certain method or code block runs especially frequently, it identifies that code as hot code. To improve the execution efficiency of hot code, the virtual machine compiles it at runtime into machine code for the local platform and applies various levels of optimization. The compiler that performs this task is called the just-in-time compiler (Just-In-Time Compiler, JIT compiler for short).
The HotSpot virtual machine has two built-in just-in-time compilers, called the Client Compiler and the Server Compiler, or the C1 and C2 compilers for short.

  • C1 compiler
    A simple, fast compiler focused mainly on local optimizations, suited to programs with short execution times or startup-performance requirements. It kicks in early: C1 starts compiling shortly after the application launches.

  • C2 compiler
    A compiler aimed at performance-tuning long-running server-side applications, suited to programs with long execution times or peak-performance requirements. It kicks in late, waiting until the program has been running for a while before compiling.


Regardless of whether C1 or C2 is used, the combination of interpreter and compiler is called "mixed mode" in the virtual machine. You can force the virtual machine to run in "interpreter mode" with the -Xint parameter, in which the compiler does not work at all and all code is interpreted.

You can also use the -Xcomp parameter to force the virtual machine to run in "compiler mode". Compiled execution is then preferred, but the interpreter still has to step in when compilation cannot be performed.
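To see the difference between these modes yourself, a small hot loop is enough. The class name and iteration count below are arbitrary; the flags in the comments are the standard HotSpot options discussed above.

```java
// HotLoop.java -- an arbitrary micro-benchmark for comparing execution modes.
// Try, for example:
//   java -Xint  HotLoop                  (pure interpreter, noticeably slower)
//   java -Xcomp HotLoop                  (compile up front where possible)
//   java -XX:+PrintCompilation HotLoop   (log JIT activity in mixed mode)
public class HotLoop {
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += i;                 // hot loop body -- a JIT candidate
        }
        return s;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = sum(100_000_000);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sum = " + result + " in " + ms + " ms");
    }
}
```

With -Xint the run typically takes many times longer than in the default mixed mode, which is the cost of giving up the JIT entirely.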


2. Hot code and hot spot detection

Hot code is code that is called frequently. It is compiled and cached for subsequent use; for code that rarely executes, this compilation effort would be pure waste.


The JVM provides the -XX:ReservedCodeCacheSize parameter to limit the size of the CodeCache, which is where JIT-compiled code is stored. The default is 32m–48m on JDK 7 and 240m on JDK 8.

If the space runs out, the JIT cannot keep compiling; execution falls back from compiled to interpreted code, and performance drops. Meanwhile the JIT compiler keeps trying to optimize code, driving up CPU usage.
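You can watch CodeCache occupancy from inside a running JVM through the standard java.lang.management API. Note that pool names vary by JDK version (a single "Code Cache" pool on JDK 7/8, segmented "CodeHeap ..." pools on JDK 9+), so this sketch simply filters pool names containing "Code".

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Prints the usage of every JIT-code-related memory pool.
// Run with e.g. -XX:ReservedCodeCacheSize=128m to see the limit change.
public class CodeCacheUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Code")) {   // "Code Cache" or "CodeHeap ..."
                System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                        name, pool.getUsage().getUsed(), pool.getUsage().getMax());
            }
        }
    }
}
```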


Deciding whether a piece of code is hot code, and thus whether it should be compiled immediately, is called hot spot detection. There are two main approaches:

  1. Sample-based hot spot detection: the virtual machine periodically checks the top of each thread's stack; if a certain method (or methods) frequently appears at the top of the stack, it is a "hot method".

     Pros and cons: sampling is simple and efficient to implement, and it makes the method call relationships easy to obtain (just walk the call stack). The downside is that it is hard to confirm a method's heat precisely, and detection is easily disturbed by thread blocking or other external factors.

  2. Counter-based hot spot detection: the virtual machine creates a counter for each method (or even each code block) and counts how often it executes; once the count exceeds a certain threshold, the method is considered a "hot method".

     Pros and cons: this approach is more cumbersome to implement, since it must create and maintain a counter per method, and it cannot directly obtain method call relationships; but its statistics are more precise and rigorous.


The HotSpot virtual machine uses the second approach, counter-based hot spot detection, and maintains two kinds of counters for each method: the method invocation counter and the back-edge counter.

2.1. Method invocation counter

It counts the number of times a method is invoked. Its default threshold is 1500 in Client mode and 10000 in Server mode, and it can be set manually with the -XX:CompileThreshold parameter.

When a method is invoked, the virtual machine first checks whether a JIT-compiled version of it exists. If so, the compiled native code is used. If not, the method's invocation counter is incremented by 1, and then the sum of the invocation counter and the back-edge counter is compared against the invocation-counter threshold. If the sum exceeds the threshold, a compilation request for the method is submitted to the just-in-time compiler.
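The decision flow just described can be sketched as plain Java. This is a toy model: every class, field, and method name here is invented for illustration and has nothing to do with HotSpot's internal C++ implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of counter-based hot spot detection: each method gets an
// invocation counter, and crossing the threshold triggers a (simulated)
// compile request. All names are illustrative, not HotSpot internals.
public class InvocationCounterModel {
    final int compileThreshold;                       // cf. -XX:CompileThreshold
    final Map<String, Integer> invocationCounts = new HashMap<>();
    final Map<String, Integer> backEdgeCounts = new HashMap<>();
    final Map<String, Boolean> compiled = new HashMap<>();

    InvocationCounterModel(int compileThreshold) {
        this.compileThreshold = compileThreshold;
    }

    /** Returns true if this call runs the (simulated) compiled version. */
    boolean invoke(String method) {
        if (compiled.getOrDefault(method, false)) {
            return true;                              // use native code
        }
        int calls = invocationCounts.merge(method, 1, Integer::sum);
        int backEdges = backEdgeCounts.getOrDefault(method, 0);
        if (calls + backEdges > compileThreshold) {
            compiled.put(method, true);               // submit compile request
        }
        return false;                                 // interpreted this time
    }

    public static void main(String[] args) {
        InvocationCounterModel model = new InvocationCounterModel(3);
        for (int i = 0; i < 5; i++) {
            System.out.println("call " + i + " compiled? " + model.invoke("foo"));
        }
    }
}
```

Note that even the call that crosses the threshold still runs interpreted; only later calls pick up the compiled version, which matches the asynchronous compilation described above.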


By default, the method invocation counter does not count the absolute number of invocations but a relative execution frequency: the number of invocations within a period of time. If that period elapses and the method's count is still not enough to submit it to the just-in-time compiler, the method's invocation counter is halved. This process is called the decay of the invocation counter's heat.

You can turn off heat decay with the -XX:-UseCounterDecay parameter, making the counter record the absolute number of invocations; then, as long as the system runs long enough, most of the code will eventually be compiled to native code. You can also set the length of the half-life period, in seconds, with the -XX:CounterHalfLifeTime parameter.
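A toy illustration of the halving behaviour, with made-up numbers: a method invoked 800 times against a 1500 threshold never gets compiled, so each elapsed half-life period cuts its counter in half.

```java
// Toy illustration of invocation-counter heat decay: if a method has not
// crossed the compile threshold within one half-life period, its counter
// is halved. Numbers and names are illustrative only.
public class CounterDecay {
    static int decay(int counter, int elapsedHalfLives) {
        for (int i = 0; i < elapsedHalfLives; i++) {
            counter /= 2;   // one half-life elapsed without reaching the threshold
        }
        return counter;
    }

    public static void main(String[] args) {
        // 800 calls, threshold 1500: after two quiet half-life periods
        // the counter has dropped to 200, far from triggering compilation.
        System.out.println(decay(800, 2));
    }
}
```

This is why a method that is only moderately busy can stay interpreted forever under the default settings: its heat decays faster than new calls accumulate.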


2.2. Back-edge counter

Its job is to count how many times the loop bodies inside a method execute. In bytecode, an instruction whose control flow jumps backwards is called a "back edge". Unlike the invocation counter, the back-edge counter has no heat-decay process, so it counts the absolute number of loop executions in the method.
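Concretely, every completed iteration of a Java loop executes one backward jump in the bytecode. The sketch below makes that visible by counting iterations by hand; in HotSpot, a back-edge counter crossing its threshold can trigger compilation of a method while its loop is still running (on-stack replacement, OSR).

```java
// Each completed iteration of the loop below executes one backward jump
// ("back edge") in the compiled bytecode. The counter here just makes
// that count explicit; it is an illustration, not a JVM mechanism.
public class BackEdgeDemo {
    static int countBackEdges(int iterations) {
        int backEdges = 0;
        for (int i = 0; i < iterations; i++) {
            backEdges++;          // stands in for one back-edge execution
        }
        return backEdges;
    }

    public static void main(String[] args) {
        // A method called once, but with 10,000 loop iterations, racks up
        // 10,000 back edges -- hot by loop count, not by call count.
        System.out.println(countBackEdges(10_000));
    }
}
```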


3. Tiered compilation

Given the respective strengths and weaknesses of the C1 and C2 compilers described above, the virtual machine generally adopts a tiered compilation strategy (enabled with the -XX:+TieredCompilation parameter). Tiered compilation divides the work into different compilation levels according to the scale and cost of the compiler's compilation and optimization, including:

  1. Tier 0: the program is interpreted, and the interpreter enables profiling, which can trigger Tier 1 compilation.
  2. Tier 1, also called C1 compilation: bytecode is compiled into native code with simple, reliable optimizations, inserting profiling logic where necessary.
  3. Tier 2, also called C2 compilation: bytecode is likewise compiled into native code, but optimizations that take longer to compile are enabled, and even some unreliable, aggressive optimizations based on the profiling data.

With tiered compilation in effect, C1 and C2 work at the same time, and much of the code may be compiled more than once: C1 provides higher compilation speed, while C2 provides higher compilation quality.
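You can check whether tiered compilation is enabled on your JVM through the HotSpot-specific diagnostic MXBean. Note that com.sun.management is a HotSpot extension; other JVM implementations may not provide it.

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Reads the TieredCompilation VM option at runtime (HotSpot only).
// Compare runs with -XX:+TieredCompilation and -XX:-TieredCompilation.
public class TieredCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        String value = bean.getVMOption("TieredCompilation").getValue();
        System.out.println("TieredCompilation = " + value);
    }
}
```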


4. Compilation optimizations

4.1. Method inlining

Method inlining copies the code of the target method into the calling method, avoiding an actual method call.

The JVM automatically recognizes hot methods and optimizes them using method inlining. However, a hot method is not guaranteed to be inlined: for example, if the method body is too large, the JVM will not perform the inlining. By default, a hot method whose body is smaller than 325 bytes will be inlined; this size limit can be adjusted with the -XX:FreqInlineSize=N parameter.
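A trivially small accessor like the one below is well under the size limits, so once hot, the JIT will typically copy its body into the caller. You can observe inlining decisions with the real diagnostic flags shown in the comment; the class and method names are of course made up for this sketch.

```java
// A tiny getter is an ideal inlining candidate: once hot, the JIT copies
// its body into the caller and the call disappears. Observe decisions with:
//   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
public class InlineDemo {
    private final int value;

    InlineDemo(int value) { this.value = value; }

    int getValue() { return value; }      // trivially small: inline candidate

    static long sumValues(InlineDemo[] items) {
        long total = 0;
        for (InlineDemo item : items) {
            total += item.getValue();     // call site the JIT can flatten
        }
        return total;
    }

    public static void main(String[] args) {
        InlineDemo[] items = new InlineDemo[1000];
        for (int i = 0; i < items.length; i++) items[i] = new InlineDemo(i);
        System.out.println(sumValues(items));
    }
}
```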


4.2. Scalar replacement

If escape analysis proves that an object will not be accessed outside its method, and the object can be decomposed, then when the program actually executes, the object may not be created at all; instead, its member variables are created directly.

After the object is split up, its member variables can be allocated on the stack or in registers, and no memory needs to be allocated for the original object. This compilation optimization is called scalar replacement (it requires escape analysis to be enabled).
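In the sketch below, the Point allocated inside computeDistanceSquared never escapes the method, so with escape analysis on (-XX:+DoEscapeAnalysis, the default in modern HotSpot) the JIT may scalar-replace it: x and y live in registers or on the stack, and no heap object is created in the hot path. The class and method names are illustrative.

```java
// The Point below never escapes computeDistanceSquared, so the JIT may
// scalar-replace it: its fields become plain locals and the allocation
// disappears. Compare allocation behaviour and GC pressure with
// -XX:-DoEscapeAnalysis or -XX:-EliminateAllocations.
public class ScalarReplacementDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static long computeDistanceSquared(int x, int y) {
        Point p = new Point(x, y);        // does not escape this method
        return (long) p.x * p.x + (long) p.y * p.y;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += computeDistanceSquared(i % 100, i % 50);
        }
        System.out.println(total);
    }
}
```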


Origin: blog.csdn.net/rockvine/article/details/124864757