Java's JIT

What is JIT

The JIT (just-in-time) compiler kicks in when the virtual machine notices that a method or code block runs very frequently and identifies it as "hot spot code". To improve the execution efficiency of this hot code, the virtual machine compiles it at runtime into machine code for the local platform and applies various levels of optimization. The JIT compiler is the component that performs this work.
The main hot spot detection approaches currently in use are:

  1. Sampling-based hot spot detection:
    A virtual machine using this approach periodically checks the top of each thread's stack. If certain methods frequently appear at the top of the stack, they are considered "hot code". The advantage of this approach is that it is simple and efficient to implement, and the method call relationships are easy to obtain. The disadvantage is that it is hard to determine a method's hotness precisely, and the results are easily disturbed by thread blocking or other external factors.
  2. Counter-based hot spot detection:
    A virtual machine using this approach creates a counter for each method (or even each code block) and counts how many times it executes. Once the count exceeds a threshold, the method is considered a "hot spot method". This approach is more complicated to implement, since a counter must be created and maintained for every method, and the call relationships cannot be obtained directly; however, its results are more precise and rigorous.
    The HotSpot virtual machine uses the second approach, counter-based hot spot detection, so it maintains two counters for each method: the method invocation counter and the back-edge counter.
    • Method invocation counter
    The method invocation counter counts method invocations. By default it counts not the absolute number of invocations but a relative execution frequency, that is, the number of invocations within a period of time.
    • Back-edge counter
    The back-edge counter counts how many times the loop body code in a method executes (to be precise, the number of back-edge jumps, because not every loop produces a back edge). A bytecode instruction that jumps backward in the control flow is called a "back edge".
    After JIT compilation is triggered, under the default settings the execution engine does not wait synchronously for the compilation request to finish; it continues executing the bytecode in the interpreter until the submitted request has been compiled (compilation happens on a background thread). Once compilation completes, the next invocation of the method or code uses the compiled version.
    This is how the method invocation counter triggers just-in-time compilation (the process triggered by the back-edge counter is similar).
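The counter-based detection described above can be sketched in a few lines. This is a purely conceptual model, not the actual HotSpot implementation: the class name, the map-based counter storage, and the way the threshold is checked are all our own illustration; only the 10000 threshold corresponds to the real server-compiler default.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of counter-based hot spot detection (NOT how HotSpot
// actually stores counters): each method name maps to an invocation count,
// and crossing a threshold marks the method "hot".
public class HotSpotCounterSketch {
    // Analogous to -XX:CompileThreshold (default 10000 for the server compiler).
    static final int COMPILE_THRESHOLD = 10_000;

    private final Map<String, Integer> invocationCounters = new HashMap<>();

    /** Record one invocation; returns true exactly when the method becomes "hot". */
    public boolean recordInvocation(String methodName) {
        int count = invocationCounters.merge(methodName, 1, Integer::sum);
        // In the real JVM, reaching the threshold submits a compile request
        // to a background compiler thread; here we just report it.
        return count == COMPILE_THRESHOLD;
    }

    public static void main(String[] args) {
        HotSpotCounterSketch profiler = new HotSpotCounterSketch();
        boolean becameHot = false;
        for (int i = 0; i < 10_000; i++) {
            becameHot |= profiler.recordInvocation("Foo.bar()");
        }
        System.out.println("Foo.bar() hot: " + becameHot);
    }
}
```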
    How the JIT optimizes JVM performance
    In general, the JIT uses the following techniques to improve performance:
  1. Compilation for the specific CPU model: the JVM uses the SIMD instruction sets supported by the CPU when compiling hot code. For example, Intel's SSE2 instruction set can improve performance by nearly 40x in certain circumstances.
  2. Reducing lookups: for example, when Object.equals() is called and the runtime finds that the receiver is always a String, the compiled code can call String.equals() directly, skipping the step of looking up which method to invoke.
  3. Escape analysis: Java variables are allocated on the heap by default, but if a variable in a method does not escape its scope and will never be referenced by another method or thread, the JVM can consider allocating it on the stack instead, reducing GC pressure. Escape analysis also enables other performance-enhancing techniques such as lock optimization.
  4. Register allocation: some variables can be kept in registers, which are much faster to read than main memory.
  5. Caching compiled machine code for hot code: the code cache has a fixed size, and once it fills up, the JVM cannot compile any more code.
  6. Method inlining: a very useful optimization implemented by the JIT, and the one where developers can most easily participate in JIT performance tuning.
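To make the escape-analysis point above concrete, here is a minimal sketch of the allocation pattern it targets. The class and method names are our own; the claim is only that an object which never leaves its method is a candidate for stack allocation or scalar replacement when the method is JIT-compiled (escape analysis is on by default in modern HotSpot via -XX:+DoEscapeAnalysis).

```java
// Illustrative example of a non-escaping allocation.
public class EscapeAnalysisExample {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point is not returned, stored in a field, or handed to another
    // thread, so it does not escape: after JIT compilation the allocation
    // may be moved to the stack or eliminated entirely (scalar replacement).
    static long distanceSquared(int x, int y) {
        Point p = new Point(x, y);
        return (long) p.x * p.x + (long) p.y * p.y;
    }

    public static void main(String[] args) {
        long sum = 0;
        // A hot loop: once this method is compiled, the per-iteration heap
        // allocation can disappear, reducing GC pressure.
        for (int i = 0; i < 1_000_000; i++) {
            sum += distanceSquared(i % 100, i % 50);
        }
        System.out.println(sum);
    }
}
```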

What is method inlining, and why does it improve performance

To understand why method inlining is useful, we first need to know what happens when a method is called:

  1. There is a stack that stores all currently active methods along with their local variables and parameters.
  2. When a new method is called, a new stack frame is pushed onto the top of the stack; the method's local variables and parameters are stored in this frame.
  3. Execution jumps to the target method's code.
  4. When the method returns, its local variables and parameters are destroyed and the frame is popped off the top of the stack.
  5. Execution returns to the original address and continues.
    A method call therefore carries a certain time and space overhead. When a small method is called very frequently, this overhead becomes relatively large and very uneconomical, and the program's performance suffers.
    Method inlining is the technique of "copying" the callee's code into the caller to eliminate the cost of the call.
    The code before inlining:

```java
private int add4(int x1, int x2, int x3, int x4) {
    return add2(x1, x2) + add2(x3, x4);
}

private int add2(int x1, int x2) {
    return x1 + x2;
}
```

    After running for a while, inlining effectively rewrites it to:

```java
private int add4(int x1, int x2, int x3, int x4) {
    return x1 + x2 + x3 + x4;
}
```

    The JVM automatically identifies hot methods and applies inlining to them. So how many times does a piece of code need to execute before JIT optimization is triggered? This threshold is set by the -XX:CompileThreshold parameter:
    1. With the client compiler, the default is 1500;
    2. With the server compiler, the default is 10000.
    However, even when the JVM marks a method as hot, it will not necessarily inline it. One common reason is that the method is too large, which breaks down into two cases:
    • If the method executes frequently, then by default a method smaller than 325 bytes is inlined (this size can be set with -XX:MaxFreqInlineSize=N).
    • If the method does not execute frequently, then by default only a method smaller than 35 bytes is inlined (this size can be set with -XX:MaxInlineSize=N).
    We can increase these sizes so that more methods are inlined, but unless it measurably improves performance, changing these parameters is not recommended: larger method bodies consume more code memory, fewer hot methods fit in the code cache, and the net effect may be worse.
    To see how methods are inlined, you can configure the following JVM parameters:
    -XX:+PrintCompilation: prints when JIT compilation happens
    -XX:+UnlockDiagnosticVMOptions: needed to unlock diagnostic flags such as -XX:+PrintInlining
    -XX:+PrintInlining: prints which methods were inlined
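As a concrete way to try these flags, here is a small program of our own (the class name and the loop count are illustrative) whose tiny hot add2 method is a prime inlining candidate. Run it with `java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo`; after enough iterations the inlining log should mention add2, though the exact wording varies by JVM version.

```java
// Small hot method + hot call site: a typical case the inlining log shows.
public class InlineDemo {
    // Tiny private callee: well under the 35/325-byte inlining limits.
    private static int add2(int a, int b) { return a + b; }

    // Runs the hot loop; returns the sum of add2(i, i + 1) for i in [0, n).
    public static long sumLoop(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += add2(i, i + 1); // hot call site: candidate for inlining
        }
        return sum;
    }

    public static void main(String[] args) {
        // Enough iterations to cross the compile threshold.
        System.out.println(sumLoop(1_000_000));
    }
}
```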

In conclusion

Inlining optimization suggestions for hot methods

  1. Keep method bodies small
  2. Use final, private, and static modifiers where possible
  3. Verify the effect with the -XX:+PrintInlining parameter
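A brief sketch applying these suggestions (all class and method names are our own). private and static methods, and methods of final classes, cannot be overridden, so the JIT can bind them statically and inline them without a virtual-dispatch check.

```java
// Inline-friendly style: small bodies, final class, private/static helpers.
public final class InlineFriendly {          // final class: no subclasses
    private final int base;

    public InlineFriendly(int base) { this.base = base; }

    // Small private instance method: statically bound, easy to inline.
    private int scaled(int x) { return x * base; }

    // Small static helper: no receiver, trivially inlinable.
    private static int clampNonNegative(int x) { return Math.max(x, 0); }

    public int apply(int x) {
        return clampNonNegative(scaled(x));
    }
}
```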


Origin: blog.csdn.net/qq_17010193/article/details/114391421