JVM Execution Engine Study Notes

Table of Contents

Background

Overview

Java code compilation and execution process

Interpreter and JIT compiler

Static ahead-of-time (AOT) compiler

JIT compiler

Method call counter and back-edge counter

Method call counter

Back-edge counter

Setting the program execution mode

Server mode and client mode

C1 compiler

C2 compiler

Tiered compilation


Background

The following are my study notes on the JVM execution engine. In addition, a worked example of the execution engine can be found in the operand stack part of the article JVM study notes (Overview - Native Method Stack).

Overview

The execution engine of the virtual machine is implemented in software, so it can execute instruction set formats that are not directly supported by the hardware.

The JVM's main task is to load bytecode into the virtual machine; the execution engine is then responsible for interpreting or compiling the bytecode instructions into machine instructions for the underlying platform.

 

The bytecode instruction executed by the execution engine at any point depends entirely on the PC register: after each instruction is executed, the PC register is updated with the address of the next instruction. During method execution, the execution engine can precisely locate an object instance stored in the heap through the object reference in the local variable table, and can locate the target object's type information through the metadata pointer in the object header.

 

Viewed from the outside, the input and output of all JVM execution engines are the same: the input is a binary bytecode stream, the processing is the parsing and execution (or compilation) of that bytecode, and the output is the execution result.

Java code compilation and execution process

Before most program code can be converted into the target code of a physical machine or the instructions of a virtual machine, it has to go through the steps shown in the figure below.

For the Java language, the front-end compiler javac is responsible for the part from program source code to the abstract syntax tree; going from the abstract syntax tree to interpreted execution is the interpreter's work, and going from the abstract syntax tree to target code is the back-end compiler's work.

The flowchart of Java code compilation is shown below

The execution process of Java bytecode is shown in the figure below
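
As a minimal sketch of the front end of this pipeline (the file name Hello.java is just an example), the class below can be compiled with javac and its bytecode inspected with javap:

// Hello.java -- tiny example class for inspecting front-end compilation output
public class Hello {
    public static void main(String[] args) {
        int a = 1;
        int b = 2;
        System.out.println(a + b);   // becomes load/iadd/invokevirtual bytecode
    }
}

// Front-end compilation (source -> bytecode):  javac Hello.java
// Inspect the generated bytecode:              javap -c Hello
// The javap output shows instructions such as iconst_1, istore_1, iload_1, iadd, invokevirtual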

Interpreter and JIT compiler

Interpreter: interprets bytecode instructions one by one, translating each bytecode instruction into machine instructions and executing it immediately

JIT (just-in-time) compiler: compiles hot bytecode into native machine code as a whole, rather than translating and executing it instruction by instruction

 

Interpreter advantage: fast response at startup, since no compilation time is spent

JIT compiler advantage: fast execution of the compiled code

Static ahead-of-time (AOT) compiler

Compiling Java files directly into native machine code ahead of time speeds up the first run, but a separate release package must be provided for each hardware/OS combination, and it reduces the dynamism of Java's linking process. AOT is still being improved, and at present it only supports Linux x64.
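
A hedged sketch of how AOT compilation can be tried out, assuming a JDK that ships the experimental jaotc tool (JDK 9+) on Linux x64; the class name HelloWorld and the library name are only examples:

// HelloWorld.java -- example class for an AOT experiment
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, AOT");
    }
}

// javac HelloWorld.java                               (normal bytecode compilation)
// jaotc --output libHelloWorld.so HelloWorld.class    (AOT-compile the class into a native shared library)
// java -XX:AOTLibrary=./libHelloWorld.so HelloWorld   (run, loading the precompiled code)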

JIT compiler

Based on how frequently code is executed, the JIT compiler selects hot code that runs repeatedly, performs deep optimization on it, and compiles it into native machine instructions.

Typical hot code includes methods that are called many times and loop bodies that iterate many times. When a loop body is compiled while its method is still executing on the stack, the compilation is called OSR (On-Stack Replacement) compilation.

 

How many repeated executions make code "hot" is decided by hot spot detection. The hot spot detection currently used by HotSpot is counter-based.

Method call counter and back-edge counter

HotSpot creates two different kinds of counters for each method, namely the method call counter and the back-edge counter.

Method call counter

The method call counter counts the number of times a method is invoked, and the back-edge counter counts the number of loop iterations.

 

The method call counter threshold is 1,500 in client mode and 10,000 in server mode; exceeding the threshold triggers JIT compilation.

This threshold can be set by the parameter -XX:CompileThreshold

When a method is called, the JVM first checks whether a JIT-compiled version of the method exists. If it does, the compiled machine code is executed directly. If it does not, the method call counter is incremented by 1 and compared against the threshold: if the threshold is exceeded, the method is JIT-compiled into machine instructions, the result is cached in the code cache, and the machine code is then executed; otherwise the method is interpreted.
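
A minimal sketch for observing this (the class and method names are made up, and the exact log output depends on the JVM build):

// CompileThresholdDemo.java -- hypothetical demo of a method becoming hot
public class CompileThresholdDemo {

    // A small method that is invoked far more often than the call-counter threshold
    static long square(int x) {
        return (long) x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 200_000; i++) {   // well above the 10,000 server-mode threshold
            sum += square(i);
        }
        System.out.println(sum);
    }
}

// java -XX:+PrintCompilation CompileThresholdDemo     (logs each method as it gets JIT-compiled)
// java -XX:-TieredCompilation -XX:CompileThreshold=5000 CompileThresholdDemo
//     (when tiered compilation is enabled, tier-specific thresholds apply instead of CompileThreshold)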

Heat decay and half-life:

The method call counter actually counts the number of calls within a period of time. If, after a certain time limit, the method's call count is still not enough to trigger JIT compilation, the count is halved. This process is called heat decay, and the length of that period is called the half-life.

 

Heat decay is carried out incidentally during garbage collection. It can be turned off with the parameter -XX:-UseCounterDecay, in which case the method call counter counts the absolute number of method calls.

The half-life can be set using the parameter -XX:CounterHalfLifeTime, in seconds

Back-edge counter

The back-edge counter counts how many times a loop body executes. A place where control flow jumps backward in the bytecode is called a back edge. This counter is used to trigger OSR compilation.

For a loop body, JIT (OSR) compilation is triggered when the sum of its back-edge counter and the enclosing method's call counter exceeds the threshold.
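
A minimal sketch of a loop that is a candidate for OSR compilation (names are illustrative). In -XX:+PrintCompilation output, OSR compilations are marked with a '%':

// OsrDemo.java -- hypothetical example: a hot loop inside a single method invocation
public class OsrDemo {
    public static void main(String[] args) {
        long sum = 0;
        // main() is invoked only once, so its method call counter stays tiny;
        // it is the back-edge counter of this loop that drives compilation,
        // and the compiled code is swapped in while main() is still on the stack (OSR)
        for (int i = 0; i < 5_000_000; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

// java -XX:+PrintCompilation OsrDemo     (entries marked with '%' are OSR compilations)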

 

Setting the program execution mode

-Xint: purely interpreted execution

-Xcomp: purely JIT-compiled execution

-Xmixed: mixed execution (interpreter plus JIT)

C:\Users\songzeceng>java -Xint -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, interpreted mode)


C:\Users\songzeceng>java -Xcomp -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, compiled mode)


C:\Users\songzeceng>java -Xmixed -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)

Server mode and client mode

A 64-bit JVM defaults to server mode (and only ships the server VM); in general the JVM operating mode can be specified with the -client and -server options.

Server mode uses the C2 compiler (implemented in C++). Its optimizations take longer and are more aggressive, and the compiled code executes more efficiently.

Client mode uses the C1 compiler, which performs simple, reliable optimizations. Compilation is fast, but the compiled code executes less efficiently.

C1 compiler

The C1 compiler mainly performs method inlining, devirtualization, and redundancy elimination (a small sketch follows the list below)

Method inlining: compiles the called method's code directly into the call site, which avoids creating an extra stack frame and eliminates parameter passing and the jump to the callee

Devirtualization: when a virtual (interface) call has only one implementation class, the call is turned into a direct call so that it can be inlined

Redundancy elimination: folds away code that can be determined never to execute at run time
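
A small sketch of the code shapes that inlining and devirtualization target; the names are made up, and whether the JIT actually applies these optimizations is its own decision:

// C1Shapes.java -- hypothetical code shapes for inlining and devirtualization
public class C1Shapes {

    interface Shape { double area(); }

    // Circle is the only implementation in this program, so calls through Shape
    // are devirtualization candidates, and the tiny area() body is an inlining candidate
    static class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();   // once inlined: no extra stack frame, parameter passing, or jump
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[100_000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i % 10);
        System.out.println(totalArea(shapes));
    }
}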

C2 compiler

The optimizations used by the C2 compiler are based on escape analysis and include scalar replacement, stack allocation, and synchronization elimination (a sketch follows the list below)

Scalar replacement: replaces an aggregate object with scalar values for its individual fields, so the object itself does not need to be allocated

Stack allocation: objects that do not escape are allocated on the stack rather than on the heap

Synchronization elimination: removes synchronization operations on objects that cannot be accessed by any other thread
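
A small sketch of an object that never escapes its method (illustrative only). The HotSpot flags -XX:+DoEscapeAnalysis, -XX:+EliminateAllocations and -XX:+EliminateLocks exist and are on by default in recent JDKs; whether the optimizations fire for this exact code is up to the JIT:

// EscapeDemo.java -- hypothetical example of a non-escaping object
public class EscapeDemo {

    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static long sumProducts(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1);   // p never leaves this method
            synchronized (p) {               // lock on a non-escaping object: synchronization-elimination candidate
                total += (long) p.x * p.y;   // fields usable as plain scalars: scalar-replacement / stack-allocation candidate
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumProducts(5_000_000));
    }
}

// java -XX:+DoEscapeAnalysis -XX:+EliminateAllocations -XX:+EliminateLocks EscapeDemo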

Tiered compilation

Tiered compilation: while profiling (performance monitoring) information is not yet being collected, interpreted execution can trigger C1 compilation; once profiling information is available, the C2 compiler performs aggressive optimizations based on it

Since JDK 8, server mode enables the tiered compilation strategy by default, and C1 and C2 cooperate to carry out compilation tasks

 

With the C2 compiler, startup is slower than with C1, but the compiled code executes much more efficiently.
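
A hedged sketch of flags for experimenting with tiered compilation, reusing the hypothetical CompileThresholdDemo class from the earlier sketch (the flag names exist in HotSpot; exact tier numbering and defaults depend on the JDK version):

java -XX:+TieredCompilation -XX:+PrintCompilation CompileThresholdDemo
    (the compilation log shows a tier level per entry when tiered compilation is on)
java -XX:-TieredCompilation -version
    (disables tiered compilation; server mode then uses only C2)
java -XX:TieredStopAtLevel=1 -version
    (stops at tier 1, i.e. C1 only, trading peak performance for faster startup)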


Origin blog.csdn.net/qq_37475168/article/details/106491851