3. Runtime data area and program counter

3.1. Runtime data area

3.1.1. Overview

This section mainly discusses the runtime data area, the stage that comes after class loading is completed. Once a class has passed through the earlier steps: loading -> verification -> preparation -> resolution -> initialization, the execution engine uses our classes, and while doing so the execution engine uses the runtime data area.

Memory is a very important system resource. It is the intermediate warehouse and bridge between the hard disk and the CPU, and it carries the real-time execution of the operating system and of applications. The JVM memory layout stipulates the strategies for requesting, allocating, and managing memory while a Java program runs, which ensures that the JVM runs efficiently and stably. Different JVMs differ somewhat in how they divide and manage memory; combining this with the JVM specification, let's discuss the classic JVM memory layout.

As an analogy, the things behind a chef (chopped vegetables, knives, seasonings) can be likened to the runtime data area, and the chef himself to the execution engine, which turns the prepared ingredients into exquisite dishes.

The data we obtain through disk or network I/O must first be loaded into memory, and the CPU then reads it from memory; in other words, memory acts as a bridge between the CPU and the disk.
The Java virtual machine defines several runtime data areas that will be used during program running. Some of them will be created when the virtual machine starts and destroyed when the virtual machine exits. Others correspond one-to-one with threads. These data areas corresponding to threads will be created and destroyed as the threads start and end.

In the memory structure diagram, the gray areas are private to a single thread, and the red areas are shared by multiple threads. That is:

● Per thread: each thread independently has its own program counter, virtual machine stack, and native method stack.
● Shared between threads: the heap, and off-heap memory (the permanent generation or metaspace, and the code cache).
Each JVM has only one Runtime instance: it is the runtime environment, equivalent to the box labeled "runtime environment" in the middle of the memory structure diagram.
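As a minimal sketch (the class name and printed labels are only illustrative), the single Runtime instance can be obtained and queried like this:

public class RuntimeDemo {
    public static void main(String[] args) {
        // Runtime has no public constructor; every piece of code in the JVM
        // gets the same singleton via the static factory method.
        Runtime runtime = Runtime.getRuntime();

        System.out.println("processors  : " + runtime.availableProcessors());
        System.out.println("max memory  : " + runtime.maxMemory()   / (1024 * 1024) + " MB");
        System.out.println("total memory: " + runtime.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free memory : " + runtime.freeMemory()  / (1024 * 1024) + " MB");
    }
}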

3.1.2. Threads

A thread is a unit of execution in a program. The JVM allows an application to have multiple threads executing in parallel. In the Hotspot JVM, each thread is directly mapped to the operating system's native thread.

When a Java thread is ready to execute, a native operating system thread is created at the same time. After the Java thread terminates, the native thread is reclaimed as well.

The operating system is responsible for scheduling all threads onto any available CPU. Once the native thread has been initialized successfully, it calls the run() method of the Java thread.
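A minimal sketch of that life cycle (class and thread names are only illustrative): start() asks the JVM to create the native thread, and it is that native thread which ends up invoking run().

public class ThreadMappingDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(
                () -> System.out.println("run() executed on " + Thread.currentThread().getName()),
                "worker");

        worker.start();   // creates and starts the underlying native thread
        worker.join();    // wait for termination; the native thread is then reclaimed

        // Note: calling worker.run() directly would execute the body on the
        // current thread and would not create any native thread at all.
    }
}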

3.1.3. JVM system thread

If you use jconsole or any debugging tool, you can see that there are many threads running in the background. These background threads do not include the main thread, which calls public static void main(String[] args), or the threads created by the main thread itself.

The main background system threads in the HotSpot JVM are the following:

● Virtual machine thread: the operations of this thread only occur once the JVM has reached a safe point. The reason these operations must happen in a separate thread is that they all require the JVM to reach a safe point so that the heap does not change while they run. The work this thread performs includes "stop-the-world" garbage collection, thread stack dumps, thread suspension, and biased-lock revocation.
● Periodic task thread: this kind of thread embodies timed periodic events (such as interrupts) and is generally used to schedule and execute periodic operations.
● GC thread: This thread provides support for different types of garbage collection behaviors in the JVM.
● Compilation thread: This thread compiles bytecode into native code at runtime.
● Signal dispatch thread: This thread receives signals and sends them to the JVM, which processes them internally by calling appropriate methods.
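Besides jconsole, you can also list the threads visible to Java code programmatically. A minimal sketch (purely native threads such as the VM, GC, and compiler threads are not Java Thread objects, so only system threads like "Reference Handler", "Finalizer", or "Signal Dispatcher" typically appear here):

import java.util.Map;

public class SystemThreadsDemo {
    public static void main(String[] args) {
        // Dump every live thread the JVM exposes to Java code.
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        for (Thread t : all.keySet()) {
            System.out.printf("%-30s daemon=%b priority=%d%n",
                    t.getName(), t.isDaemon(), t.getPriority());
        }
    }
}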

3.2. Program counter (PC register)

In the JVM's Program Counter Register, "register" is named after the CPU register, which stores instruction-related context; the CPU can only run after loading data into its registers. Here, though, it is not a physical register in the hardware sense. It would perhaps be more appropriate to translate it as PC counter (or instruction counter, also called a program hook), which would be less likely to cause unnecessary misunderstanding. The PC register in the JVM is an abstract simulation of the physical PC register.
The PC register is used to store the address pointing to the next instruction, that is, the instruction code about to be executed; the execution engine reads the next instruction from it.
It is a small memory space and can almost be ignored. It is also the fastest storage area.

In the JVM specification, each thread has its own program counter, which is private to the thread, and its life cycle is consistent with the life cycle of the thread.

At any given time, a thread is executing only one method, the so-called current method. The program counter stores the JVM instruction address of the Java method being executed by the current thread; or, if a native method is being executed, an unspecified (undefined) value.

It is an indicator of program control flow: basic functionality such as branches, loops, jumps, exception handling, and thread restoration after a context switch all rely on this counter.

When the bytecode interpreter works, it selects the next bytecode instruction that needs to be executed by changing the value of this counter.

It is the only area for which the Java Virtual Machine Specification does not specify any OutOfMemoryError conditions.
For example:

public int minus(){
    int c = 3;
    int d = 4;
    return c - d;
}

Bytecode file:

0: iconst_3   // push the int constant 3 onto the operand stack
1: istore_1   // store it into local variable slot 1 (c)
2: iconst_4   // push the int constant 4
3: istore_2   // store it into local variable slot 2 (d)
4: iload_1    // load c back onto the operand stack
5: iload_2    // load d
6: isub       // pop both values and push c - d
7: ireturn    // return the int result
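A listing like this can be produced with javap -c on the compiled class (the exact output depends on your compiler and source). The numbers 0 to 7 on the left are not source line numbers but the byte offsets of the instructions within the method's bytecode, and it is one of these offsets that the PC register holds: if the PC register of the current thread holds 3, for example, the instruction at offset 3 (istore_2) is the one the execution engine will execute next.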


What is the point of using the PC register to store bytecode instruction addresses?
In other words, why use the PC register to record the execution address of the current thread?

Because the CPU constantly switches between threads, after switching back to a thread it has to know where that thread left off so that it can continue execution from there.

The JVM's bytecode interpreter needs to change the value of the PC register to determine what bytecode instruction should be executed next.
Why is the PC register thread-private?

We all know that in a multithreaded program, only one thread actually executes on a given core during any particular slice of time; the CPU constantly switches between tasks, which inevitably leads to frequent interruption and restoration. How do we make sure nothing is lost across these switches? To accurately record the bytecode instruction address that each thread is currently executing, the best approach is to allocate a PC register to every thread, so that each thread can compute independently and the threads do not interfere with one another.

Because of CPU time-slice round-robin scheduling, during the concurrent execution of many threads a processor (or a single core of a multi-core processor) executes, at any given moment, only one instruction from one particular thread.

This inevitably leads to frequent interruption and restoration, so how do we make sure each thread resumes at the right point? After each thread is created, it gets its own program counter and stack frames, and the program counters of different threads do not affect one another.
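A small sketch of this independence (class and thread names are only illustrative): both threads execute exactly the same bytecode, but because each has its own program counter and stack frames, their progress never interferes.

public class PerThreadCounterDemo {
    public static void main(String[] args) {
        Runnable countdown = () -> {
            // Each thread advances through this loop at its own pace,
            // tracked by its own program counter.
            for (int i = 3; i > 0; i--) {
                System.out.println(Thread.currentThread().getName() + " -> " + i);
            }
        };
        new Thread(countdown, "worker-1").start();
        new Thread(countdown, "worker-2").start();
    }
}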

CPU time slice

The CPU time slice is the time allocated by the CPU to each program. Each thread is assigned a time period, called its time slice.

At the macro level: we can open multiple applications at the same time, and each program appears to run simultaneously with the others.

But at the micro level: since there is only one CPU, which can serve only part of the programs' demands at any moment, one way to deal with this fairly is to introduce time slices so that the programs execute in turn.

The essence of concurrency is that one physical CPU (or multiple physical CPUs) is multiplexed between several programs. Concurrency is the forced sharing of limited physical resources by multiple users to improve efficiency.
Parallelism means that two or more events or activities occur at the same moment. In a multiprogramming environment, parallelism allows multiple programs to execute simultaneously on different CPUs.
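A rough sketch of the difference (executor and class names are only illustrative): a single-threaded executor multiplexes all the tasks onto one thread (concurrency), while a pool sized to the number of cores lets tasks truly run at the same time (parallelism).

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrencyVsParallelismDemo {
    public static void main(String[] args) {
        Runnable task = () ->
                System.out.println("task running on " + Thread.currentThread().getName());

        // Concurrency: one worker thread is multiplexed between all the tasks.
        ExecutorService singleWorker = Executors.newSingleThreadExecutor();

        // Parallelism: up to one worker per core, so tasks can run simultaneously.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < 4; i++) {
            singleWorker.submit(task);
            pool.submit(task);
        }
        singleWorker.shutdown();
        pool.shutdown();
    }
}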

