JVM_03 runtime data area 1_pc

01 Memory

Memory is a critical system resource: it is the intermediate staging area and bridge between the hard disk and the CPU, and it carries the operating system and applications as they run. The JVM memory layout defines how Java requests, allocates, and manages memory at run time, which is what keeps the JVM running efficiently and stably. Different JVMs differ somewhat in how they divide and manage memory (for HotSpot, the differences mainly concern the method area).

[Figure: JVM memory layout (image source: Alibaba)]

In JDK 8, the metaspace plus the JIT compilation products (the code cache) together correspond to what was the method area before JDK 8.

02 Introduction to the runtime data areas

The Java virtual machine defines a number of runtime data areas that are used while a program runs. Some of them are created when the virtual machine starts and destroyed when it exits; the others correspond one-to-one with threads, and these per-thread data areas are created and destroyed as their thread starts and ends.
As shown in the figure below, the gray areas are private to a single thread, and the red areas are shared by all threads, namely:

  • Per thread: program counter (PC register), virtual machine stack, and native method stack
  • Shared between threads: heap and off-heap memory (the method area, i.e. permanent generation or metaspace, and the code cache)

[Figure: JVM runtime data areas, thread-private vs. shared]
Roughly speaking, about 95% of JVM tuning effort goes into the heap and about 5% into the method area.
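To see this division concretely, here is a minimal sketch (the class name RuntimeAreasDemo is my own) that uses the standard java.lang.management API to print the shared heap and the non-heap (off-heap) areas of the running JVM; the exact pool names, such as "Metaspace" or the code-heap segments, depend on the JVM version and the garbage collector in use.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

public class RuntimeAreasDemo {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // The heap is shared by all threads; "non-heap" covers the method area
        // (metaspace) and the code cache holding JIT-compiled code.
        System.out.println("Heap:     " + memory.getHeapMemoryUsage());
        System.out.println("Non-heap: " + memory.getNonHeapMemoryUsage());

        // Individual memory pools reported by this JVM.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getType() + " | " + pool.getName() + " | " + pool.getUsage());
        }
    }
}
```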

03 Threads

  • A thread is a unit of execution within a program. The JVM allows an application to run multiple threads in parallel; in the HotSpot JVM, each Java thread maps directly to a native operating-system thread.

  • When a Java thread is ready to run, a native operating-system thread is created for it at the same time. Once the Java thread terminates, the native thread is reclaimed as well.

  • The operating system is responsible for scheduling all threads onto any available CPU. Once the native thread has been initialized successfully, it calls the run() method of the Java thread (see the sketch below).
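As a small illustration of the points above, here is a sketch (the class name ThreadStartDemo is mine) showing the difference between start(), which asks the JVM to create a native thread that then invokes run(), and calling run() directly, which creates no new thread at all.

```java
public class ThreadStartDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () ->
                System.out.println("run() executed on: " + Thread.currentThread().getName());

        Thread worker = new Thread(work, "java-worker");
        worker.start();   // the JVM creates a native OS thread, which then calls run()
        worker.join();    // wait for the worker; afterwards its native thread is reclaimed

        work.run();       // calling run() directly starts no thread: it runs on "main"
    }
}
```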

2.1 JVM system threads

  • If you use jconsole or any debugging tool, you can see many threads running in the background. These background threads do not include the main thread (the one that calls the main method) or the threads the main thread creates itself.

  • In the HotSpot JVM, the main background system threads are the following (a small code sketch for inspecting them comes after the list):

    • VM thread: the operations of this thread can only happen once the JVM has reached a safepoint. These operations must run on a separate thread because they all require the JVM to be at a safepoint, where the heap does not change. The work this thread performs includes "stop-the-world" garbage collection, thread stack dumps, thread suspension, and biased-lock revocation.
    • Periodic task thread: this thread embodies timer events (such as interrupts) and is generally used to schedule the execution of periodic operations.
    • GC threads: these threads support the different kinds of garbage collection activity in the JVM.
    • Compiler threads: these threads compile bytecode to native code at run time (just-in-time compilation).
    • Signal dispatcher thread: this thread receives signals sent to the JVM process and handles them by calling the appropriate JVM methods.
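One quick way to glimpse some of these background threads from code is the sketch below (my own example, not part of the original article). Note that Thread.getAllStackTraces() only lists Java-level threads, so names such as "Reference Handler", "Finalizer", or "Signal Dispatcher" may appear depending on the JDK version, while purely internal threads such as the VM thread or the GC threads are visible only in tools like jconsole or jstack.

```java
public class BackgroundThreadsDemo {
    public static void main(String[] args) {
        // Print every Java-level thread the JVM is currently running.
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            System.out.println(t.getName() + (t.isDaemon() ? " (daemon)" : ""));
        }
    }
}
```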

1. Program counter (PC register)

The name of the JVM's Program Counter Register is derived from the CPU's registers, which hold the execution context related to instructions; the CPU can only run by loading data into its registers. The PC register in the JVM, however, is an abstract simulation of the physical PC register.


1.1 Role

The PC register stores the address of the next instruction to execute, that is, the bytecode instruction about to be run; the execution engine reads the next instruction from it.

  • It is a very small memory space, almost negligible, and it is also the fastest storage area.
  • In the JVM specification, each thread has its own program counter; it is private to the thread, and its lifetime matches the thread's lifetime.
  • At any moment a thread is executing only one method, the so-called current method. The program counter stores the JVM instruction address of the Java method the current thread is executing; if a native method is being executed, the counter's value is undefined.
  • It is the indicator of program control flow: branches, loops, jumps, exception handling, and resuming a thread after a context switch all rely on this counter.
  • When the bytecode interpreter works, it selects the next bytecode instruction to execute by changing the value of this counter.
  • It is the only area for which the Java Virtual Machine Specification does not prescribe any OutOfMemoryError conditions.
1.2 Code example

Use javap -v xxx.class to disassemble a bytecode file and view the instructions and other information.

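For example, compiling the small class below (my own example, named PcDemo) and running javap on it shows instruction offsets on the left; those offsets are exactly the addresses the PC register steps through. The listed output is indicative, and constant-pool details differ between compiler versions.

```java
// Compile with `javac PcDemo.java`, then inspect with `javap -v PcDemo`.
// For add(int, int), javap prints roughly the following; the numbers on the
// left are the instruction addresses the PC register steps through:
//   0: iload_0    // push the first int argument
//   1: iload_1    // push the second int argument
//   2: iadd       // add the two values on top of the operand stack
//   3: ireturn    // return the int result
public class PcDemo {
    public static int add(int a, int b) {
        return a + b;
    }
}
```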

1.3 Frequently asked interview questions
1. What is the point of using the PC register to store bytecode instruction addresses?
Or: why use the PC register to record the execution address of the current thread?
  • Because the CPU constantly switches between threads, a thread that is switched back in has to know where to continue executing. The JVM's bytecode interpreter changes the value of the PC register to determine which bytecode instruction should be executed next.
2. Why is the PC register thread-private?
  • As we all know, so-called multithreading means that only one of the threads executes during any given slice of time; the CPU keeps switching tasks, which inevitably causes frequent interruption and resumption. How do we make sure nothing goes wrong when a thread resumes? To record accurately the bytecode instruction address each thread is currently executing, the best approach is naturally to give every thread its own PC register, so that the threads compute independently and do not interfere with one another.
  • Because of CPU time slicing, among the many concurrently executing threads, at any given instant one processor (or one core of a multi-core processor) executes only one instruction of one particular thread.
  • This inevitably leads to frequent interruption and resumption, so how do we keep things consistent? After each thread is created, it gets its own program counter and stack frames, and the program counters of different threads do not affect one another.
CPU time slice
  • A CPU time slice is the amount of time the CPU allocates to each program; every thread gets such a period, which is called its time slice.
  • At the macro level, we can open several applications at once and each of them appears to run simultaneously without conflict.
  • At the micro level, however, since there is only one CPU (or core), it can serve only part of the work at any instant. One fair way to handle this is to introduce time slices so that the programs execute in turn.
Parallelism and concurrency
  • Parallelism: multiple threads executing at the same instant, e.g. on different cores;
  • Concurrency: a single core rapidly switching among multiple threads so that they execute in turn; it looks like parallelism but is in fact concurrent execution (the demo below makes the difference visible).
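A tiny demo (mine, named InterleaveDemo) makes the difference tangible: the two workers' output interleaves differently on every run because the scheduler decides when each one gets the CPU, and each thread's own PC register is what lets it pick up where it left off.

```java
public class InterleaveDemo {
    public static void main(String[] args) throws InterruptedException {
        // With 2+ cores the workers may truly run in parallel;
        // on a single core they are merely interleaved (concurrency).
        System.out.println("cores: " + Runtime.getRuntime().availableProcessors());

        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };
        Thread a = new Thread(task, "worker-A");
        Thread b = new Thread(task, "worker-B");
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```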

Source: blog.csdn.net/qq_43141726/article/details/114550297