What is a thread? Concurrency - JUC Concurrency Series (2)

Review

In the previous article we introduced the modern computer model, the CPU cache coherence protocol, and how the CPU and memory work together, all of which lays the groundwork for learning Java concurrent programming.

Introduction

In this article we look at a basic concept: what is a thread? And what is the difference between a Java thread and an operating-system thread?

What is a thread

When a modern operating system runs a program, it creates a process. For example, when you start a Java program, a browser, or any other application, the operating system creates a process for it. The basic unit of CPU scheduling in a modern operating system is the thread, also called a lightweight process (Light Weight Process). A process can create multiple threads; each thread has its own stack, local variables, program counter, and other properties, and all of them can access shared memory variables. The processor switches between these threads at high speed, which makes them appear to execute concurrently.

A process is the basic unit of resource allocation in the system, and a thread is the basic unit of CPU scheduling. A process contains at least one thread of execution (the main thread), and every thread belongs to a process. Each thread has its own set of registers (holding the variables the thread is currently working with), a stack (recording its execution history, where each frame holds a call that has been made but has not yet returned), and a program counter (recording the next instruction to execute).

The CPU gives each thread a time slice in which to execute its code, then switches rapidly between threads to achieve concurrency.
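
To make this concrete, here is a minimal, illustrative Java sketch (the class and thread names are made up for the example): two threads in the same process run the same code, and the scheduler's time slicing interleaves their output.

```java
// A minimal sketch: two threads created in one process share the CPU via time slices,
// so their output interleaves rather than appearing strictly one after the other.
public class TimeSliceDemo {
    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                // Each thread has its own stack and program counter,
                // but both see the same shared process memory.
                System.out.println(Thread.currentThread().getName() + " -> " + i);
            }
        };
        new Thread(task, "worker-A").start();
        new Thread(task, "worker-B").start();
    }
}
```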

Keep in mind that a Thread in the JVM does not operate the CPU directly; the JVM relies on the underlying operating system, which brings us to the concept of thread types.

### Operating system space

  • Kernel space
    • The core of the system and its low-level processes
  • User space
    • The JVM
    • Applications such as Eclipse
    • Video players

Thread types

  • User-level threads (User-Level Thread, ULT)
  • Kernel-level threads (Kernel-Level Thread, KLT)

Thread levels

CPU privilege levels

Intel CPUs divide privileges into four levels: RING0, RING1, RING2, and RING3. Windows uses only two of them, RING0 and RING3: RING0 is reserved for the operating system, while RING3 is available to anyone. If an application attempts to execute a RING0 instruction, Windows displays an "Illegal Instruction" error message. In user space, the JVM creates ULT-level threads, which only have RING3 privileges.

A ULT cannot invoke RING0-level operations. Why is the division made this way?

For security reasons. If any ULT could operate on the CPU with RING0 privileges, a JVM thread could arbitrarily tamper with the instructions and data of other processes, which would lead to security problems. Without this restriction, instructions inside the kernel could be modified freely and viruses could be implanted.

### Thread scheduling

If the JVM needs a kernel-level thread, how does it create one?

A KLT-level thread can be created by calling the system call interface provided by kernel space (through JNI).

After a KLT-level thread is created, it must be allocated time slices before it can use the CPU.

User-level threads

User-level threads are implemented in the user program without kernel support; they do not depend on the operating system kernel. The application process uses a thread library to create, synchronize, schedule, and manage its threads. Because user threads are created and managed by the application process itself, independently of the kernel, no user-mode/kernel-mode switch (context switch) is needed and they are fast. However, the operating system kernel is unaware of the multiple threads, so if one thread blocks, the whole process (including all of its threads) blocks. And because the processor allocates time slices to the process as a whole, each individual thread gets relatively less execution time.

Kernel-level threads

All thread management operations are done by the operating system kernel. The kernel keeps the state and context information of each thread; when a thread blocks in a system call, the kernel can schedule another thread of the same process. On a multiprocessor system, the kernel can assign multiple threads of the same process to run on multiple processors, improving the degree of parallelism of the process. Because creation, scheduling, and management are done by the kernel, these operations are much slower than for user-level threads, but still faster than creating and managing processes. Most mainstream operating systems, such as Windows and Linux, support kernel-level threads.

The distinction between the two can be summarized as follows.

Looking at KLTs: every such thread in a process is backed by the kernel, which maintains a thread table with an entry for each one. A KLT can be understood as a small lightweight process; it corresponds to a specific task in user space, but it can also run with RING0 CPU privileges.

At which level are Java threads created?

  • Up to JDK 1.2: a ULT was created
  • After JDK 1.2: a KLT is created
private native void start0();
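
For context, the native declaration above is taken from java.lang.Thread. Below is an abridged sketch of how start() hands work off to that native method (based on the OpenJDK 8 sources; details differ between JDK versions, and this is an excerpt rather than standalone runnable code):

```java
// Abridged from java.lang.Thread (OpenJDK 8); details differ between JDK versions.
public synchronized void start() {
    if (threadStatus != 0)                       // a thread may only be started once
        throw new IllegalThreadStateException();
    group.add(this);                             // register with its ThreadGroup
    boolean started = false;
    try {
        start0();                                // native call: the JVM asks the OS for a kernel-level thread
        started = true;
    } finally {
        try {
            if (!started) {
                group.threadStartFailed(this);   // roll back the group registration on failure
            }
        } catch (Throwable ignore) {
        }
    }
}

private native void start0();                    // implemented inside the JVM; creates the native (kernel) thread
```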

The relationship between Java threads and kernel threads

After the JVM creates a thread, it goes through the thread library to have a kernel thread created in kernel space and recorded in the kernel's thread table, establishing a one-to-one mapping between the Java thread and the kernel thread.

Creating a thread in Java

  1. new java.lang.Thread().start()
  2. Using JNI to attach a native thread to the JVM

With new java.lang.Thread().start(), the thread is only really created in the JVM when start() is called. The main steps of its life cycle are:

  1. Create the corresponding JavaThread instance
  2. Create the corresponding OSThread instance
  3. Create the actual native thread in the underlying operating system
  4. The JVM prepares the related state, such as allocating ThreadLocal memory
  5. The underlying native thread starts running and calls the run() method of the java.lang.Thread object
  6. When the run() method of the java.lang.Thread object returns, or terminates by throwing an exception, the native thread is terminated
  7. The JVM releases the thread's resources and clears the corresponding JavaThread and OSThread
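
From the application's point of view, this whole sequence is triggered by just a few lines of Java. A minimal, illustrative sketch (the class and thread names are made up): one thread's run() returns normally, the other terminates with an exception, and in both cases the underlying native thread ends.

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread ok = new Thread(() -> System.out.println("run() returns normally"), "ok-thread");

        Thread failing = new Thread(() -> {
            throw new IllegalStateException("run() terminates with an exception");
        }, "failing-thread");
        // The uncaught exception ends the thread; the handler only reports it.
        failing.setUncaughtExceptionHandler((t, e) ->
                System.out.println(t.getName() + " died with: " + e.getMessage()));

        ok.start();        // start() triggers the JavaThread/OSThread/native-thread creation described above
        failing.start();

        ok.join();         // wait for both threads to terminate; after that the JVM
        failing.join();    // has released the corresponding JavaThread and OSThread resources
        System.out.println("ok: " + ok.getState() + ", failing: " + failing.getState());  // both TERMINATED
    }
}
```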

The main steps when a native thread is attached to the JVM via JNI:

  1. The native application connects to the running JVM instance through the JNI call AttachCurrentThread
  2. The JVM creates the corresponding JavaThread and OSThread objects
  3. The corresponding java.lang.Thread object is created
  4. Once the java.lang.Thread object exists, JNI can call into Java code
  5. When the native thread calls DetachCurrentThread through JNI, it disconnects from the JVM instance
  6. The JVM clears the corresponding JavaThread, OSThread, and java.lang.Thread objects

Java thread life cycle

[Figure: Java thread life-cycle diagram]
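
The diagram is not reproduced here, but the states it shows can be observed directly through java.lang.Thread.State. A minimal, illustrative sketch:

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);                              // TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {
            }
        });

        System.out.println("after new:      " + t.getState());  // NEW
        t.start();
        System.out.println("after start():  " + t.getState());  // typically RUNNABLE
        Thread.sleep(50);
        System.out.println("while sleeping: " + t.getState());  // TIMED_WAITING
        t.join();
        System.out.println("after join():   " + t.getState());  // TERMINATED
    }
}
```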

Why use concurrency? What problems does it bring?

Why use concurrency

The essence of concurrent programming is making use of multithreading. With modern multi-core CPUs, concurrent programming has become a trend: it lets you push the computing power of a multi-core CPU to its limit and improve performance. In addition, when facing complex business models, a parallel program often matches the business requirements better than a serial one, and concurrent programming fits this kind of business decomposition well.

Even a single-core processor supports multithreaded code; the CPU achieves this by allocating a time slice to each thread. A time slice is the amount of time the CPU gives a thread; because each slice is very short (typically tens of milliseconds), the CPU switches between threads constantly, which makes it feel as if multiple threads are executing simultaneously.

Concurrency is not the same as parallelism. Concurrency means tasks alternate; parallelism means "at the same time" in the true sense. With only one CPU, using multiple threads cannot give real parallelism: the system can only switch tasks by alternating time slices, which is concurrent execution. True parallelism can only happen on a system with multiple CPUs.
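
A small, illustrative sketch of the distinction (the task count and pool size are arbitrary): true parallelism is bounded by the number of processors the JVM can see, while concurrency only requires that tasks be interleaved.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyVsParallelism {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Processors visible to the JVM: " + cores);

        // Eight tasks on a pool of 'cores' threads: at most 'cores' of them can run in
        // parallel at any instant; the rest are merely concurrent, i.e. time-sliced.
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```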

Benefits of concurrency

  1. It makes full use of the computing power of multi-core CPUs;
  2. It facilitates splitting up business logic and improves application performance;

Problems introduced by concurrency

  • In highly concurrent scenarios, frequent context switching
  • Thread-safety issues in critical sections, which can easily lead to deadlock; a deadlock can make the whole system unavailable (see the sketch after this list)
  • Others
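
As a concrete illustration of the deadlock risk mentioned above, here is a minimal sketch (the lock objects and thread names are made up): each thread holds one lock while waiting forever for the other.

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LOCK_A) {
                sleep(100);                      // give the other thread time to grab LOCK_B
                synchronized (LOCK_B) {          // blocks forever: LOCK_B is held by thread-2
                    System.out.println("thread-1 acquired both locks");
                }
            }
        }, "thread-1").start();

        new Thread(() -> {
            synchronized (LOCK_B) {
                sleep(100);                      // give the other thread time to grab LOCK_A
                synchronized (LOCK_A) {          // blocks forever: LOCK_A is held by thread-1
                    System.out.println("thread-2 acquired both locks");
                }
            }
        }, "thread-2").start();
    }

    private static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
    }
}
```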

The CPU uses a time-slice allocation algorithm to execute tasks in turn: the current task runs for one time slice, then the CPU switches to the next task. Before switching, the CPU saves the state of the current task so that it can be restored the next time the CPU switches back to it. This process of saving a task's state and later reloading it is a context switch.
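
Context switching is not free. A rough, illustrative way to feel the cost is to run the same trivial work serially and then split across two threads (the loop count is arbitrary and the numbers vary by machine); for small amounts of work, thread creation, scheduling, and context switches can outweigh any gain.

```java
public class ContextSwitchCost {
    private static final long COUNT = 100_000_000L;

    public static void main(String[] args) throws InterruptedException {
        serial();
        concurrent();
    }

    private static void serial() {
        long start = System.currentTimeMillis();
        long a = 0, b = 0;
        for (long i = 0; i < COUNT; i++) { a += 5; b--; }
        System.out.println("serial:     " + (System.currentTimeMillis() - start) + " ms");
    }

    private static void concurrent() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread t = new Thread(() -> {
            long a = 0;
            for (long i = 0; i < COUNT; i++) { a += 5; }
        });
        t.start();
        long b = 0;
        for (long i = 0; i < COUNT; i++) { b--; }
        t.join();                                // wait for the second thread before measuring
        System.out.println("concurrent: " + (System.currentTimeMillis() - start) + " ms");
    }
}
```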

Thread context switching procedure:

[Figure: context switching between threads]

The Linux kernel reserves a few page frames for its code and data structures; these pages are never swapped out to disk. Linear addresses from 0x00000000 to 0xC0000000 (PAGE_OFFSET), i.e. user space, can be referenced by both user code and kernel code. Linear addresses from 0xC0000000 (PAGE_OFFSET) to 0xFFFFFFFF, i.e. kernel space, can only be accessed by kernel code. The kernel's code and data structures must reside in this 1 GB of address space, although the bigger consumer of this space is the virtual mapping of physical addresses.

This means that of the 4 GB address space, only 3 GB is available to user applications. A process can run either in user mode or in kernel mode: user programs run in user mode, while system calls run in kernel mode. The two modes use different stacks: user mode uses an ordinary user stack, while kernel mode uses a fixed-size stack (usually one memory page).

Each process has its own 3 GB of user space, and all processes share the same 1 GB of kernel space. When a process enters kernel space from user space, it is no longer working in its own process space. This is why we often say that a thread context switch involves a switch from user mode to kernel mode.

Using the figure above as an example, let's walk through a CPU context switch.

Step 1

Thread A is given time slice A and executes its business logic. When the time slice expires, the CPU switches to time slice B and executes thread B.

At this point thread A's intermediate state needs to be saved temporarily so that it can continue later.

The results held in the CPU registers are written back to main memory through the cache and the bus (under the cache coherence protocol).

Some of this intermediate state is stored in the kernel-space part of main memory, in what is called the TSS (Task State Segment), which holds the program's instructions, the program pointer, intermediate data, and so on.

Step 2

After time slice B has been executed, the CPU eventually switches back to thread A's time slice A.

The CPU then reloads from main memory the program instructions, program pointer, and intermediate data saved at the end of the previous time slice.

Thread A then resumes executing its logic.

### Summary

This article covered what threads are, concurrency, and context switching. I hope it helps you.


Origin: juejin.im/post/5dcbce9e6fb9a0601f3f2f9c