The ins and outs of threads: do you really understand them?

The process had been troubled lately, going around with a gloomy face all day and seeming absent-minded whenever it accessed memory.

Memory is perceptive and asked straight to the point: "Process, what's been bothering you lately? You've been in a low mood for a while. If something is wrong, just say so, and I'll get everyone together to help you find a solution."

The process sighed: "Haven't you heard? Single-core CPU frequency has hit a bottleneck. Humans are using multiple cores to make up for the stalled performance of a single core, and our CPU has been upgraded to quad-core too."

"Isn't that a good thing? Now up to four processes can run in parallel, and efficiency is much higher than before, isn't it?" Memory asked, puzzled.

"Good, sure. But every time I run on the CPU I can't help thinking: if the single-core frequency doesn't go up, won't my total running time stay the same? Applications will only get bigger and consume more and more CPU, like those big game processes that need a huge amount of computation in a short time. What happens when a single core can't keep up? And speaking for myself, I'd also like to finish my run earlier and rest earlier."

tobe's note: obviously, parallelism does not shrink the total amount of work a single process has to do; the emphasis here is on the wall-clock time the process occupies the CPU.

Memory nodded in agreement: "I hadn't thought of that. Multi-core processors really aren't friendly to a single process. We'll have to find a way to let you use several cores at the same time. I can't think of a good approach right now, though; let's discuss it with everyone."

At the discussion meeting, memory explained to everyone the problem the process was facing.

"How can a single process be parallelized?" The process scheduler was the first to object: "I can't schedule one process onto four cores at once. That's not just meaningless, it would get in the way of other processes."

The operating system, being well-informed, said: "Running one process on several cores at the same time is certainly impossible. The way I see it, our real goal is to let multiple cores help one process run without conflicts. So we have to 'split' the process and spread the pieces across several cores."

As it spoke, the operating system drew a picture:

[Figure: process split]

"Look, if the two functions fun1 and fun2 don't depend on each other, we can let two cores execute them at the same time. Isn't that parallelism?"

"You mean splitting one process into several processes?"

The operating system shook its head: "Not into multiple processes; the cost of process switching is too high. Besides, these split-off functions share the same address space, so they can share data naturally. If we split them into separate processes, we'd also have to worry about inter-process communication, which is a lot of trouble. But to distinguish them from processes, let's call them 'threads'."

The process was startled. Split itself into threads? Wouldn't that be the end of it? It hurriedly asked: "Then won't there be no room left for me?"

The process scheduler panicked too: "If there are no more processes, will I be forced into retirement?"

The operating system quickly explained: "You've misunderstood. What I want to split up is the process's execution flow. Doesn't a process consist of resource ownership plus execution flow? Resource ownership stays with the process; only the execution flow gets divided into several threads, like this:"

[Figure: execution flow]

tobe's note: in the process model, a process controls resources such as memory, I/O channels, I/O devices, and files; this is called "resource ownership". The "execution flow" is the process's path of execution on the CPU (intuitively, the sequence of statements in a high-level language).

The process suddenly understood: "So I'm still the owner of the resources, and those threads essentially work on my behalf?"

"Exactly. And from that perspective, as you are now, you are simply a single-threaded process."

Having listened for a while, memory asked: "When a process is created, I have to store its PCB. So to create a thread, do I have to store a TCB (Thread Control Block) as well?"

"Of course. The information needed for a thread switch has to live in the TCB. But don't worry: a TCB is much smaller than a PCB, so switching threads is much faster than switching processes."
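The exact contents of a TCB are OS-specific; the following is only a hypothetical, much-simplified sketch to show why it is smaller than a PCB:

```c
struct pcb;  /* the owning process's PCB, defined elsewhere */

/* A hypothetical, minimal thread control block (TCB). Real kernels
 * store more, but note what is ABSENT compared with a PCB: no page
 * tables, no open-file table, no signal handlers. Those stay with the
 * owning process, which is why a thread switch is cheaper than a
 * process switch. */
struct tcb {
    int            tid;        /* thread id */
    int            state;      /* ready / running / blocked */
    void          *stack_ptr;  /* each thread has its own stack */
    unsigned long  regs[16];   /* saved CPU registers, incl. program counter */
    struct pcb    *owner;      /* back-pointer to the owning process */
};
```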

[Figure: multithreaded process model]

After hearing all this, everyone felt that the "thread" model solved the problem perfectly, and said: "Why don't we add a thread model to the operating system right now and fix the process's problem as soon as possible?"

But the operating system looked embarrassed: "The thread model is still just a hypothesis of ours. If we add it to the kernel rashly, things may go wrong and the whole system could crash. How about this: let's first build a thread library and rely on a user-level component, a thread scheduler, to manage these threads."

The process asked, puzzled: "But in that case I'm still assigned to a single core, so even with multiple threads I can only run on one core. Besides, if one of those threads blocks, to you it looks like the whole process is blocked, and the other threads can't get CPU time even when they are ready."

The operating system thought it over: "True, user-level threads do have those two shortcomings. But compared with implementing threads in the kernel, they also have advantages: a thread switch doesn't require a mode transition (from user mode to kernel mode), so the overhead is small; and the thread library can offer multiple scheduling algorithms, so the scheduling policy can be tailored to the application."

tobe's note: one remedy for the blocking problem is called jacketing, which converts a blocking system call into a non-blocking one. Instead of calling the system-level I/O routine directly, a thread calls an application-level I/O jacket routine. The jacket checks whether the I/O device is busy; if it is, the I/O is postponed and another thread is scheduled instead, so waiting on the device doesn't block the whole process.


User-level threads were quickly put to use, and the pthread (POSIX threads) library on Linux was a great success. Later, the operating system made a major decision: to support kernel-level threads.

Kernel-level threads solve the parallelism problem. Moreover, since the kernel can now see the threads, when one thread blocks, the other threads of the same process can still run.

[Figure: user-level threads vs. kernel-level threads]

With the parallelism problem solved, the process said it was very happy.


I hope you got something out of this article.

Thanks for reading; see you next time!

Disclaimer: original article; reproduction without permission is prohibited.
