C++: Processes and Threads

Reference links
https://www.cnblogs.com/eilearn/p/9414569.html
https://blog.csdn.net/zhouchunyue/article/details/79271869

Process

Narrow definition: a process is an instance of a program in execution.

Broad definition: a process is a running activity of a program with independent function over a certain data set. It is the basic unit of dynamic execution in the operating system. In traditional operating systems, the process is both the basic unit of resource allocation and the basic unit of execution.

Simply put, the concept of a process has two main points:

  • First, a process is an entity. Each process has its own address space, which in general includes a text region, a data region, and a stack region. The text region stores the code executed by the processor; the data region stores variables and memory allocated dynamically during execution; the stack region stores instructions and local variables for active procedure calls.
  • Second, a process is a "program in execution". A program by itself is an inanimate entity; only when the processor breathes life into it does it become an active entity, which we call a process.

Process states: ready, running, and blocked. Ready means the process has acquired all resources except the CPU and can execute as soon as the processor is allocated to it; ready processes wait in a queue, whose ordering rules will not be repeated here. Running means the process has been allocated the processor and is executing. Blocked means the process cannot proceed until some condition is met, for example while waiting for an I/O operation to complete.

Program

Speaking of processes, one inevitably has to talk about programs.
First, the definition: a program is an ordered collection of instructions and data. By itself it has no operational meaning; it is a static concept. A process, by contrast, is an execution activity on the processor, a dynamic concept. This is not hard to understand: a process contains a program, and the execution of a process cannot be separated from its program. The text region of a process is exactly the program's code.
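To make the static/dynamic distinction concrete, here is a minimal sketch assuming a POSIX system with `fork` and `waitpid`: one program text gives rise to two processes, each a separate "program in execution" with its own address space.

```cpp
#include <sys/wait.h>
#include <unistd.h>

// Creates a child process and returns the child's exit code,
// or -1 on error. Parent and child run the same program text,
// but are two independent processes.
int spawn_and_wait() {
    pid_t pid = fork();              // after this call, two processes exist
    if (pid < 0) return -1;
    if (pid == 0) {
        // Child: its own copy of the address space; exits with code 42.
        _exit(42);
    }
    int status = 0;
    waitpid(pid, &status, 0);        // parent blocks until the child terminates
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The child's exit is invisible to the parent's variables: the two address spaces are independent, which is exactly the isolation discussed below.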

Thread

A process usually contains several threads, and always at least one; a process with no thread would be meaningless.
Threads can use the resources owned by their process. In operating systems that support threads, the process is the basic unit of resource allocation, while the thread is the basic unit of scheduling and dispatching. Because a thread is smaller than a process and owns essentially no system resources of its own, the cost of scheduling it is much lower, which more efficiently improves the degree of concurrency among programs in the system.
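A minimal sketch with `std::thread` (C++11 or later): several threads run inside one process and share its memory, so they can all write into the same vector. Each thread writes only its own slot, so no synchronization is needed here.

```cpp
#include <thread>
#include <vector>

// Starts n threads inside the current process; each writes the
// square of its index into the shared results vector.
std::vector<int> run_threads(int n) {
    std::vector<int> results(static_cast<size_t>(n), 0);
    std::vector<std::thread> pool;
    for (int i = 0; i < n; ++i)
        pool.emplace_back([&results, i] { results[static_cast<size_t>(i)] = i * i; });
    for (auto& t : pool) t.join();   // the process waits for all its threads
    return results;
}
```

Note the contrast with the `fork` sketch above: no copying of address spaces takes place, because all threads live in one process.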

Multithreading

Within one program, independently running fragments are called "threads", and programming with them is called "multithreading". Multithreading is about completing multiple tasks at the same time: the goal is not to speed up any single operation, but to improve resource utilization and thereby the efficiency of the system as a whole. Threads are used whenever multiple tasks need to be completed at the same time.

The simplest analogy for multithreading: the process is a train and each thread is a carriage. A carriage cannot run apart from the train, and a train must have at least one carriage. Multithreading exists to improve efficiency.

The difference between process and thread

The main difference between processes and threads is that they are different ways for the operating system to manage resources.
A process has an independent address space; in protected mode, one process crashing does not affect other processes. A thread is just one execution path within a process.
Threads have their own stacks and local variables, but there is no separate address space between threads, so the death of one thread can mean the death of the entire process. A multi-process program is therefore more robust than a multi-threaded one, but switching between processes costs more resources and is less efficient. However, for concurrent operations that must execute simultaneously and share certain variables, only threads can be used, not processes.

  1. In short, a program has at least one process, and a process has at least one thread.
  2. Threads are a finer-grained unit of division than processes, which gives multithreaded programs higher concurrency.
  3. A process has an independent memory unit during execution, while the threads of a process share its memory, which greatly improves the efficiency of the program.
  4. Threads still differ from processes during execution: each independent thread has its own program entry, sequential execution, and exit, but a thread cannot execute independently; it must live inside an application, and the application controls the execution of its threads.
  5. Logically, multithreading means that within one application several parts can execute at the same time. The operating system, however, does not treat those threads as multiple independent applications for process scheduling, management, and resource allocation. This is an important difference between processes and threads.

Process and thread selection

In practice we use threads more often when writing code: tasks that are created and destroyed frequently, heavy computation over large amounts of data, or a display interface that must respond to messages promptly all favor multithreading, since these operations consume a lot of CPU; common examples are algorithmic processing and image processing. Operations that may block but can run concurrently, such as socket or disk I/O, are also good candidates for threads.
Processes, on the other hand, are generally more stable: memory is isolated, a single process failing does not bring down the entire application, and debugging is easier. Many servers therefore use a multi-process model by default.

Communication between threads

One way is to communicate through global variables; another is to pass information through a custom message mechanism.
In fact, because all threads share the process's resources, they do not need the data-exchange mechanisms used in inter-process communication; the main purpose of thread "communication" is synchronization. Mechanisms such as mutexes, critical sections, event objects (e.g. MFC's CEvent), and semaphores are used to communicate between and synchronize threads.
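A minimal sketch of this kind of synchronization using `std::mutex` and `std::condition_variable`, the standard-C++ counterpart of the event objects mentioned above: one thread waits until another signals that shared data is ready.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// A producer thread writes a value and raises a flag; the calling
// thread blocks on a condition variable until notified.
int wait_for_value() {
    std::mutex m;
    std::condition_variable cv;
    int value = 0;
    bool ready = false;

    std::thread producer([&] {
        {
            std::lock_guard<std::mutex> lk(m);
            value = 7;               // shared data, written under the lock
            ready = true;
        }
        cv.notify_one();             // signal the waiting thread
    });

    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return ready; });  // sleeps until ready becomes true
    producer.join();
    return value;
}
```

The predicate form of `wait` guards against spurious wakeups and against the notification arriving before the wait begins.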

Communication between processes

Inter-process communication methods include pipes (PIPE), semaphores, message queues, shared memory, and sockets. By the amount of information transferred, they can be divided into low-level and high-level communication. As for choosing among them: if only a small amount of information is transferred, or certain behavior merely needs to be triggered, the signal mechanism is usually sufficient. If a large amount of information must be transferred between processes, or data must be exchanged, then mechanisms such as shared memory and sockets are required.

Terminology:

  • A pipe is in fact a special file that exists in memory; it does not belong to the file system and has its own data structure. By scope of use, pipes are divided into anonymous (unnamed) pipes and named pipes.
  • Shared memory works by mapping a shared buffer directly into the virtual address space of each process. Information is exchanged through the buffer directly, without copying, so it is fast and can carry a large amount of data.
  • A message queue is a mechanism provided through system calls that synchronizes the sending and receiving of messages and lets any process communicate by sharing the queue. However, messages are copied, which consumes CPU, so it is not suitable for large volumes of information or very frequent operations.
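A minimal anonymous-pipe sketch, assuming a POSIX system: the child process writes a message into the pipe and the parent reads it back, a one-way byte stream held in memory as described above.

```cpp
#include <cstring>
#include <string>
#include <sys/wait.h>
#include <unistd.h>

// Child writes "hello" into the pipe; parent reads it and returns it.
std::string pipe_roundtrip() {
    int fds[2];                          // fds[0]: read end, fds[1]: write end
    if (pipe(fds) != 0) return "";
    pid_t pid = fork();
    if (pid == 0) {                      // child process
        close(fds[0]);                   // child only writes
        const char* msg = "hello";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                       // parent only reads
    char buf[16] = {0};
    ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
    close(fds[0]);
    waitpid(pid, nullptr, 0);            // reap the child
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

Unlike shared memory, the data is copied through the kernel, which is why pipes suit small messages rather than bulk transfer.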

Thread synchronization and thread asynchrony

Synchronous means that one thread waits for another thread to finish executing before it starts.
Asynchronous means that a thread executes without the next thread having to wait for it to finish first.

Generally, multiple unrelated threads started by a process run asynchronously with respect to each other. For example, a game has both graphics and background music: the graphics respond to the player while the system loops the music in the background. The two threads have nothing to do with each other; this is thread asynchrony. Synchronization, by contrast, refers to multiple threads operating on the same data at the same time; the data must then be protected, and that protection is thread synchronization.
When to synchronize: whenever multiple threads access the same piece of data at the same time, synchronization must be used, otherwise unsafe situations can occur. One case needs no explicit synchronization technique: atomic operations, where the operating system guarantees at the lowest level that the operation either completes entirely or does not happen at all.
When asynchrony is fine: when only one thread accesses the data at a time. For example, the observer pattern has no shared region: after the subject changes it notifies the observers to update and keeps doing its own work, without waiting for the observers to finish updating.
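A minimal sketch of the atomic-operation case with `std::atomic`: several threads increment one shared counter concurrently. Each `fetch_add` is indivisible, so the result is exact without any lock; with a plain `int` the same loop would be a data race.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Each of `threads` threads increments the shared counter
// `per_thread` times; every increment is an atomic operation.
int atomic_count(int threads, int per_thread) {
    std::atomic<int> counter{0};
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            for (int i = 0; i < per_thread; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : pool) th.join();
    return counter.load();
}
```

`memory_order_relaxed` is sufficient here because only the final count matters, not the ordering between increments.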

Ways to achieve thread synchronization and mutual exclusion

For thread synchronization there are critical sections, mutexes, semaphores, and events.

A critical section is suitable when multiple threads within one process access a shared region or code segment.

A mutex can be named, which means it can be used when threads in different processes access a common resource. Therefore, when use is confined to a single process, a critical section is the better choice: it is faster and uses fewer resources.

A semaphore differs from a critical section or mutex in that it allows multiple threads to access a common resource at the same time. It corresponds to the operating system's P/V operations: a maximum thread count is set in advance, and once the number of threads inside reaches that maximum, no further thread may enter; when a thread releases the resource, another thread can come in and access it.
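Standard C++ gained `std::counting_semaphore` only in C++20, so here is a minimal counting semaphore sketched on top of a mutex and condition variable, illustrating the P (wait) and V (signal) operations described above.

```cpp
#include <condition_variable>
#include <mutex>

// A counting semaphore: P blocks while the count is zero,
// V increments the count and wakes one waiter.
class Semaphore {
public:
    explicit Semaphore(int count) : count_(count) {}

    void P() {                                   // acquire
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ > 0; });
        --count_;
    }

    void V() {                                   // release
        {
            std::lock_guard<std::mutex> lk(m_);
            ++count_;
        }
        cv_.notify_one();
    }

    int count() {
        std::lock_guard<std::mutex> lk(m_);
        return count_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
};
```

Initializing the count to N allows up to N threads inside at once; a count of 1 makes it behave like a mutex.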

An event keeps threads synchronized by means of notification operations.

Note: mutexes, events, and semaphores are all kernel objects and can be used across processes.

Deadlock

Concept: a permanent blockage caused by processes communicating with each other or competing for system resources; without outside intervention, they remain deadlocked forever.

Causes:

  1. Insufficient system resources;
  2. Processes advancing in an unfortunate order or at unfortunate speeds;
  3. Improper resource allocation.

Four necessary conditions for deadlock:

  1. Mutual exclusion: a resource can be used by only one process at a time.
  2. Hold and wait: a process blocked while requesting other resources keeps holding the resources it has already acquired.
  3. No preemption: resources a process has acquired cannot be forcibly taken away before it is finished with them.
  4. Circular wait: several processes form a circular chain, each waiting for a resource held by the next.

Methods to prevent and avoid deadlock: in system design and process scheduling, ensure the four necessary conditions cannot all hold, choose a reasonable resource-allocation algorithm, avoid letting a process occupy system resources indefinitely, and plan resource allocation sensibly.
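A minimal C++ sketch of breaking the circular-wait condition: two threads need the same two mutexes but name them in opposite orders, which with naive one-at-a-time locking could deadlock. `std::scoped_lock` (C++17) acquires both locks together using a deadlock-avoidance algorithm.

```cpp
#include <functional>
#include <mutex>
#include <thread>

// Two threads repeatedly take both locks in opposite orders.
// scoped_lock acquires them atomically, so no circular wait can form.
int transfer_both_ways() {
    std::mutex a, b;
    int balance = 0;
    auto worker = [&balance](std::mutex& first, std::mutex& second, int delta) {
        for (int i = 0; i < 1000; ++i) {
            std::scoped_lock lk(first, second);  // deadlock-free in any order
            balance += delta;
        }
    };
    std::thread t1(worker, std::ref(a), std::ref(b), 1);
    std::thread t2(worker, std::ref(b), std::ref(a), 1);
    t1.join();
    t2.join();
    return balance;
}
```

The same effect can be had by fixing a global lock order by hand; `scoped_lock` simply automates that discipline.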

Origin: blog.csdn.net/qq_24649627/article/details/112237395