Processes, threads, synchronization, and mutual exclusion are easy to confuse; this post sorts them out.

Table of Contents

  • Background knowledge
  • Process mutual exclusion
  • Process synchronization
  • Process communication
  • Monitors
  • The difference between processes and threads
  • Thread implementation

Background knowledge

(Note: feel free to skip this section and refer back to it as needed while reading below.)

1. The difference between parallel and concurrent

Concurrency: two or more events occur within the same time interval (their executions overlap, possibly interleaved on a single processor).
Parallelism: two or more events occur at the same instant (they execute truly simultaneously, which requires multiple processors or cores).

2. User mode and kernel mode

User mode (also called user state or normal state): runs user programs and the non-core parts of the operating system; only non-privileged instructions may be executed.
Kernel mode (also called supervisor state or system state): runs the operating system kernel; both non-privileged and privileged instructions may be executed.

3. Primitives

A primitive is a sequence of operations that performs a specific function and cannot be divided or interrupted; consequently, primitives do not execute concurrently with one another.
Process control primitives: process creation, process termination, process blocking (waiting), and process wakeup.

4. The difference between multi-core CPU and multi-CPU

A multi-core CPU and multiple single-core CPUs differ mainly in performance and cost. A multi-core CPU gives the best performance but costs the most; multiple discrete CPUs are cheap but perform relatively worse, since communication between chips is slower than communication between cores on one chip. Current systems are mostly multi-core.

5. Single process multithreading and multiprocess single thread

Single-process multithreading is generally faster than multi-process single-threading. With multiple single-threaded processes, the CPU switches from one process to another; with a single multithreaded process, the CPU switches only within one process. Each process has its own context (address space, registers, stack) that must be saved and restored, so switching between processes costs more than switching between threads.
Multi-process single-threading has its own advantage: isolation. When a worker process hits an internal error, a master process can restart it without affecting the other processes (including the master); Nginx works this way, and some Java deployments do too. With single-process multithreading, a failure in one thread can bring down the whole process, and correct multithreaded code is also considerably harder to write.

6. What is the operating system kernel

The operating system kernel refers to the core part of most operating systems. It consists of those parts of the operating system that are used to manage memory, files, peripherals, and system resources. The operating system kernel usually runs processes and provides communication between processes.

7. Critical resources and critical sections

  1. Critical resources
    Critical resources are shared resources that only one process may use at a time; processes must access them in a mutually exclusive way, and shared resources accessed this way are called critical resources. Hardware critical resources include printers and tape drives; software critical resources include message queues, shared variables, arrays, and buffers.

  2. Critical section
    The code in each process that accesses a critical resource is called the critical section (CriticalSection). Only one process is allowed to enter the critical section at a time; once a process has entered, no other process may enter. Whether the critical resource is hardware or software, multiple processes must access it mutually exclusively. The critical sections of different processes that involve the same critical resource are called related critical sections. A critical section should not run for long: as long as the thread inside it has not left, every other thread trying to enter is suspended into the waiting state, which hurts the program's performance.

  3. The critical section (in the Windows API sense) is a lightweight synchronization mechanism. Compared with kernel synchronization objects such as mutexes and events, a critical section is a user-mode object, so it can only make threads within the same process mutually exclusive. Because no switch between user mode and kernel mode is needed, it is much more efficient than a mutex. Although critical-section synchronization is very fast, it can only synchronize threads within one process, not threads across multiple processes.
    Source: https://blog.csdn.net/weixin_41413441/article/details/80548683
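As a concrete sketch of the idea above (using Python's standard threading module; the counter value and thread counts are invented for illustration): the shared counter is a software critical resource, and the `with lock:` block is each thread's critical section.

```python
import threading

counter = 0                     # shared software critical resource
lock = threading.Lock()         # guards the critical resource

def increment(n):
    global counter
    for _ in range(n):
        with lock:              # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: with the lock, no increment is lost
```

Without the lock, two threads could read the same old value of `counter` and both write back `old + 1`, losing an update; mutual exclusion on the critical section prevents exactly that.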

Mutual exclusion of processes and threads:

  1. Mutual exclusion arises from competition for resources.
  2. Resource competition brings two problems: deadlock and starvation.
  3. Mutual exclusion between processes requires that only one visitor access a resource at a time; access is unique and exclusive. But mutual exclusion alone cannot constrain the order in which visitors access the resource, that is, access is unordered.
  4. The mutex is the tool for mutual exclusion between processes. A mutex must be locked and unlocked by the same process (or thread); it is designed to serialize access to a single shared resource.
  5. Ways to implement mutual exclusion: (1) disabling interrupts; (2) locks; (3) hardware instructions (Test-and-Set, Swap); (4) semaphores (integer semaphores, record semaphores, AND semaphores, semaphore sets).
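A minimal sketch of method (4), mutual exclusion via a binary semaphore (Python's `threading.Semaphore` standing in for the P and V operations; the shared list and labels are illustrative assumptions):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: initial value 1 means "free"
shared = []                      # the critical resource

def append_twice(label):
    for _ in range(2):
        mutex.acquire()          # P operation: wait until the resource is free
        try:
            shared.append(label) # critical section
        finally:
            mutex.release()      # V operation: release the resource

threads = [threading.Thread(target=append_twice, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 0, 1, 1, 2, 2]: all six appends happened safely
```

Note that, as point 3 says, mutual exclusion guarantees exclusiveness but not order: `shared` may end up in any interleaving, which is why the example sorts it before printing.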

Synchronization of processes and threads:

  1. Synchronization arises from cooperation.
  2. Some processes must divide work and cooperate to complete a common task. Because cooperating processes advance independently at unpredictable speeds, they must coordinate at certain points: when one process reaches a coordination point before receiving the message or signal from its partner, it blocks itself, and it is woken to continue only when the partner sends that coordination signal or message. This relationship, in which cooperating processes wait for each other's messages or signals, is called process synchronization. In most cases synchronization already implies mutual exclusion; in particular, all writes to a shared resource must be mutually exclusive. In a few cases, multiple visitors may be allowed to access the resource at the same time (for example, concurrent readers).
  3. The semaphore is the tool for process synchronization. A semaphore can be released by one process and acquired by another; it is designed to control a resource shared by a limited number of users. What a semaphore guards is not necessarily a concrete resource but can be a logical condition. For example, suppose process B must wait for process A to finish some task before taking its own next step; that task need not be locking a resource, it may be a computation or a data-processing step.
  4. Ways to implement synchronization: (1) semaphores (record semaphores, AND semaphores, semaphore sets); (2) monitors. (Note: both can implement "yield while waiting".)
  5. Principles a process synchronization mechanism should follow: free entry when idle, wait when busy, bounded waiting, and yield while waiting (when a process cannot enter its critical section, it should release the processor immediately rather than fall into a "busy wait").
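Point 3 above (B waits for A's signal before proceeding) can be sketched with a semaphore initialized to 0; threads stand in for processes here, and the worker names are illustrative:

```python
import threading

done = threading.Semaphore(0)    # 0: the coordination signal has not been sent yet
results = []

def worker_a():
    results.append("A finished its task")
    done.release()               # V: send the coordination signal

def worker_b():
    done.acquire()               # P: block until A signals
    results.append("B proceeds after A")

b = threading.Thread(target=worker_b)
a = threading.Thread(target=worker_a)
b.start()                        # B starts first but must wait at the semaphore
a.start()
a.join()
b.join()

print(results)  # ['A finished its task', 'B proceeds after A']
```

Even though B starts first, the semaphore forces A's step to complete before B's, which is exactly the coordination-point behavior described above.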

Process communication

  1. Interaction between concurrent processes must meet two basic requirements: synchronization and communication.
  2. Process synchronization is itself a simple form of process communication: by modifying semaphores, processes can establish contact and coordinate their work. However, semaphores and P/V operations can only pass signals; they cannot carry data. Sometimes the information exchanged between processes is tiny (just a status flag), but often processes need to exchange large amounts of data, for example a batch of records or an entire file, and this requires a dedicated communication mechanism. The exchange of information between processes is called inter-process communication (IPC).
  3. There are three classes of advanced communication: shared-storage systems (sometimes called shared memory areas), message-passing systems (sometimes called message queues), and pipes (sometimes called shared files).
  4. Comparing process synchronization and process communication:
    Synchronization mechanisms: critical sections, mutexes, semaphores.
    Inter-process communication mechanisms: pipes, shared memory, message queues, semaphores, sockets.

Monitors:

  1. Definition: a monitor (English: monitor) is a program structure. A monitor defines a data structure together with the set of operations that concurrent processes may perform on that data structure; these operations can synchronize processes and change the monitor's data.
  2. Why monitors were introduced: the semaphore mechanism solved the problem of describing process synchronization, but large numbers of semaphore operations scattered across processes are hard to manage and can lead to system deadlock. A monitor concentrates all synchronized operations on a critical resource in one place, forming a so-called "secretary process": any process that wants to access the critical resource must go through the secretary, which enforces mutually exclusive use of the resource.
  3. Important property: a monitor is a high-level synchronization primitive, and at any time at most one process can be active inside it. It is a programming-language construct, so the compiler knows monitors are special.
  4. Monitor vs. pipe: a monitor (管程) implements process synchronization; a pipe (管道) implements process communication. The two Chinese terms look similar but name unrelated mechanisms.
  5. Monitor flavors: Hoare monitors and Brinch Hansen monitors (see a Java implementation reference article).
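A rough monitor sketch in Python (a bounded buffer built on `threading.Condition`; Python has no monitor keyword, so the discipline of "data plus its only allowed operations, under one entry lock" is an assumption of this sketch, not a language feature):

```python
import threading

class BoundedBuffer:
    """Monitor: shared data plus the only operations allowed on it,
    with at most one active caller inside at any time."""

    def __init__(self, capacity):
        self.buf = []
        self.capacity = capacity
        self.cond = threading.Condition()   # entry lock + wait/signal queue

    def put(self, item):
        with self.cond:                     # enter the monitor
            while len(self.buf) >= self.capacity:
                self.cond.wait()            # release the monitor and wait for room
            self.buf.append(item)
            self.cond.notify_all()          # wake waiting consumers

    def take(self):
        with self.cond:
            while not self.buf:
                self.cond.wait()            # release the monitor and wait for data
            item = self.buf.pop(0)
            self.cond.notify_all()          # wake waiting producers
            return item

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(target=lambda: [out.append(buf.take()) for _ in range(3)])
consumer.start()
for i in range(3):
    buf.put(i)                              # blocks when the buffer is full
consumer.join()

print(out)  # [0, 1, 2]
```

All synchronization lives inside the class rather than being scattered across callers, which is precisely the management problem monitors were introduced to fix.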

The introduction of threads

  1. Why threads were introduced: a process is the owner of resources, so the system must pay a large space-time overhead (I/O devices, memory space, PCB) when creating, destroying, and switching processes. Threads were introduced to raise the degree of concurrency at a lower cost.

  2. The difference between processes and threads

    Process: a running activity of a program with some independent function, operating on a certain data set. A process is the system's independent unit of resource allocation and scheduling. A process contains at least one thread (the main thread), and may contain additional worker threads.

    Thread: an entity within a process and the basic unit of CPU scheduling and dispatch; it is a unit smaller than a process that can run independently. A thread owns essentially no system resources itself, only what is essential to run (a program counter, a set of registers, and a stack), but it shares all the resources owned by its process with the other threads of the same process.

    Differences:
    (1) The main difference between processes and threads is how the operating system manages their resources. Each process has an independent address space; in protected mode, one process crashing does not affect other processes. Threads have their own stacks and local variables, but threads of the same process have no separate address spaces.
    (2) Monitors achieve process synchronization, not thread synchronization.
    (3) Thread communication is communication between the main thread and worker threads (or among worker threads) within one process. Process communication generally means communication between threads of different processes; because their address spaces differ, the data must be carried across by operating-system mechanisms such as shared files, pipes, or sockets.

Thread implementation

  1. Kernel-supported threads (KST):
    Advantages: the operating system manages the threads directly, and the threads of a single process can run in parallel on multiple cores.
    Disadvantages: thread scheduling and switching are performed in kernel mode, so a user-mode thread must trap into the kernel to switch; compared with switching user-level threads inside a user process, the switch is more expensive.

  2. User-level threads (ULT):
    Advantages: (1) the kernel is completely unaware of user-level threads, so users can create any number of them. (2) Thread switching needs no trap into kernel space, saving the time of switching between modes (user mode and kernel mode). (3) The scheduling algorithm can be chosen per process and does not depend on the OS's low-level scheduler. (4) Multithreading can be implemented even on platforms whose kernel does not support threads.

    Disadvantages: (1) the kernel schedules in units of processes, allocating round-robin time slices per process, while KST schedules in units of threads; under ULT, a process with many threads therefore gets no more CPU than a process with few, so resources are distributed unevenly. (2) When one thread blocks, the other threads of the process are blocked with it; under KST, the other threads of the process can still run. (3) ULT cannot exploit a multiprocessor: the kernel assigns a process only one core at a time, so only one thread of the process can execute, and until that thread gives up the CPU the other threads can only wait.

  3. Combined (hybrid) threads:
    Advantages: KST + ULT. With combined threads, multiple threads of the same process can execute in parallel on multiple processors, and when one thread blocks, the whole process need not block.
    Implementation models:
    (1) Many-to-one model (many user threads, one kernel thread): behaves much like ULT; a user thread is mapped onto a kernel control thread only when it needs to enter the kernel.
    (2) One-to-one model (one user thread, one kernel thread): behaves much like KST; the number of kernel threads must be limited.
    (3) Many-to-many model (many user threads, many kernel threads): lets multiple threads of one process run in parallel on a multiprocessor system, while also keeping thread-management overhead low and efficiency high, like the many-to-one model.


Origin: https://blog.csdn.net/qq_41174940/article/details/105504666