Processes, Threads, and the Operating System

• Process states

(figure: process state transition diagrams, omitted)

The difference between process and thread

1. Fundamental difference: a process is the smallest unit of resource allocation in the operating system; a thread is the smallest unit of CPU scheduling.

2. Containment: a process contains one or more threads; a thread belongs to a process.

3. Overhead: creating, destroying, and switching processes costs far more than the equivalent operations on threads.

4. Resources: each process has its own memory and resources; the threads within a process share them.

5. Mutual influence: a child process cannot affect its parent, but an abnormal thread can affect its whole process; if the main thread misbehaves, it affects the process and its other threads.

6. CPU utilization: processes have lower effective CPU utilization because their context switches are expensive; thread context switches are fast, so utilization is higher.

7. Who manages them: processes are generally managed and scheduled by the operating system; threads are generally managed by the programmer.

Process Definition and Composition

o A classic definition: a process is an instance of an executing program. Every program in the system runs in the context of some process
 Context
• The context is the state the program needs in order to run correctly: the program's code and data stored in memory, its stack, the contents of its registers, the program counter, environment variables, and the set of open file descriptors

Thread

o A thread is a logical flow that runs within the context of a single process
 A program has at least one process, and a process has at least one thread
 Threads share the process's address space and data
o Threads consume far fewer resources than processes, and the CPU spends less creating and switching threads than creating and switching processes
o Communication between threads is convenient: within the same process, threads share global variables, static variables, and other data (see the sketch below)

Process switching overhead: each switch passes from user mode into kernel mode and back to user mode
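
To make "threads share the process's data" concrete, here is a minimal POSIX-threads sketch (the name `worker` is invented for the example): two threads increment one global counter, which both can see because they live in the same address space. The update is left unsynchronized on purpose; that is exactly the race the thread-synchronization section below fixes.

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;  /* lives in the process's data segment, visible to all threads */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;             /* unsynchronized: a deliberate data race for illustration */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Often prints less than 200000 because the two threads race on counter */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Compile with `gcc -pthread`; the final count varies from run to run, which is the shared-data hazard in action.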

• The difference between multithreading and multiprocessing

o Data sharing and synchronization
 Multi-process
• Data is separate; sharing is complicated and requires IPC, but synchronization is simple
 Multi-threading
• Multi-threading shares process data; sharing is simple, synchronization is complex
o Memory usage
 Multi-process
• Memory usage is large, switching is complex, and CPU utilization is low
 Multi-threading
• Small memory footprint, simple switching, high CPU utilization
o Creation, destruction, and switching
 Multi-process
• Process creation, destruction, and switching are complex and slow
 Multi-threading
• Simple, fast
o Programming and debugging
 Multi-process
• Simple programming, easy debugging
 Multi-threading
• Complex programming, complex debugging
o Reliability
 Multi-process
• The death of one process does not affect other processes (independent address spaces); in ROS, for example, one node corresponds to one process
 Multi-threading
• If one thread dies, the whole process dies (shared address space), which makes it harder to keep reliable
o Distributed
 Multi-process
• Suitable for multi-core, multi-machine distribution; if one machine is not enough, it is easier to expand to multiple machines
 Multi-threading
• Suited to multi-core distribution on a single machine
o Rule of thumb: prefer multithreading when tasks are created, switched, and destroyed frequently, or when they must share certain variables; prefer multiprocessing when high safety and isolation are required (see the fork sketch below)
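
As the counterpart to the thread sketch above, a minimal `fork()` sketch (POSIX assumed): parent and child each end up with their own copy of the data, which is why multi-process sharing is complicated and needs IPC.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 42;
    pid_t pid = fork();            /* clone the process: separate address spaces from here on */
    if (pid == 0) {                /* child */
        x = 100;                   /* modifies the child's private copy only */
        printf("child:  x = %d\n", x);
        return 0;
    }
    wait(NULL);                    /* parent waits for the child to finish */
    printf("parent: x = %d\n", x); /* still 42: the child's write was invisible here */
    return 0;
}
```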

• Thread synchronization

o While one thread is operating on shared data in memory, no other thread may operate on that memory until the first thread completes its operation
o Mutual exclusion
 A mutex protects the critical section to prevent race conditions. A thread that cannot acquire the mutex is suspended; when another thread releases the mutex, the operating system wakes a thread suspended on the lock and lets it run. For example, a global variable is locked before access and unlocked after the operation (see the sketch after this list)
 Condition variable: used for waiting rather than for locking, and normally paired with a mutex that protects the condition; one thread waits on the condition variable, suspended until the condition holds, while another thread makes the condition true and signals it
o Spin lock
 If the spin lock is already held by another thread, the calling thread is not blocked; it keeps looping, checking whether the lock has been released. Use a spin lock when locks are held for only a short time, since the waiting CPU burns cycles while it spins
o Read-write lock, optimistic lock, pessimistic lock
 A read-write lock suits workloads that read far more than they write. An optimistic lock assumes other threads will not modify the data, so it does not lock; conflicting updates are detected through a version number on the data. A pessimistic lock assumes other threads will modify the data, so a thread locks every time it accesses it
o Semaphore
 A semaphore allows multiple threads to access the same resource concurrently while limiting the maximum number of threads that may access it at the same time
o Event (signal)
 wait/notify: after finishing a task, a thread can actively wake another thread to run the next task; events can also implement priority comparison among multiple threads
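
A minimal sketch of the mutex-plus-condition-variable pattern described above, using POSIX threads; the `ready` flag and the `waiter` name are invented for the example. The mutex protects the shared flag, one thread waits on the condition, the other makes it true and signals.

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                      /* the condition, protected by the mutex */

void *waiter(void *arg) {
    pthread_mutex_lock(&lock);
    while (!ready)                  /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);  /* releases the mutex while suspended */
    printf("condition became true, proceeding\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    pthread_mutex_lock(&lock);
    ready = 1;                      /* make the condition true under the lock */
    pthread_cond_signal(&cond);     /* wake one thread suspended on the condition */
    pthread_mutex_unlock(&lock);
    pthread_join(t, NULL);
    return 0;
}
```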

• Interprocess communication

o Pipe
 Can only be used between related processes: parent and child, or siblings
 A pipe is half-duplex: data flows in one direction only, so two-way communication requires two pipes
 Whatever one process writes into the pipe is read by the process at the other end; each write appends to the tail of the pipe buffer, and each read takes data from the head (see the sketch after this list)
o Named pipe FIFO
 Beyond what a pipe can do, it also allows communication between unrelated processes; the named pipe's name lives in the file system, while its contents are held in memory
o Message queue
 A message queue is a linked list of messages stored in the kernel, which overcomes the limited amount of information the two mechanisms above can carry. Processes with read and write permission may add or read messages according to certain rules; one or more processes can operate on the queue, and messages need not be read first-in first-out: they can also be fetched by message type
o Signal
 Notifies the receiving process that some event has occurred. A signal can be sent to a process at any moment without knowing the process's state; if the process is not currently executing, the kernel holds the signal and delivers it when the process resumes execution
o Shared memory
 The most commonly used inter-process communication method: it lets multiple processes access the same region of memory, relying on mutexes and semaphores for synchronization
o Semaphore
 Mainly a means of synchronization and mutual exclusion between processes and between threads; its operations are initialization, P, and V
• P operation
o Decrement the semaphore by 1; if the result is less than 0, the calling process blocks
• V operation
o Increment the semaphore by 1; if the result is still less than or equal to 0, wake one waiting process from the queue and move it to the ready state
o Socket
 The most general inter-process communication mechanism; it also supports communication between processes on different machines across a network
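
A minimal sketch of the pipe mechanism from the list above (POSIX assumed): the parent writes into the write end, the child reads from the read end, in one direction only.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[64];
    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                /* child: reads from the head of the pipe */
        close(fd[1]);                 /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                     /* parent: writes to the tail of the pipe */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);                     /* EOF for the reader */
    wait(NULL);
    return 0;
}
```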

• Process synchronization mechanism

o Atomic operations (disable interrupts –> operate –> enable interrupts); in a microkernel these primitives sit alongside clock management (a user-level atomic sketch follows this list)
o Semaphore operations
o Spin lock management
o Rendezvous, as used in distributed systems
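
User code cannot disable interrupts, but the same indivisible-operation idea is exposed to programs as atomic instructions. A small C11 `<stdatomic.h>` sketch, reusing the racy counter from the thread example earlier, now made safe:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_long counter = 0;            /* updated with indivisible hardware operations */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);   /* atomic read-modify-write: no lock needed */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", atomic_load(&counter));  /* always 200000 */
    return 0;
}
```
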
• Process states

o Waiting / blocked / sleeping state
 The process lacks the conditions needed to run and is waiting for a resource or for an event to occur, such as a peripheral transfer or manual intervention
• Waiting –> Ready
o The resource is now available or the event has occurred, e.g. the peripheral transfer has finished or the manual intervention is complete
o Ready state
 The process has everything it needs to run and is waiting for the system to allocate it a processor. A process enters the ready state right after creation; it also gets here from the waiting state when the awaited event occurs, and from the running state when its time slice expires or a higher-priority process appears
• Ready –> Running
o When the CPU is idle, the scheduler selects a ready process for execution
o Running state
 The process occupies the processor and is executing
• Running –> Waiting
o The process begins waiting for a resource or an event, such as a peripheral transfer or manual intervention
• Running –> Ready
o Its time slice has expired, or a higher-priority process has appeared
• Process scheduling strategies
o FCFS (first come, first served): implemented with a queue, non-preemptive; the process that requests the CPU first is allocated the CPU first
o SJF (shortest job first scheduling algorithm)
 Yields the shortest average waiting time (see the sketch below)
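
A small illustration of why SJF minimizes average waiting time; the burst times are made up for the example. Running the same four jobs in arrival order (FCFS) versus shortest-first (SJF):

```c
#include <stdio.h>

/* Average waiting time when jobs run in the given order */
static double avg_wait(const int burst[], int n) {
    double total = 0;
    int clock = 0;
    for (int i = 0; i < n; i++) {
        total += clock;          /* job i waits until all earlier jobs finish */
        clock += burst[i];
    }
    return total / n;
}

int main(void) {
    int fcfs[] = {6, 8, 7, 3};   /* hypothetical bursts, in arrival order */
    int sjf[]  = {3, 6, 7, 8};   /* same jobs, shortest first */
    printf("FCFS average wait: %.2f\n", avg_wait(fcfs, 4));  /* (0+6+14+21)/4 = 10.25 */
    printf("SJF  average wait: %.2f\n", avg_wait(sjf, 4));   /* (0+3+9+16)/4  = 7.00  */
    return 0;
}
```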
• Deadlock
o Among two or more concurrent processes, each holds some resource while waiting for a resource currently held by another; none can move forward until this state changes, so two or more processes block indefinitely, waiting on each other
o Necessary conditions
 Mutual exclusion
• A resource can be used by only one process at a time; any other process that needs it must wait until it is released
 Hold and wait
• A process holds at least one resource while waiting for another resource that is held by a different process
 Non-preemption
• A resource already allocated to a process cannot be forcibly taken away; the process releases it voluntarily after finishing its task
 Circular wait
• Several processes form a head-to-tail circular chain of resource waits, each process in the loop waiting for a resource held by the next
o Handling methods
 Ostrich strategy
• Ignore the deadlock outright: resolving deadlock is expensive, and ignoring it is acceptable when a deadlock would not affect users much or its probability is very low
 Deadlock prevention
• Break one of the four necessary conditions for deadlock (a lock-ordering sketch follows this list)
 Deadlock avoidance
• Dynamically monitor resource-allocation state so that the circular-wait condition never holds, keeping the system in a safe state, i.e. one in which the system can still allocate resources to every process in some order
 Deadlock recovery
• Process termination: simply terminate one or more processes to break the cycle. Resource preemption: suspend some processes and preempt their resources
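
A minimal sketch of deadlock prevention by breaking circular wait: both threads acquire the two mutexes in the same fixed order, so a hold-and-wait cycle cannot form. (Reversing the order in one thread recreates the classic two-lock deadlock.)

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads lock a before b: the fixed order rules out circular wait */
void *worker(void *name) {
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b);
    printf("%s holds both locks\n", (char *)name);
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```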

• Exception
o Interrupt
o Trap
o Fault
o Abort (termination)

• IO multiplexing

o A single process or thread handles multiple IO requests at the same time: the user adds the file descriptors to be monitored to select/poll/epoll, the kernel monitors them while the function blocks, and once some descriptor becomes ready for reading or writing, or the call times out, the function returns and the process performs the corresponding reads and writes
o select
 The file descriptors of interest are placed into a set; each call to select copies the whole set from user space into kernel space, which is expensive (see the sketch after this list)
o poll
 Works like select but passes the descriptors as an array of pollfd structures, so the number of descriptors is not capped by a fixed set size
o epoll
 Descriptors are registered with the kernel once; the kernel maintains the interest set and returns only the descriptors that are ready, avoiding the per-call copy
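
A minimal `select()` sketch matching the description above (POSIX assumed): watch standard input and return when it becomes readable or when the timeout expires.

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    fd_set readfds;
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };  /* 5-second timeout */

    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);   /* the set is copied into the kernel on each call */

    int n = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
    if (n < 0)
        perror("select");
    else if (n == 0)
        printf("timed out, nothing to read\n");
    else if (FD_ISSET(STDIN_FILENO, &readfds))
        printf("stdin is ready: a read() would not block\n");
    return 0;
}
```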

• Segmentation and pagination

o Segmented storage management
o Paged storage management
o Differences
 Different purposes
• Paging serves the needs of system management rather than the user's needs: a page is a physical unit of information. Segmentation exists to serve the user better: a segment is a logical unit of information, containing a group of information whose meaning is relatively complete
 Different sizes
• The size of a page is fixed and determined by the system, while the length of a segment is not fixed and is determined by the function it completes
 Different address spaces
• Segments provide users with two-dimensional address space, pages provide users with one-dimensional address space
 Information sharing
• Segment is a logical unit of information, which facilitates storage protection and information sharing; page protection and sharing are limited
 Memory fragmentation
• Paging produces no external fragmentation (pages have a fixed size) but does produce internal fragmentation (a page may not be completely filled); segmentation produces no internal fragmentation, but swapping segments in and out produces external fragmentation (an address-split sketch follows this list)
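
To make the fixed-size-page point concrete, a tiny sketch of how a paged system splits a linear address into page number and offset, assuming 4 KB pages (the address is arbitrary):

```c
#include <stdio.h>

#define PAGE_SIZE   4096u               /* assume 4 KB pages */
#define OFFSET_MASK (PAGE_SIZE - 1)

int main(void) {
    unsigned long addr   = 0x12345;            /* an arbitrary linear address      */
    unsigned long page   = addr / PAGE_SIZE;   /* upper bits: page number          */
    unsigned long offset = addr & OFFSET_MASK; /* lower 12 bits: offset in page    */
    printf("address 0x%lx -> page %lu, offset 0x%lx\n", addr, page, offset);
    /* 0x12345 -> page 18 (0x12), offset 0x345 */
    return 0;
}
```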

• Page replacement algorithm

o FIFO (first in, first out)
o LRU (least recently used; simulated in the sketch below)
o LFU (least frequently used)
o OPT (optimal replacement algorithm)
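
A minimal LRU page-replacement simulation (the reference string and frame count are invented for the example): each hit refreshes a page's last-use time, and each miss with full frames evicts the least recently used page.

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};      /* page reference string */
    int n = sizeof(refs) / sizeof(refs[0]);
    int frame[FRAMES], last_used[FRAMES], used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (frame[i] == refs[t]) hit = i;
        if (hit >= 0) {
            last_used[hit] = t;                  /* refresh recency on a hit */
            continue;
        }
        faults++;
        if (used < FRAMES) {                     /* a free frame is available */
            frame[used] = refs[t];
            last_used[used++] = t;
        } else {                                 /* evict the least recently used page */
            int victim = 0;
            for (int i = 1; i < FRAMES; i++)
                if (last_used[i] < last_used[victim]) victim = i;
            frame[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);         /* 6 for this reference string */
    return 0;
}
```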
• Synchronous vs. asynchronous, blocking vs. non-blocking
• Function Library and System Call
• CPU Memory Architecture and Working Principle
o Control Unit
o Arithmetic/logic unit
o Storage Unit
• Coroutines
• Orphan process and zombie process
• What is a daemon process, how to create a daemon process
• The boot process: from startup through the bootloader to the final loading of the kernel
• Linux file management
• How to design a memory pool

Origin blog.csdn.net/qq_46084757/article/details/127066007