Operating system study notes: Process management

Process management

1. Processes and threads

1.1. The concept and characteristics of the process

1.1.1. Concept of process

In a multiprogramming environment, multiple programs are allowed to execute concurrently. They then lose their closed nature and take on the characteristics of intermittence and non-reproducibility. The concept of the process was introduced to better describe and control this concurrent execution and to realize the operating system's concurrency and resource sharing (its two most basic characteristics).

To let the programs (together with their data) that participate in concurrent execution run independently, each must be given a dedicated data structure called the process control block (PCB). The system uses the PCB to describe the basic situation and running state of a process, and through it controls and manages the process. Correspondingly, the process entity (process image) consists of three parts: the program segment, the related data segment, and the PCB. Creating a process essentially means creating the PCB of its process entity; likewise, destroying a process means deleting its PCB.

The PCB is the only sign that a process exists!

A process can therefore be defined as the running activity of a process entity; it is an independent unit by which the system allocates resources and performs scheduling.

1.1.2. The characteristics of the process

  • Dynamism
  • Concurrency
  • Independence
  • Asynchrony
  • Structure

1.2. Process states and transitions

(Figure: process state transition diagram)

  • Ready → Running: the process is scheduled and granted the processor (dispatched a time slice).
  • Running → Ready: when the running process's time slice expires, it must give up the processor and return to the ready state. In addition, in a preemptive (deprivable) operating system, when a higher-priority process becomes ready, the scheduler moves the executing process back to the ready state so that the higher-priority process can run.
  • Running → Blocked: when a process requests the use and allocation of some resource (such as a peripheral) or waits for an event (such as the completion of an I/O operation), it changes from the running state to the blocked state. The process makes such requests of the operating system through system calls, so this transition is an active behavior of the process itself.
  • Blocked → Ready: when the event the process is waiting for occurs, for example an I/O operation finishes, the interrupt handler changes the state of the corresponding process from blocked to ready. This is a passive behavior.

1.3. Process communication

  • Shared storage

Requires synchronization and mutual-exclusion tools (such as the P and V operations).

  • Message passing

Divided into direct message passing and indirect message passing.

  • Pipeline communication

Pipe communication is a special form of message passing. A "pipe" is a shared file, also called a pipe file, that connects a reading process and a writing process to realize communication between them. The pipe mechanism must provide three kinds of coordination: mutual exclusion, synchronization, and confirmation that the other party exists.

In essence a pipe is also a kind of file, yet it differs fundamentally from an ordinary file: it overcomes two problems that arise when files are used for communication. (1) The size of the communication medium is limited: a pipe is a fixed-size buffer, so a writer blocks when the buffer is full instead of growing a file without bound. (2) The reading process may work faster than the writing process: when all current data has been read, a reader blocks waiting for new data instead of reading an end-of-file.

Note: reading data from a pipe is a one-time operation; once data is read, it disappears from the pipe. A pipe provides only half-duplex communication.
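As a concrete, hedged illustration (assuming a POSIX system; pipe(), fork(), read(), and write() are the standard POSIX calls), here is a minimal sketch of one writer and one reader communicating through a pipe:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void){
	int fd[2];	// fd[0] is the read end, fd[1] is the write end
	if(pipe(fd) == -1) return 1;	// create the fixed-size kernel buffer

	pid_t pid = fork();
	if(pid == 0){	// child: the reading process
		close(fd[1]);	// close the unused write end
		char buf[64];
		ssize_t n = read(fd[0], buf, sizeof(buf)-1);	// blocks until data arrives
		if(n > 0){ buf[n] = '\0'; printf("reader got: %s\n", buf); }
		close(fd[0]);
		return 0;
	}
	// parent: the writing process
	close(fd[0]);	// close the unused read end
	const char *msg = "hello through the pipe";
	write(fd[1], msg, strlen(msg));	// would block if the fixed-size buffer were full
	close(fd[1]);
	wait(NULL);	// reap the child
	return 0;
}

Once the child has read the message, those bytes are gone from the pipe, matching the one-time-read note above; because the pipe is half-duplex, two pipes would be needed for simultaneous two-way traffic.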

1.4. The concept of threads and multithreading models

1.4.1. The concept of threads

Processes were introduced to enable the concurrent execution of multiple programs, improving resource utilization and system throughput; threads were introduced to reduce the time and space overhead a program pays for concurrent execution, further improving the concurrency of the operating system.

A thread is a basic CPU execution unit and the smallest unit of the program execution flow. It consists of a thread ID, a program counter, a register set, and a stack. A thread is an entity within a process and the basic unit that the system independently schedules and dispatches. A thread owns no system resources of its own (only a few that are essential for running), but it shares all of the resources owned by its process with the other threads in that process. One thread can create and terminate another, and multiple threads within a process can execute concurrently.

1.4.2. Comparison of processes and threads

  • Scheduling: before threads are introduced, the process is the basic unit of both resource ownership and independent scheduling; after threads are introduced, the thread is the basic unit of independent scheduling, while the process remains the basic unit of resource ownership.
  • Resource ownership: in both cases, the process is the basic unit of resource ownership.
  • Concurrency: without threads, only processes execute concurrently; with threads, not only processes but also the threads within a process can execute concurrently.
  • System overhead: large for processes (creating, terminating, and switching processes is expensive); small for threads.
  • Address space and other resources: the address spaces of processes are independent of one another; the threads of one process share that process's address space, and they are invisible to other processes.
  • Communication: communication between processes requires synchronization and mutual-exclusion support; threads can communicate by directly reading and writing the data segment of their process (such as global variables).

1.4.3. Thread implementation and multithreading models

Thread implementations fall into two classes: user-level threads and kernel-level threads. Kernel-level threads are also called kernel-supported threads.

With user-level threads, all thread-management work is done by the application, and the kernel is unaware that threads exist. With kernel-level threads, all thread-management work is done by the kernel; the application contains no thread-management code, only a programming interface to the kernel-level threads.
(Figure: multithreading models built from user-level and kernel-level threads)
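As a small sketch of the programming interface to kernel-level threads mentioned above (assuming a POSIX system with the pthread library, where pthreads map onto kernel-scheduled threads on Linux; compile with -pthread), two threads share their process's global data, unlike two separate processes:

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;	// visible to all threads of this process
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg){
	(void)arg;
	for(int i = 0; i < 100000; i++){
		pthread_mutex_lock(&lock);	// shared data, so updates need mutual exclusion
		shared_counter++;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void){
	pthread_t t1, t2;
	pthread_create(&t1, NULL, worker, NULL);	// ask the kernel to schedule a new thread
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);	// wait for both threads to finish
	pthread_join(t2, NULL);
	printf("shared_counter = %d\n", shared_counter);	// 200000: both saw the same memory
	return 0;
}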


2. Processor scheduling

2.1. The concept of scheduling

2.1.1. The basic concept of scheduling

Processor scheduling means allocating the processor: selecting a process from the ready queue according to some (fair and efficient) algorithm and assigning the processor to it, thereby realizing the concurrent execution of processes.

Processor scheduling is the foundation of a multiprogramming operating system and a core issue in operating system design.

2.1.2. The levels of scheduling

From submission to completion, a job typically goes through three levels of scheduling:

  1. Job scheduling. Also known as high-level scheduling. Its main task is to select one or more jobs in the backup (pending) state on external storage according to certain principles, allocate to them the necessary resources such as memory and input/output devices, and create the corresponding processes so that they gain the right to compete for the processor. In short, job scheduling operates between external storage and memory.
  2. Intermediate scheduling. Also known as memory scheduling. Its role is to improve memory utilization and system throughput: processes that temporarily cannot run are swapped out to external storage to wait, and their state is then called the suspended state. When they are able to run again, intermediate scheduling decides to bring them back into memory.
  3. Process scheduling. Also known as low-level scheduling. Its main task is to select a process from the ready queue according to some method and policy and assign the processor to it.

2.1.3. The timing, switching, and process of scheduling

The process-scheduling and switching program is a kernel program of the operating system.

In modern operating systems, there are several situations in which process scheduling and switching cannot be performed:

  1. While an interrupt is being handled.
  2. While the process is inside a critical section of the operating system kernel. Having entered the critical section, it needs exclusive access to the shared data there, so in principle the section must be locked and switching deferred.
  3. During other atomic operations that require interrupts to be completely masked.

2.1.4. Methods of process scheduling

There are two common scheduling methods:

  1. Non-preemptive (non-deprivable) scheduling;
  2. Preemptive (deprivable) scheduling.

2.1.5. The basic criteria of scheduling

  1. CPU utilization

  2. System throughput

  3. Turnaround time. The time from when a job is submitted to when it completes (a worked example follows this list):

    Turnaround time = job completion time - job submission time

    Average turnaround time = (turnaround time of job 1 + turnaround time of job 2 + ... + turnaround time of job n) / n

    Weighted turnaround time = job turnaround time / actual job running time

    Average weighted turnaround time = (weighted turnaround time of job 1 + weighted turnaround time of job 2 + ... + weighted turnaround time of job n) / n

  4. Waiting time

  5. Response time
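For a quick worked illustration of the turnaround formulas above (the numbers are invented): suppose a job is submitted at time 100, completes at time 400, and actually runs for 200 time units. Its turnaround time is 400 - 100 = 300, and its weighted turnaround time is 300 / 200 = 1.5; the remaining 100 time units were spent waiting. Averaging such values over all jobs yields the average (weighted) turnaround time.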

2.2. Typical scheduling algorithms

  • First-come, first-served (FCFS) scheduling algorithm
  • Shortest job first (SJF) scheduling algorithm
  • Priority scheduling algorithm
  • Highest response ratio next (HRRN) scheduling algorithm
  • Round-robin (time slice) scheduling algorithm
  • Multilevel feedback queue scheduling algorithm
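To make the first of these concrete, here is a minimal FCFS sketch in C (the job data is invented for illustration); it serves jobs strictly in arrival order and applies the turnaround-time formulas from section 2.1.5:

#include <stdio.h>

struct job { const char *name; int arrive; int run; };

int main(void){
	struct job jobs[] = { {"J1", 0, 7}, {"J2", 2, 4}, {"J3", 4, 1} };	// invented data
	int n = 3, clock = 0;
	double total_tt = 0, total_wtt = 0;

	for(int i = 0; i < n; i++){
		if(clock < jobs[i].arrive) clock = jobs[i].arrive;	// CPU idles until the job arrives
		clock += jobs[i].run;	// run to completion: FCFS is non-preemptive
		int tt = clock - jobs[i].arrive;	// turnaround time
		double wtt = (double)tt / jobs[i].run;	// weighted turnaround time
		printf("%s: turnaround=%d weighted=%.2f\n", jobs[i].name, tt, wtt);
		total_tt += tt; total_wtt += wtt;
	}
	printf("average turnaround=%.2f average weighted=%.2f\n", total_tt / n, total_wtt / n);
	return 0;
}

With this data the program prints turnaround times 7, 9, and 8 (average 8.00); swapping in a different service order is all it takes to compare FCFS against, say, SJF.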

3. Process synchronization

3.1. Basic concepts of process synchronization

3.1.1. Critical resources

Although multiple processes can share the various resources in a system, many of those resources can be used by only one process at a time. A resource that only one process may use at a time is called a critical resource. Many physical devices, such as printers, are critical resources; so are many shared variables and data structures that several processes can access.

Access to a critical resource must be mutually exclusive. Within each process, the code that accesses the critical resource is called the critical section. To ensure correct use of critical resources, access can be divided into four parts (a code sketch follows this list):

  1. Entry section.
  2. Critical section.
  3. Exit section.
  4. Remainder section.
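The four parts map directly onto lock-based code. A minimal sketch using a POSIX pthread mutex (one possible mutual-exclusion tool; any other would play the same role):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;	// the critical resource

static void *task(void *arg){
	(void)arg;
	pthread_mutex_lock(&m);	// 1. entry section: gain exclusive access
	shared++;	// 2. critical section: touch the critical resource
	pthread_mutex_unlock(&m);	// 3. exit section: release access
	// 4. remainder section: code that does not use the critical resource
	return NULL;
}

int main(void){
	pthread_t t;
	pthread_create(&t, NULL, task, NULL);
	task(NULL);	// the main thread competes for the same critical section
	pthread_join(t, NULL);
	return 0;
}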

3.1.2. Synchronization

Synchronization is also called the direct constraint relationship. It arises when two or more processes are set up to complete a common task and must coordinate their order of execution at certain points, waiting for each other and passing information. This direct constraint between processes stems from their mutual cooperation.

3.1.3. Mutual exclusion

Mutual exclusion is also called the indirect constraint relationship. When one process has entered the critical section to use a critical resource, another process that needs the resource must wait; only after the occupying process leaves the critical section may the waiting process enter it and access the resource.

3.2. Semaphores

The semaphore mechanism is a powerful tool for solving both mutual exclusion and synchronization problems. A semaphore can be accessed only through the two standard primitives wait(S) and signal(S), which are also written as the P operation and the V operation.

3.2.1. Integer semaphore

An integer semaphore is defined as an integer S representing the number of available resources. The wait and signal operations can be described as:

wait(S){
	while(S <= 0);	// busy-wait ("spin") while no resource is available
	S = S - 1;	// take one resource
}
signal(S){
	S = S + 1;	// return one resource
}

3.2.2. Record semaphore

The integer semaphore busy-waits while S <= 0, wasting processor time. The record semaphore eliminates this busy waiting by keeping a list of the processes blocked on it:

typedef struct{
	int value;	// number of available resources (may go negative)
	struct process *L;	// list of processes waiting on this semaphore
}semaphore;

The wait and signal operations can be described as:

void wait(semaphore S){	// equivalent to requesting a resource
	S.value--;
	if(S.value < 0){
		// no resource left: insert the caller into the waiting queue
		insert the process into S.L;
		block(S.L);	// block the caller
	}
}
void signal(semaphore S){	// equivalent to releasing a resource
	S.value++;
	if(S.value <= 0){
		// some process is still waiting: wake one up from the queue
		wake up a process P from S.L;
		wakeup(P);
	}
}
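The record semaphore above is the textbook abstraction; real systems expose the same P/V idea. A minimal sketch using POSIX unnamed semaphores (assuming a system such as Linux where sem_init is available; compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t slots;	// counts available resources, like S.value

static void *user(void *arg){
	sem_wait(&slots);	// P operation: wait(S) -- take a resource or block
	printf("thread %ld holds a resource\n", (long)arg);
	sem_post(&slots);	// V operation: signal(S) -- release and wake a waiter
	return NULL;
}

int main(void){
	sem_init(&slots, 0, 2);	// two identical resources available initially
	pthread_t t[4];
	for(long i = 0; i < 4; i++) pthread_create(&t[i], NULL, user, (void *)i);
	for(int i = 0; i < 4; i++) pthread_join(t[i], NULL);
	sem_destroy(&slots);
	return 0;
}

With an initial value of 2, at most two of the four threads hold a resource at any moment; the others block inside sem_wait until a V operation wakes them.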

3.2.3. Classic synchronization problems

See "Classic Synchronization Problem".


4. Deadlock

4.1. The concept of deadlock

4.1.1. The definition of deadlock

Deadlock refers to a standstill caused by multiple processes competing for resources and waiting for one another. Without outside intervention, none of the deadlocked processes can make any progress.

4.1.2. The causes of deadlock

  • Competition for system resources
  • An improper order of process advancement
  • The four necessary conditions for deadlock:
    1. Mutual exclusion condition
    2. Non-preemption (inalienable) condition
    3. Hold-and-wait (request-and-hold) condition
    4. Circular wait condition

4.2. Deadlock-handling strategies

4.2.1. Deadlock prevention

Deadlock prevention works by breaking one of the four necessary conditions (a lock-ordering sketch follows this list):

  1. Break the mutual exclusion condition: allow system resources to be shared (often impossible in practice).
  2. Break the non-preemption condition: allow resources to be taken away from a process.
  3. Break the hold-and-wait condition: use static pre-allocation, requesting all resources before the process runs.
  4. Break the circular wait condition: use ordered (sequential) resource allocation.
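As a concrete sketch of the ordered resource allocation method (the two mutexes stand in for arbitrary numbered resources): if every process acquires resources in the same global order, a circular wait can never form.

#include <pthread.h>

static pthread_mutex_t rA = PTHREAD_MUTEX_INITIALIZER;	// resource #1
static pthread_mutex_t rB = PTHREAD_MUTEX_INITIALIZER;	// resource #2

// Every thread acquires rA before rB (ascending resource numbers),
// which breaks the circular-wait condition: no cycle can form.
static void *task(void *arg){
	(void)arg;
	pthread_mutex_lock(&rA);
	pthread_mutex_lock(&rB);
	// ... use both resources ...
	pthread_mutex_unlock(&rB);
	pthread_mutex_unlock(&rA);
	return NULL;
}

int main(void){
	pthread_t t;
	pthread_create(&t, NULL, task, NULL);
	task(NULL);
	pthread_join(t, NULL);
	return 0;
}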

4.2.2. Deadlock avoidance

The banker's algorithm: before granting a resource request, check that the resulting state is still safe, and grant the request only if it is.
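At the heart of the banker's algorithm is this safety check. A minimal sketch of the check in C (the matrix sizes and contents are invented example data):

#include <stdio.h>
#include <stdbool.h>

#define P 3	// number of processes (invented example size)
#define R 2	// number of resource types

// Returns true if some order exists in which every process can finish.
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]){
	int work[R];
	bool finished[P] = { false };
	for(int j = 0; j < R; j++) work[j] = avail[j];

	for(int done = 0; done < P; ){
		bool progressed = false;
		for(int i = 0; i < P; i++){
			if(finished[i]) continue;
			bool can_run = true;	// is need[i] <= work?
			for(int j = 0; j < R; j++)
				if(need[i][j] > work[j]){ can_run = false; break; }
			if(can_run){	// pretend process i runs to completion
				for(int j = 0; j < R; j++) work[j] += alloc[i][j];	// it returns its allocation
				finished[i] = true;
				progressed = true;
				done++;
			}
		}
		if(!progressed) return false;	// no process can finish: the state is unsafe
	}
	return true;
}

int main(void){
	int avail[R]    = { 1, 1 };	// invented example data
	int alloc[P][R] = { {1, 0}, {0, 1}, {1, 1} };
	int need[P][R]  = { {1, 1}, {1, 0}, {0, 0} };
	printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
	return 0;
}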

4.2.3. Deadlock detection and recovery

State S is a deadlock state if and only if the resource allocation graph of state S cannot be completely reduced. This is known as the deadlock theorem.

Deadlock recovery:

  1. Resource preemption (deprivation)
  2. Process termination (revocation)
  3. Process rollback


Origin: blog.csdn.net/qq_36879493/article/details/107876502