Illustrated: the classic process scheduling algorithms

The full text of the mind map follows:

1. The concept of scheduling

When the CPU has many tasks to handle, its limited resources mean they cannot all be processed at the same time, so rules are needed to decide the order in which tasks run. This is the problem that "scheduling" studies. Besides the process scheduling discussed below, there are also job scheduling, memory scheduling, and so on.

Recall the three-state model of the process:

  • Running state (running): The process occupies the CPU and is executing.
  • Ready state (ready): The process has everything it needs to run and is waiting for the system to allocate the CPU to it.
  • Blocked/waiting state (wait): The process cannot run for now and is waiting for some event to complete.
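The three-state model above can be sketched as a small set of legal state transitions. This is a minimal illustrative sketch, not an OS implementation; the `State` enum and `TRANSITIONS` set are names invented for this example:

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"

# Legal transitions in the three-state model:
TRANSITIONS = {
    (State.READY, State.RUNNING),    # scheduler dispatches the process
    (State.RUNNING, State.READY),    # time slice expires / process is preempted
    (State.RUNNING, State.WAITING),  # process blocks waiting for an event (e.g. I/O)
    (State.WAITING, State.READY),    # the awaited event completes
}

# A blocked process cannot be dispatched directly; it must become ready first.
print((State.WAITING, State.RUNNING) in TRANSITIONS)  # False
```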

Process scheduling means selecting a process from the ready queue according to some algorithm and assigning the CPU to it, so as to achieve concurrent execution of processes. This is the most basic (lowest-level) scheduling in an operating system, and a general-purpose operating system must provide it. Process scheduling happens very frequently, typically once every few tens of milliseconds.

2. Non-preemptive process scheduling algorithm

Non-preemptive means that once a process is running, it keeps running until it completes, or until some event occurs and blocks it, before the CPU is given to another process.

Correspondingly, preemptive means that a running process can be interrupted and forced to yield the CPU to another process.

① First Come First Served (FCFS)

First Come First Served (FCFS): processes are scheduled in the order they arrive, and the process that arrived first is scheduled first. In other words, the longer a process has waited, the higher its priority for service.

Advantages: fair, and simple to implement.

Disadvantages: unfriendly to short processes. A short process stuck behind a long one must wait a long time, its response time becomes excessive, and the interactive user experience suffers.
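FCFS can be sketched in a few lines. This is a minimal simulation under assumed inputs (the process names and times below are invented for illustration); it computes each process's waiting time and shows how a long process that arrives first penalizes the short ones behind it:

```python
def fcfs(processes):
    """FCFS scheduling. processes: list of (name, arrival_time, burst_time).
    Returns a dict mapping name -> waiting time."""
    waiting = {}
    time = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU may sit idle until the process arrives
        waiting[name] = time - arrival   # time spent waiting in the ready queue
        time += burst                    # run to completion (non-preemptive)
    return waiting

# Long P1 arrives first, so short P2 and P3 wait 23 and 25 units respectively.
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
```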

② Shortest Job First (SJF)

Shortest Job First (SJF), also called shortest-process-first scheduling: each time the scheduler runs, it selects, among the processes that have already arrived, the one with the shortest running time.

SJF is the opposite of first come first served. FCFS is unfriendly to short processes, while SJF is unfriendly to long ones: if short processes keep arriving, a long process may never be scheduled and can starve, endlessly waiting for the short jobs to finish.
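A minimal non-preemptive SJF simulation, with illustrative process data invented for this sketch. At each scheduling point it picks the shortest burst among the processes that have already arrived:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns the order in which processes complete."""
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # nothing has arrived yet: jump ahead
            time = remaining[0][1]
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst among arrived
        remaining.remove(job)
        time += job[2]                        # run it to completion
        order.append(job[0])
    return order

# Short P3 jumps ahead of P2 and P4 once P1 finishes.
print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```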

③ Highest Response Ratio Next (HRRN)

Highest Response Ratio Next (HRRN): scheduling occurs only when the currently running process voluntarily gives up the CPU (normal or abnormal completion, or blocking itself). At that point the response ratio of every ready process is computed, and the CPU is assigned to the process with the highest response ratio.

Response ratio = (waiting time of the process + required running time) / required running time
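The response ratio formula can be checked directly. A minimal sketch with invented ready-queue data: the longer a process waits, the larger its ratio grows, which is how HRRN keeps long jobs from starving:

```python
def response_ratio(waiting, burst):
    # Response ratio = (waiting time + required running time) / required running time
    return (waiting + burst) / burst

# Three ready processes at the moment the CPU becomes free: (name, waited, burst).
ready = [("P1", 10, 2), ("P2", 4, 4), ("P3", 12, 12)]
best = max(ready, key=lambda p: response_ratio(p[1], p[2]))
print(best[0])  # P1: ratio 6.0 beats P2 (2.0) and P3 (2.0)
```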

3. Preemptive process scheduling algorithm

Preemption means that a running process can be interrupted and forced to yield the CPU to another process. There are generally three preemption principles: the time-slice principle, the priority principle, and the shortest-job-first principle.

① Shortest Remaining Time Next (SRTN)

The Shortest Remaining Time Next (SRTN) algorithm is the preemptive version of shortest job first.

When a new process arrives, its total required running time is compared with the remaining running time of the current process. If the new process needs less time, the current process is suspended and the new process runs; otherwise the new process waits.
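SRTN can be sketched as a unit-by-unit simulation, with process data invented for illustration: at every time unit the process with the least remaining time runs, so a newly arrived short process preempts a longer one:

```python
def srtn(processes):
    """Preemptive SRTN, simulated one time unit at a time.
    processes: list of (name, arrival, burst). Returns name -> completion time."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    done, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining time wins
        remaining[n] -= 1                           # run it for one time unit
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = time
    return done

# P2 arrives at t=1 needing 4 units, less than P1's remaining 7, so P1 is preempted.
print(srtn([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
```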

② Round Robin (RR)

Round Robin (RR), also called time-slice scheduling: the scheduler allocates the CPU to the process at the head of the ready queue for a fixed interval, called the time slice, typically 10 ms to 200 ms. Each process in the ready queue runs for one time slice in turn; when its slice is used up, the running process is forced to give up the CPU and re-queue at the tail of the ready queue to wait for the next round. A process therefore usually needs several rounds to complete.

The round-robin algorithm treats every process equally, like people standing in line: each one runs for a while, then goes to the back of the queue and waits for its next turn.

It should be noted that the length of the time slice is a very critical factor:

  • If the time slice is set too short, frequent process context switches waste CPU time and reduce efficiency;
  • If the time slice is set too long, then as the number of processes in the ready queue grows, one full round takes longer, so each process responds more slowly. In the extreme, if the slice is long enough for every process to finish all its work within one slice, RR degenerates into the FCFS algorithm.
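The round-robin rotation described above can be sketched with a simple queue. This is a minimal simulation with invented process data; for simplicity all processes are assumed ready at time 0:

```python
from collections import deque

def round_robin(processes, quantum):
    """RR scheduling. processes: list of (name, burst), all ready at t=0.
    Returns the order in which processes complete."""
    queue = deque(processes)
    order = []
    while queue:
        name, remaining = queue.popleft()   # head of the ready queue gets the CPU
        if remaining <= quantum:            # finishes within this time slice
            order.append(name)
        else:                               # slice exhausted: back to the tail
            queue.append((name, remaining - quantum))
    return order

# With a quantum of 4, long P1 needs three rounds while P2 and P3 finish earlier.
print(round_robin([("P1", 10), ("P2", 4), ("P3", 6)], quantum=4))
```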

4. The highest-priority scheduling algorithm (HPF)

The RR algorithm applies the same strategy to all processes. If there are too many user processes, the kernel's service processes may not get the CPU promptly. Yet in an operating system, kernel processes matter more than user processes; after all, they affect the stability of the entire system.

The Highest Priority First (HPF) algorithm selects the highest-priority process in the ready queue to run. How is a process's priority defined? It can be static or dynamic:

  • Static priority: the priority is assigned when the process is created and never changes during its lifetime. Generally, kernel processes have higher priority than user processes.
  • Dynamic priority: the priority is adjusted as the process runs. For example, as a process's running time grows its priority is lowered, and as its waiting time in the ready queue grows its priority is raised.

Note also that HPF is not fixed as a preemptive or non-preemptive strategy; the system can define in advance which to use:

  • Non-preemptive: when a higher-priority process appears in the ready queue, it is selected only after the current process finishes running.
  • Preemptive: when a higher-priority process appears in the ready queue, the CPU is immediately taken from the running process and given to the higher-priority one.
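A static-priority ready queue can be sketched with a heap. This is a minimal illustration with invented task names; it assumes the common convention that a smaller number means higher priority:

```python
import heapq

def hpf(processes):
    """Static-priority HPF. processes: list of (priority, name), all ready;
    smaller priority number = higher priority. Returns the run order."""
    heap = list(processes)
    heapq.heapify(heap)                  # ready queue ordered by priority
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# A kernel task outranks user tasks regardless of its position in the queue.
print(hpf([(3, "user_shell"), (1, "kernel_flush"), (2, "user_editor")]))
```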


Author: Flying veal
Link: https://juejin.cn/post/6931957287040843784
Source: Juejin (Nuggets)
 
