Condensed Notes on Operating Systems (6): Scheduling Algorithms

 Main sources referenced in these notes:

Axiu’s study notes (interviewguide.cn)

Kobayashi coding (xiaolincoding.com)

[Operating System] 2.4 Process Management (Scheduling Algorithm)_Process Waiting Time_coolcoo1cool’s Blog-CSDN Blog

How much do you know about process scheduling algorithms?

1.  First-come, first-served (FCFS)

A non-preemptive scheduling algorithm that schedules processes in the order their requests arrive.

It favors long jobs but not short ones: a short job must wait for every long job ahead of it to finish, and because long jobs run for a long time, the short job ends up waiting far too long.

2.  Shortest job first (SJF)

A non-preemptive scheduling algorithm that schedules jobs in order of shortest estimated running time.

Long jobs may starve while waiting for short jobs to complete: if short jobs keep arriving, a long job may never be scheduled.
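As an illustrative sketch (the `(name, arrival, burst)` job format and the tie-breaking are assumptions, not from the notes), non-preemptive SJF can be simulated like this:

```python
def sjf_schedule(jobs):
    # Non-preemptive shortest job first.
    # jobs: list of (name, arrival_time, burst_time) tuples.
    pending = sorted(jobs, key=lambda j: j[1])        # order by arrival
    time, order, waiting = 0, [], {}
    while pending:
        ready = [j for j in pending if j[1] <= time]  # already-arrived jobs
        if not ready:                                 # CPU idle: jump to next arrival
            time = min(j[1] for j in pending)
            continue
        job = min(ready, key=lambda j: j[2])          # shortest burst runs next
        name, arrival, burst = job
        waiting[name] = time - arrival                # time spent in the ready queue
        time += burst                                 # runs to completion (no preemption)
        order.append(name)
        pending.remove(job)
    return order, waiting
```

With jobs A(arrive 0, burst 7), B(2, 4), C(4, 1), D(5, 4), the short job C overtakes B and D as soon as A finishes.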

3. Shortest remaining time first (SRTN)

A preemptive version of shortest job first, scheduled in order of remaining running time. When a new job arrives, its total running time is compared with the remaining time of the currently running process.

If the new process requires less time, the current process is suspended and the new process is run. Otherwise the new process waits.
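A minimal tick-by-tick sketch of SRTN (the job format and the tie-breaking are my assumptions):

```python
def srtn_trace(jobs):
    # Preemptive shortest-remaining-time-next, simulated one time unit per step.
    # jobs: list of (name, arrival_time, burst_time) tuples.
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    time, trace = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining if remaining[n] and arrival[n] <= time]
        if not ready:
            trace.append(None)                            # CPU idle this tick
        else:
            run = min(ready, key=lambda n: remaining[n])  # least remaining time wins
            remaining[run] -= 1
            trace.append(run)                             # record who ran this tick
        time += 1
    return trace
```

With A(arrive 0, burst 4) and B(1, 2), B preempts A at t=1 because its 2 units are less than A's remaining 3.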

4. Time slice rotation (round robin)

All ready processes are arranged into a queue according to the FCFS principle. On each scheduling decision, the CPU is allocated to the process at the head of the queue, which may run for one time slice.

When the time slice expires, the timer raises a clock interrupt; the scheduler stops the running process, moves it to the tail of the ready queue, and allocates the CPU to the new head of the queue.

The efficiency of time slice rotation depends heavily on the size of the time slice:

  • Because process switching requires saving the old process's state and loading the new one's, a time slice that is too small causes switches to happen too frequently, and too much time is spent on switching overhead.
  • If the time slice is too long, real-time responsiveness cannot be guaranteed.
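A sketch of one possible round-robin loop (all jobs are assumed to arrive at t=0; the `(name, burst)` format is illustrative):

```python
from collections import deque

def round_robin(jobs, quantum):
    # jobs: list of (name, remaining_burst); returns (name, units_run) slices
    # in the order they execute.
    queue = deque(jobs)
    slices = []
    while queue:
        name, burst = queue.popleft()       # head of the ready queue gets the CPU
        run = min(quantum, burst)           # run for at most one time slice
        slices.append((name, run))
        if burst > run:                     # unfinished: back to the tail
            queue.append((name, burst - run))
    return slices
```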

5. Priority scheduling

Assign a priority to each process and schedule according to priority.

To prevent low-priority processes from never being scheduled, the priority of a waiting process can be increased as it waits (aging).
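One way to sketch aging (the +1-per-round increment, the reset-after-running rule, and the name tie-break are all illustrative policy choices, not from the notes):

```python
def schedule_with_aging(base, rounds):
    # base: {name: base_priority}, larger number = higher priority.
    # Each round the highest-priority process runs one slice; every waiting
    # process's effective priority rises by 1 so it cannot starve forever.
    eff = dict(base)
    order = []
    for _ in range(rounds):
        run = max(eff, key=lambda n: (eff[n], n))  # deterministic tie-break by name
        order.append(run)
        for n in eff:
            if n != run:
                eff[n] += 1                        # aging: waiting raises priority
        eff[run] = base[run]                       # reset after running
    return order
```

With base priorities {"hi": 5, "lo": 1}, the low-priority process eventually catches up and gets a slice instead of starving.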

6. Multi-level feedback queue

Suppose a process needs 100 time slices to finish. Under pure time slice rotation, that costs 100 context switches.

The multi-level feedback queue is designed for processes that need many consecutive time slices. It sets up multiple queues, each with a different time slice size. If a process does not finish within its slice in the first queue, it is moved to the next queue.

Each queue has a different priority, and the topmost queue has the highest. A process in a given queue is scheduled only when every higher-priority queue is empty.

This scheduling algorithm can be regarded as a combination of the time slice rotation scheduling algorithm and the priority scheduling algorithm.
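The idea can be sketched as follows (the per-level quanta, the demote-by-one-level rule, and the job format are assumptions for illustration):

```python
from collections import deque

def mlfq_trace(jobs, quanta=(1, 2, 4)):
    # jobs: list of (name, burst), all arriving at t=0.
    # Queue i has time slice quanta[i]; an unfinished job drops one level,
    # and the bottom queue keeps recycling its jobs.
    queues = [deque() for _ in quanta]
    for job in jobs:
        queues[0].append(job)                 # every job starts at the top level
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, burst = queues[level].popleft()
        run = min(quanta[level], burst)
        trace.append((name, level, run))
        if burst > run:                       # slice exhausted: demote one level
            queues[min(level + 1, len(quanta) - 1)].append((name, burst - run))
    return trace
```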

7. High response ratio priority (HRRN) scheduling algorithm

This algorithm mainly balances short jobs against long jobs.

Each time scheduling is performed, the response ratio of every candidate job is computed first, and the job with the highest response ratio is dispatched. The formula is:

Response ratio = (waiting time + required service time) / required service time

From the above formula, we can find:

  • If the "waiting time" of two processes is the same, the shorter the "required service time", the higher the "response ratio", so that the process with a short job is easily selected to run;
  • If the "required service time" of two processes is the same, the longer the "waiting time", the higher the "response ratio", which takes into account long job processes, because the response ratio of the process can increase as the waiting time increases. When it waits long enough, its response ratio can rise to a high level, giving it a chance to run;
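The two bullet points above can be checked with a few lines of Python (the job-tuple format is illustrative):

```python
def pick_highest_response_ratio(jobs):
    # jobs: list of (name, waiting_time, required_service_time).
    # Response ratio = (waiting time + required service time) / required service time.
    return max(jobs, key=lambda j: (j[1] + j[2]) / j[2])
```

A long job with a long wait (ratio (30+10)/10 = 4) beats a fresh short job (ratio (2+1)/1 = 3), while the short job wins when both waits are small.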

Various indicators in process scheduling:

CPU utilization

  • CPU utilization: refers to the proportion of CPU busy time to total time
  • Utilization = busy time/total time

System throughput

  • System throughput: the number of jobs completed per unit time
  • System throughput = total number of jobs completed / total time spent

Turnaround time

  • Turnaround time: the time interval from when a user job is submitted to the system to when the job execution is completed
    • The time the job waits on the external storage backup queue for job scheduling
    • The time a process waits on the ready queue for process scheduling
    • The execution time of the process on the CPU
    • The time the process waits for the I/O operation to complete
  • Turnaround time = job completion time – job submission time 
  • Average turnaround time = sum of turnaround times for each job/number of jobs 
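As a tiny worked example of the two formulas above (the times are made up):

```python
def turnaround_stats(jobs):
    # jobs: list of (submit_time, finish_time) pairs.
    # Turnaround time = job completion time - job submission time.
    times = [finish - submit for submit, finish in jobs]
    return times, sum(times) / len(times)      # per-job values and the average
```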

Waiting time

Refers to the sum of the time a process/job is waiting for a processor or I/O device to serve it . The longer the waiting time, the lower the user satisfaction.
 

Response time

Response time: The time from when the user submits the request to the first response.

Page replacement algorithm

1. Optimal replacement method (OPT)

The basic idea of the optimal page replacement algorithm is to evict the page that will not be accessed for the longest time in the future. This guarantees the lowest possible page fault rate.

It cannot be implemented in a real system, because the time until each page's next access cannot be predicted.

In practice, OPT serves as a yardstick: the closer another algorithm's page fault rate comes to OPT's, the more efficient that algorithm is.
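A sketch of the OPT victim choice (the function name and list-based lookahead are illustrative; a real system cannot know `future_refs`):

```python
def opt_victim(frames, future_refs):
    # Evict the resident page whose next access lies farthest in the future;
    # a page that is never accessed again is the ideal victim.
    def next_use(page):
        return future_refs.index(page) if page in future_refs else float("inf")
    return max(frames, key=next_use)
```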

2. First-in-first-out replacement algorithm (FIFO)

Since we cannot predict how long it will be before a page is next accessed, we can instead evict the page that has been resident in memory the longest. Implementation: link the pages into a queue in the order they were brought into memory; when a page must be swapped out, evict the page at the head of the queue. The maximum length of the queue equals the number of memory blocks the system has allocated to the process.

Belady anomaly: when the number of physical blocks allocated to a process increases, the number of page faults may increase instead of decreasing.
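A small FIFO fault counter makes the anomaly easy to reproduce (sketch; the reference string 1 2 3 4 1 2 5 1 2 3 4 5 is the standard textbook example):

```python
from collections import deque

def fifo_faults(refs, nframes):
    # Count page faults under FIFO replacement with nframes physical blocks.
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # hit: nothing to do
        faults += 1
        if len(frames) == nframes:            # memory full: evict the oldest page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults
```

Here 3 frames give 9 faults but 4 frames give 10: more memory, yet more faults.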

3. Least recently used replacement algorithm (LRU)

When a page fault occurs, the page that has not been accessed for the longest time is selected for replacement . In other words, the algorithm assumes that pages that have not been used for a long time are likely to not be used for a long time in the future.

Implementation: give each page a page table entry whose access field records the time t elapsed since the page was last accessed. When a page must be evicted, choose the page with the largest t, i.e., the one unused for the longest time. (This requires special hardware support; although the algorithm performs well, it is hard to implement and has high overhead.)

LRU has better performance, but requires hardware support for registers and stacks. LRU is a stack algorithm. It can be theoretically proven that Belady anomalies are impossible to occur in stack algorithms.
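A software sketch of LRU fault counting (an `OrderedDict` stands in for the hardware recency stack); on the Belady reference string, adding a frame never increases faults:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    # Count page faults under LRU; the OrderedDict keeps pages in recency order.
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # hit: now the most recently used
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)        # evict the least recently used page
        frames[page] = True
    return faults
```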

4. Clock replacement algorithm (CLOCK)

All pages are kept in a circular linked list arranged like a clock face, with a clock hand pointing to the oldest page.

When a page fault occurs, the algorithm first examines the page the clock hand points to:

  • If its access bit is 0, evict that page, insert the new page in its place, and advance the hand one position;

  • If its access bit is 1, clear the bit and advance the hand one position. Repeat this process until a page with an access bit of 0 is found;
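One pass of the hand can be sketched like this (a list plus an index instead of a real circular linked list; the return format is illustrative):

```python
def clock_evict(ref_bits, hand):
    # ref_bits: access bits of the pages on the "clock face"; hand: current index.
    # Returns (index of the page to evict, updated bits, new hand position).
    bits = list(ref_bits)
    while True:
        if bits[hand] == 0:                   # second chance spent: evict this page
            return hand, bits, (hand + 1) % len(bits)
        bits[hand] = 0                        # access bit 1: give a second chance
        hand = (hand + 1) % len(bits)
```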

5. Improved clock replacement algorithm

The simple clock replacement algorithm considers only whether a page has been accessed recently. In fact, if an evicted page has not been modified, there is no need to perform I/O to write it back to external storage; only evicted pages that have been modified must be written back.

Therefore, besides whether a page has been accessed recently, the operating system should also consider whether the page has been modified. All else being equal, unmodified pages should be evicted first to avoid the I/O. This is the idea of the improved clock replacement algorithm. Modified bit = 0 means the page has not been modified; modified bit = 1 means it has.

6. Least frequently used algorithm (LFU)

When a page fault occurs, the page with the fewest accesses is evicted. Give each page an access counter; every time the page is accessed, its counter increases by 1. On a page fault, the page with the smallest counter value is eliminated.

The LFU algorithm considers only frequency, not recency: a page accessed heavily in the past but not at all recently still keeps its high count. A common remedy is to periodically decay the access counts.
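A sketch of LFU fault counting (the page-id tie-break is an arbitrary choice for determinism; counts here are never decayed, which is exactly the weakness described above):

```python
from collections import Counter

def lfu_faults(refs, nframes):
    # Count page faults under LFU: evict the resident page with the fewest accesses.
    counts, frames, faults = Counter(), set(), 0
    for page in refs:
        counts[page] += 1                     # the per-page access counter
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            victim = min(frames, key=lambda p: (counts[p], p))
            frames.discard(victim)            # least-used page is eliminated
        frames.add(page)
    return faults
```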

7. Summary

  • OPT: evict the page that will not be accessed for the longest time in the future. Lowest page fault rate and best performance, but impossible to implement.
  • FIFO: evict the page that entered memory earliest. Simple to implement, but performance is poor and the Belady anomaly may occur.
  • LRU: evict the page that has gone unaccessed for the longest time. Very good performance, but requires hardware support and has high overhead.
  • CLOCK (NRU): scan the pages cyclically, clearing the access bit of each page passed over and evicting the first page whose access bit is already 0; if no victim is found in the first round, scan again. Simple and low-overhead, but ignores whether a page was modified.
  • Improved CLOCK (improved NRU): writing each page as (access bit, modified bit), round 1 evicts a (0,0) page; round 2 evicts a (0,1) page, clearing access bits while scanning; round 3 evicts (0,0); round 4 evicts (0,1). Low overhead and good performance.

Several common disk scheduling algorithms

The purpose of the disk scheduling algorithm is very simple, which is to improve disk access performance , usually by optimizing the order of disk access requests.

Seek time is the most time-consuming part of disk access . If the request sequence is optimized properly, some unnecessary seek time can be saved, thereby improving disk access performance.

1. First come, first served

Scheduled in the order of disk requests .

The advantages are fairness and simplicity. The disadvantage is also obvious, because no optimization has been done to seek, so the average seek time may be longer.

2. Shortest seek time first

Requests closest to the current head position (shortest seek time) are serviced first.

Although the average seek time is relatively low, it is not fair: if newly arriving requests are always closer to the head than an older waiting request, that older request can wait forever, i.e., it starves. Starvation arises because the head keeps moving back and forth within a small area.
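SSTF can be sketched as a greedy loop (the track numbers in the example are made up):

```python
def sstf_order(start, requests):
    # Greedy: always service the pending request nearest the current head position.
    pending, head, order = list(requests), start, []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest                        # the head moves to the serviced track
    return order
```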

3. Scanning algorithm

The disk head moves in one direction, servicing all outstanding requests along the way, and does not change direction until it reaches the last track in that direction. This is the SCAN (elevator) algorithm.

Optimization: the head moves only as far as the farthest pending request and then immediately reverses direction (this variant is known as LOOK).
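The servicing order of SCAN, sweeping toward higher tracks first, can be sketched as follows (this lists only the requests serviced; a pure SCAN head would also travel on to the last track before reversing):

```python
def scan_order(start, requests):
    # Sweep upward from the head, servicing requests in ascending order,
    # then reverse and service the remaining requests in descending order.
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + down
```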

4. Loop scanning algorithm

It always scans in the same direction; after reaching the end, the head returns to the starting side without servicing any requests on the way back. The characteristic of this algorithm is that requests are serviced only while the head moves in one direction (this is circular SCAN, or C-SCAN).

Optimization: the head moves only to the farthest pending request and then immediately returns (the C-LOOK variant).
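The circular variant differs from SCAN only in the second half of the sweep (sketch; after the head jumps back, the requests below the starting position are serviced in ascending order again):

```python
def cscan_order(start, requests):
    # Service upward from the head; the return trip services nothing, so the
    # remaining (lower-numbered) requests are then handled in ascending order.
    up = sorted(r for r in requests if r >= start)
    wrapped = sorted(r for r in requests if r < start)
    return up + wrapped
```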


Origin blog.csdn.net/shisniend/article/details/131863492