Process Scheduling: Confessions of a Scheduler

I am a process scheduler.

My job is to schedule all the processes in the computer and allocate CPU resources to them.

1. The era of batch processing

Back then, when the operating system created me, it only wanted me to use the FCFS scheduling algorithm to keep processes in order. But my later development went far beyond its imagination.

1.1 FCFS

The so-called FCFS is "First Come, First Served": processes are queued in the order they enter memory. Whenever the process on the CPU finishes or blocks, I select the process at the front of the queue and send it to the CPU to execute.

Take five processes, A through E, that entered memory in that order. According to the FCFS algorithm, I send them to the CPU in the order A, B, C, D, E.
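If I were to sketch my FCFS rule in code, it would look roughly like this (a minimal sketch; the burst times below are made up purely for illustration):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Proc:
    name: str
    burst_time: int  # how long the process needs the CPU (illustrative values)

# Processes queued in the order they entered memory.
ready_queue = deque([Proc("A", 3), Proc("B", 6), Proc("C", 4),
                     Proc("D", 5), Proc("E", 2)])

def fcfs_pick(queue):
    """FCFS: whenever the CPU frees up, run whoever has waited longest."""
    return queue.popleft() if queue else None

while ready_queue:
    p = fcfs_pick(ready_queue)
    print(f"running {p.name} for {p.burst_time} time units")
```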

This algorithm sounds simple and fair, but it didn't last long before I received a complaint from a short process: "Last time a long process queued in front of me and took a full 200 seconds to run. My own job only needed 1 second, yet I had to wait all that time for it. It just wasn't worth it."

Thinking it over carefully, the FCFS algorithm really does have this flaw: short processes can wait far too long to respond, which makes for a poor interactive experience.

So I decided to replace the scheduling algorithm.

1.2 SPN

The algorithm I designed this time is called "Shortest Process Next" (SPN). Each time the CPU becomes free, I select the process with the shortest estimated service time, which in effect moves short processes to the front of the queue.
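A sketch of the selection rule, assuming each process carries a user-supplied estimate of its service time (the field names and values here are my own illustration):

```python
from dataclasses import dataclass

@dataclass
class Proc:
    name: str
    estimated_time: int  # estimated service time supplied with the job

def spn_pick(ready_list):
    """SPN: pick the process with the shortest estimated service time.
    Non-preemptive: the choice is made only when the CPU becomes free."""
    if not ready_list:
        return None
    shortest = min(ready_list, key=lambda p: p.estimated_time)
    ready_list.remove(shortest)
    return shortest

ready = [Proc("long", 200), Proc("short", 1)]
print(spn_pick(ready).name)  # -> "short", even though "long" arrived first
```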

This time, short processes are well taken care of and the average response time drops sharply; the operating system and I are both satisfied.

But the long processes protested: short processes keep cutting in line, so long processes often get no CPU time at all, a phenomenon known as "starvation".

Calls to abolish the SPN algorithm grew louder and louder.

This is a serious problem. FCFS may have long response times, but every process eventually gets its turn on the CPU. SPN is different: if short processes keep joining the queue, a long process may never get a chance to execute at all. That is unacceptable.

So the shortest-process-first approach needs improvement. Is there a way to take care of both short and long processes?

1.3 HRRN

After discussing it with the operating system, we decided to consider two attributes of a process together: waiting time and service time. A process that has waited a long time and needs only a short service time (that is, a short process) is more likely to be selected.

To make this quantitative, we came up with a formula: Response Ratio = (Waiting Time + Service Time) / Service Time. The process with the highest response ratio runs first. We call this "Highest Response Ratio Next" (HRRN).
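A small sketch of the rule, with made-up waiting and service times (and my own field names) just to show how the ratio favors both short jobs and long-waiting jobs:

```python
def response_ratio(waiting_time, service_time):
    # HRRN: response ratio = (waiting time + service time) / service time.
    # It starts at 1.0 on arrival and grows the longer a process waits.
    return (waiting_time + service_time) / service_time

# A short job that has waited 10s:  (10 + 1) / 1     = 11.0  -> picked first
# A long job that has waited 100s:  (100 + 200) / 200 = 1.5
print(response_ratio(10, 1), response_ratio(100, 200))

def hrrn_pick(ready_list, now):
    """Pick the ready process with the highest response ratio."""
    best = max(ready_list,
               key=lambda p: response_ratio(now - p.arrival_time, p.service_time))
    ready_list.remove(best)
    return best
```

Because the ratio keeps growing while a process waits, even a very long job eventually beats newly arrived short jobs, which is what prevents starvation.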

This algorithm was well received by both long and short processes. My workload did increase (before every scheduling decision I have to recalculate the response ratio of all waiting processes), but for the sake of fairness it is all worth it.

2. The era of concurrency

A new era has arrived.

With the popularity of computers and the huge growth in individual users came the need for concurrency, that is, running multiple programs at once. This stumped me: how can I run multiple programs with only one processor?

Fortunately, the CPU gave me a hint: "My computing speed is now extremely fast. Why not take advantage of that and create a kind of pseudo-parallelism?"

"Pseudo-parallel? What do you mean"

"It looks like parallel, but it's actually serial. Each process alternates my resources for a short period of time, but to humans, the processes appear to be running 'simultaneously'. "

It dawned on me.

2.1 RR

Prompted by the CPU, I quickly worked out a new scheduling algorithm: the time-slice rotation algorithm, better known as Round Robin (RR).

In this algorithm, every process takes turns using the CPU. When a process starts running I set a timer for it; if the timer expires (or the process performs a blocking operation), the process is forced off the CPU and I switch to the next one. As for choosing the next process, plain FCFS on the ready queue is enough.
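Roughly how the loop works, as a sketch (the time-slice value and the tick-based simulation are my own simplification, not a real kernel loop):

```python
from collections import deque
from dataclasses import dataclass

TIME_SLICE = 4  # arbitrary units; real schedulers tune this carefully

@dataclass
class Proc:
    name: str
    remaining: int  # CPU time still needed

def round_robin(ready_queue):
    """RR: run the head of the queue for at most one time slice,
    then put it at the back if it still has work left."""
    while ready_queue:
        p = ready_queue.popleft()           # FCFS choice of the next process
        run = min(TIME_SLICE, p.remaining)
        p.remaining -= run                  # simulate running for `run` units
        print(f"{p.name} ran for {run}, {p.remaining} left")
        if p.remaining > 0:
            ready_queue.append(p)           # timer expired: back of the line

round_robin(deque([Proc("A", 10), Proc("B", 3), Proc("C", 7)]))
```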

A new algorithm inevitably brings new problems. My question now is: how long should the time slice be?

Intuitively, the shorter the time slice, the more processes can get a turn in a given period. But the CPU pointed out that switching processes burns a lot of instruction cycles, so if the time slice is too short, a large share of CPU time is wasted on context switches. If it is too long, the response of short interactive commands becomes sluggish. So the right length depends on the typical interaction time (which sounds like a non-answer, but at least it gives a criterion).
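A back-of-the-envelope way to see the trade-off, assuming (purely for illustration) that each context switch costs about 1 ms:

```python
context_switch_cost = 1  # ms per switch (illustrative assumption)

for time_slice in (2, 4, 10, 100):  # candidate slice lengths in ms
    # Fraction of CPU time spent switching instead of doing useful work.
    overhead = context_switch_cost / (time_slice + context_switch_cost)
    print(f"slice {time_slice:>3} ms -> {overhead:.0%} of CPU time lost to switching")
```

With these made-up numbers, a 2 ms slice wastes about a third of the CPU on switching, while a 100 ms slice wastes about 1% but makes interactive commands feel slow.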

At this stage my workload grew enormously. I used to switch programs once every ten-plus seconds; now I switch dozens of times per second.

2.2 VRR

The round-robin algorithm seems fair: every process gets the same time slice. But is it really?

The I/O-intensive processes didn't think so. One of them said to me: "Brother scheduler, round robin doesn't really take care of processes like us! We usually hit a blocking operation before our time slice is even half used, and off the CPU we go. Then we sit in the blocking queue, often for a long time, and when the blocking operation finally finishes we have to wait in the ready queue all over again. The processor-intensive processes end up using most of the CPU time, our performance suffers, and our response time can't keep up."

Considering the needs of these processes, I decided to create a new auxiliary queue for them. A process that has just been unblocked enters this auxiliary queue, and processes in the auxiliary queue are selected preferentially when I schedule.

This is the " Virtual Round Robin" (VRR).

Later performance measurements showed that this approach really is better than plain round robin. I'm quite proud of it.

2.3 Priority Scheduling

One day, the operating system suddenly came to me and said mysteriously: "Scheduler, you know I have to provide services to the whole system, but there have been so many user processes lately that my service processes sometimes can't get a chance to respond. I'm a little worried about the impact on system stability."

As soon as I heard this, I knew it was serious. We can't have the system becoming unstable; the scheduling algorithm had to change again!

Since the operating system's own services need to be guaranteed enough running resources, let them have the highest CPU priority.

The priority scheduling algorithm was born.

I announced a rule to everyone: each process is given a priority, and you may set the value according to your own situation, but no user process is allowed to have a higher priority than a kernel process.

When switching processes, I select one from the priority 1 queue. If that queue is empty, I look at the priority 2 queue, and so on.

Of course, to ensure that low-priority processes do not starve, I gradually raise the priority of processes that have been waiting a long time.
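One possible sketch of this scheme, with a simple aging rule to prevent starvation (the number of levels, the wait-time threshold, and the per-process fields are all my own illustrative assumptions):

```python
from collections import deque

NUM_LEVELS = 8                        # queues[0] is the highest priority
queues = [deque() for _ in range(NUM_LEVELS)]

def priority_pick():
    """Scan from the highest-priority queue downward and take the first
    process found; lower-priority queues only run when higher ones are empty."""
    for q in queues:
        if q:
            return q.popleft()
    return None

def age_waiting_processes():
    """Anti-starvation: periodically promote processes that have waited too
    long into the next higher-priority queue."""
    for level in range(1, NUM_LEVELS):            # the top level can't go higher
        promoted = [p for p in queues[level] if p.wait_time > 100]
        for p in promoted:
            queues[level].remove(p)
            queues[level - 1].append(p)
```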

With this algorithm I am busier than ever: not only do I switch processes constantly, I also have to adjust priorities dynamically. Perhaps this is what they mean by "with great power comes great responsibility."

But I know that it is because of my existence that humans can run multiple programs on their computers, and that makes me proud.

I hope you got something out of my story.

Thanks for reading. See you next time!

Disclaimer: Original article, unauthorized reproduction is prohibited
