Operating Systems 6: Job and Process Scheduling for the Processor

Table of contents

1. The levels of processor scheduling

(1) High Level Scheduling

(2) Low Level Scheduling

(3) Intermediate Scheduling

2. The goals of the processor scheduling algorithm

Goals of a batch system

3. Job and job scheduling

(1) Jobs in the batch system

(2) First-come first-served (FCFS) scheduling algorithm

(3) Shortest-job-first (SJF) scheduling algorithm

(4) Priority-scheduling algorithm (PSA)

(5) High Response Ratio Priority Scheduling Algorithm (Highest Response Ratio Next, HRRN)

4. Process scheduling

(1) Tasks and mechanisms of process scheduling

(2) The method of process scheduling

(3) Round-robin scheduling algorithm

(4) Priority scheduling algorithm

(5) Multi-queue scheduling algorithm

(6) Multilevel feedback queue scheduling algorithm

(7) Scheduling algorithm based on the principle of fairness


       In a multiprogramming system, scheduling is essentially a form of resource allocation, and processor scheduling allocates the processor itself. A processor scheduling algorithm is the allocation algorithm prescribed by the processor allocation policy. // Resource allocation

1. The levels of processor scheduling

(1) High Level Scheduling

        High-level scheduling is also called long-term scheduling or job scheduling, and its scheduling objects are jobs. Its main function is to decide, according to some algorithm, which jobs in the backup queue on external storage should be loaded into memory; it then creates processes for them, allocates the necessary resources, and puts them into the ready queue. High-level scheduling is mainly used in multiprogrammed batch systems; time-sharing and real-time systems do not have it. // Load the job into memory

(2) Low Level Scheduling

        Low-level scheduling is also called process scheduling or short-term scheduling, and the objects it schedules are processes (or kernel-level threads). Its main function is to decide, according to some algorithm, which process in the ready queue should get a processor; the dispatcher then assigns the processor to the selected process. Process scheduling is the most basic type of scheduling and must be configured in all three types of OS: multiprogrammed batch, time-sharing, and real-time. // Determine which process gets the CPU

(3) Intermediate Scheduling

        Intermediate scheduling is also called memory scheduling. Its main purpose is to improve memory utilization and system throughput. To this end, processes that temporarily cannot run are swapped out to external storage to wait; their state is then called the ready-on-disk state (or suspended state). When such a process is ready to run again and memory has some free space, intermediate scheduling decides to bring it back into memory, changes its state to ready, and hangs it on the ready queue to wait. // Intermediate scheduling is actually the swap function in memory management

        Among the above three types of scheduling, process scheduling runs the most frequently; in a time-sharing system a process scheduling decision is typically made every 10~100 ms, so it is called short-term scheduling. To avoid the scheduling itself taking up too much CPU time, the process scheduling algorithm should not be too complex. Job scheduling usually occurs when a batch of jobs has finished running and exited the system and a new batch needs to be loaded into memory. Its cycle is long, about once every few minutes, so it is called long-term scheduling; since it runs infrequently, the job scheduling algorithm is allowed to spend more time. The frequency of intermediate scheduling falls between the two, hence its name. // According to scheduling frequency, it is divided into short-, medium-, and long-term scheduling

2. The goals of the processor scheduling algorithm

        (1) Resource utilization: to improve the system's resource utilization, the processors and all other resources in the system should be kept as busy as possible. The most important of these, processor utilization, can be calculated as: CPU utilization = effective CPU working time / (effective CPU working time + CPU idle time).

        (2) Fairness: all processes should obtain a reasonable share of CPU time, and no process should starve. Fairness is relative: processes of the same type should receive the same service, but processes of different types, differing in urgency or importance, should receive different service.

        (3) Balance. Since a system may contain many kinds of processes, some compute-bound and some I/O-bound, the scheduling algorithm should try to keep the usage of system resources balanced so that both the CPU and the various external devices stay busy. // Balance the use of CPU and external devices

        (4) Policy enforcement. Established policies, including security policies, must be carried out faithfully whenever required, even if doing so delays some work.

Goals of a batch system

  1. Average turnaround time is short. Turnaround time is the interval from the moment a job is submitted to the system to the moment it completes (job turnaround time). It comprises four parts: the time the job waits in the backup queue on external storage for job scheduling, the time the process waits in the ready queue for process scheduling, the time the process executes on the CPU, and the time the process waits for I/O operations to complete. The last three may occur multiple times during the processing of one job.
  2. System throughput is high . Since throughput refers to the number of jobs completed by the system per unit time, it is related to the average length of batch jobs. In fact, if you simply want to obtain high system throughput, you should choose as many short jobs as possible to run.
  3. Processor utilization is high . For large and medium-sized computers, the CPU is very expensive, making the utilization rate of the processor a very important indicator for measuring system performance; and the scheduling method and algorithm play a very important role in the utilization rate of the processor. If it is purely to increase the utilization rate of processors, as many jobs as possible should be selected to run with a large amount of computation.

        It can be seen from the above that there is a certain contradiction between these requirements.
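        The turnaround-time metric above can be made concrete with a small calculation. This is an illustrative sketch with made-up job data; the function name and the triple format are assumptions, not from the text.

```python
# Sketch: average turnaround time for a batch of jobs (hypothetical data).
# Turnaround time = completion time - submission time; the weighted
# turnaround time divides that by the job's actual service time.

def turnaround_stats(jobs):
    """jobs: list of (submit, complete, service) time triples."""
    turnarounds = [c - s for s, c, _ in jobs]
    weighted = [(c - s) / svc for s, c, svc in jobs]
    n = len(jobs)
    return sum(turnarounds) / n, sum(weighted) / n

# Three illustrative jobs: (submit, complete, service)
jobs = [(0, 4, 4), (1, 7, 3), (2, 12, 5)]
avg_turnaround, avg_weighted = turnaround_stats(jobs)
# avg_turnaround = (4 + 6 + 10) / 3; avg_weighted = (1 + 2 + 2) / 3
```

A scheduler that lowers the average (weighted) turnaround time under such a measurement is, by the batch-system goals above, the better one.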

3. Job and job scheduling

        In a multiprogrammed batch system, a job is a relatively independent unit of work submitted by a user to the system. The operator enters the user's job into disk storage through the appropriate input device and keeps it in a backup job queue; the job scheduler later transfers it from external storage into memory. // Load the program from external storage to internal memory

(1) Jobs in the batch system

        1 - Jobs and job steps

        Job: a job includes not only the usual programs and data but also a job description, according to which the system controls the running of the program. In a batch system, the job is the unit in which programs are transferred from external storage into memory. // A job is a broader concept than a program

        Job step: during its run, a job usually goes through several relatively independent but interrelated sequential processing steps to obtain its result. Each of these steps is called a job step; the steps are related in that the output of one job step often serves as the input of the next. For example, a typical job can be divided into a "compile" step, a "link" step, and a "run" step. // compile -> link -> run

        2 - Job Control Block (JCB)

        To manage and schedule jobs, a multiprogrammed batch system sets up a job control block (JCB) for each job. The JCB is the mark of the job's existence in the system and stores all the information the system needs for job management and scheduling.

        A JCB usually contains: job identifier, user name, user account, job type (CPU-bound, I/O-bound, batch, terminal), job status, scheduling information (priority, job running time), resource requirements (estimated running time, required memory size, etc.), resource usage, and so on.
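        The fields listed above can be pictured as a simple record. The following is a minimal sketch; the field names and types are illustrative choices, not taken from any particular OS.

```python
# Sketch of a job control block (JCB) holding the fields listed above.
# All names and defaults here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class JCB:
    job_id: str
    user: str
    job_type: str            # e.g. "CPU-bound", "I/O-bound", "batch"
    status: str = "backup"   # backup -> running -> completed
    priority: int = 0
    estimated_runtime: int = 0
    memory_required: int = 0

# A job entering the system gets a JCB and starts in the backup state.
jcb = JCB(job_id="J1", user="alice", job_type="CPU-bound", priority=5)
```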

        Whenever a job enters the system, the "job registration" program creates a JCB for it, and the job is then placed in the backup queue corresponding to its type to wait for scheduling. The scheduler selects jobs by some scheduling algorithm, and scheduled jobs are loaded into memory. While the job runs, the system controls it according to the information in the JCB and the job description. When a job finishes executing and enters the completed state, the system reclaims the resources allocated to it and deletes its JCB. // The life cycle of a JCB, similar to that of a process

        3 - Three phases and three states of a job run

        From entering the system to finishing its run, a job usually goes through three stages: admission, running, and completion, with corresponding job states "backup", "running", and "completed".

  • Admission stage: the operator enters the job submitted by the user onto the hard disk through some input method or the SPOOLing system, creates a JCB for it, and puts it into the job backup queue. The job is then in the "backup" state.
  • Running stage: when the job is selected by job scheduling, the system allocates the necessary resources, creates a process for it, and puts the process into the ready queue. A job is in the "running" state from the time it first becomes ready until it finishes running.
  • Completion stage: when the job completes, or terminates early because of an abnormal situation, it enters the completion stage, and its state becomes "completed". The system's "terminate job" program then reclaims the JCB and all resources allocated to the job, and writes the job's run results to an output file.

        In a batch processing system, after a job enters the system, it always resides in the job backup queue of the external storage first, so job scheduling is required to load them into the memory in batches. // external storage -> internal memory

        In a time-sharing system, to achieve timely response, commands or data entered by the user at the keyboard are sent directly into memory, so the job scheduling mechanism above is unnecessary. Some admission control measure is still needed, however, to limit the number of users entering the system: if the system can still handle more tasks, it accepts an authorized user's request; otherwise it refuses.

        In real-time systems, there is also no need for job scheduling, but admission control must be in place.

(2) First-come first-served (FCFS) scheduling algorithm

        FCFS is the simplest scheduling algorithm and can be used for both job scheduling and process scheduling. When used for job scheduling, the system schedules jobs in the order of their arrival, i.e., it gives priority to the jobs that have waited longest in the system, regardless of how long they need to execute: it selects the jobs at the front of the backup queue, loads them into memory, allocates resources, creates processes for them, and then puts them into the ready queue.

        When the FCFS algorithm is used for process scheduling, each scheduling decision selects the process at the head of the ready queue, assigns it the processor, and puts it into execution. The process runs until it completes or is blocked by some event, and only then does the process scheduler assign the processor to another process. // first in, first out
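        As a minimal sketch of the FCFS idea, the following serves hypothetical jobs strictly in arrival order (all job data is made up for illustration):

```python
# FCFS sketch: jobs are served strictly in the order they arrive.
# A job is a (name, arrival, service) triple; the result maps each
# job name to its (start, finish) times.

def fcfs(jobs):
    jobs = sorted(jobs, key=lambda j: j[1])  # order by arrival time
    clock, schedule = 0, {}
    for name, arrival, service in jobs:
        start = max(clock, arrival)          # CPU may sit idle until arrival
        clock = start + service
        schedule[name] = (start, clock)
    return schedule

sched = fcfs([("A", 0, 3), ("B", 1, 5), ("C", 2, 2)])
# A runs 0-3, B runs 3-8, C runs 8-10: C waits behind the longer B.
```

Note how the short job C must wait for the earlier, longer job B, which is exactly the weakness the SJF algorithm below addresses.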

        By the way, the FCFS algorithm is rarely used as the main scheduling algorithm in single-processor systems, but it is often used in combination with other scheduling algorithms to form a more effective scheduling algorithm. For example, multiple queues can be set in the system according to the priority of the process, one queue for each priority, and the scheduling of each queue is based on the FCFS algorithm.

(3) Shortest-job-first (SJF) scheduling algorithm

        In practice, short jobs (processes) account for a large proportion of the workload. To let them execute before long jobs, the shortest-job-first scheduling algorithm was developed. // Reduce the average waiting time of jobs

        1 - Shortest job first algorithm

        The SJF algorithm assigns priority according to job length: the shorter the job, the higher its priority. The length of a job is measured by the amount of time it requires to run. SJF can be used for both job scheduling and process scheduling. When used for job scheduling, it selects from the backup queue on external storage the jobs with the shortest estimated running times and loads them into memory to run first. // The main problem is that a job's running time cannot be known exactly
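        The non-preemptive form of SJF can be sketched as follows; the job data is hypothetical and the estimated running times are taken as given, sidestepping the estimation problem discussed below.

```python
# Non-preemptive SJF sketch: among the jobs that have already arrived,
# always run the one with the shortest estimated service time.
# A job is a (name, arrival, service) triple.

def sjf(jobs):
    pending = sorted(jobs, key=lambda j: j[1])   # by arrival time
    clock, schedule = 0, {}
    while pending:
        ready = [j for j in pending if j[1] <= clock]
        if not ready:                            # CPU idle: jump ahead
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])     # shortest service first
        pending.remove(job)
        name, _, service = job
        schedule[name] = (clock, clock + service)
        clock += service
    return schedule

sched = sjf([("A", 0, 7), ("B", 1, 2), ("C", 2, 4)])
# A starts alone at 0; then B (2) is chosen before C (4).
```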

        2 - Disadvantages of short job first algorithm

        Compared with the FCFS algorithm, the SJF scheduling algorithm has been significantly improved, but there are still shortcomings that cannot be ignored:

  • The runtime of the job must be known in advance. To use this algorithm, the running time of each job must be known beforehand, yet even programmers find it hard to estimate accurately. If the estimate is too low, the system may terminate the job when the estimated time expires even though it has not finished, so users generally over-estimate. // This is very troublesome
  • It is very unfavorable for long jobs, and the turnaround time of long jobs will increase significantly. What's more, the algorithm completely ignores the waiting time of jobs, which may make jobs wait too long and cause starvation .
  • With the SJF algorithm, human-computer interaction cannot be realized.
  • The scheduling algorithm does not consider the urgency of the job at all, so it cannot guarantee that the urgent job can be processed in time.

(4) Priority-scheduling algorithm (PSA)

        Neither the first-come-first-served algorithm nor the shortest-job-first algorithm has a notion of priority that reflects the urgency of a job.

        In the priority scheduling algorithm, a priority is assigned to each job externally according to its urgency, and the scheduler schedules jobs by priority.

        The priority scheduling algorithm can ensure that urgent jobs run first, and it can be used as a job scheduling algorithm or a process scheduling algorithm. When using this algorithm for job scheduling, the system selects several jobs with the highest priority from the backup queue and loads them into memory.

(5) High Response Ratio Priority Scheduling Algorithm (Highest Response Ratio Next, HRRN)

        In a batch processing system, the FCFS algorithm only considers the waiting time of the job, but ignores the running time of the job. The SJF algorithm is just the opposite, only considering the running time of the job, but ignoring the waiting time of the job.

        The high response ratio priority scheduling algorithm is a scheduling algorithm that considers both the waiting time of the job and the running time of the job . Therefore, short jobs are taken care of, and the waiting time of long jobs is not too long, thereby improving the performance of processor scheduling. // wait + run

        How is the high response ratio priority algorithm implemented?

        If a dynamic priority is introduced for each job and made to grow as its waiting time lengthens, then the priority of long jobs keeps rising while they wait, so after enough time they are guaranteed a chance to get the processor. This priority varies according to the rule: priority = (waiting time + required service time) / required service time.

        Since the sum of the waiting time and the service time is the system's response time for the job, this priority is equivalent to the response ratio Rp. Accordingly, the priority can be expressed as: Rp = (waiting time + required service time) / required service time = response time / required service time.

        It can be seen from the above formula:

  • If the waiting time of jobs is the same, the shorter the service time is, the higher the priority is, so similar to the SJF algorithm, it is beneficial to short jobs.
  • When the time required for service is the same, the priority of the job is determined by its waiting time, so this algorithm is similar to the FCFS algorithm.
  • The priority of long jobs can be increased as the waiting time increases, and processors can also be obtained when the waiting time is long enough.

        Therefore, the algorithm achieves a better compromise. However, when using this algorithm, it is necessary to calculate the response ratio before each scheduling, which will obviously increase the system overhead . // Compromise ideas are everywhere in computer systems
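        The response-ratio computation performed before each dispatch can be sketched directly from the formula above (the job data below is hypothetical):

```python
# HRRN sketch: before each dispatch, compute the response ratio
#   Rp = (waiting time + service time) / service time
# for every waiting job, and run the job with the highest ratio.

def hrrn_pick(clock, waiting_jobs):
    """waiting_jobs: list of (name, arrival, service); returns the
    name of the job that should run next."""
    def ratio(job):
        _, arrival, service = job
        wait = clock - arrival
        return (wait + service) / service
    return max(waiting_jobs, key=ratio)[0]

# At time 10: a long job has waited since 0, a short job arrived at 9.
pick = hrrn_pick(10, [("long", 0, 10), ("short", 9, 2)])
# long: (10+10)/10 = 2.0; short: (1+2)/2 = 1.5 -> the long job wins,
# showing how accumulated waiting time protects long jobs from starving.
```

This per-dispatch recomputation over the whole queue is precisely the extra overhead the paragraph above mentions.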

4. Process scheduling

        Process scheduling is an essential scheduling in OS. It is also the type of CPU scheduling that has the greatest impact on system performance.

(1) Tasks and mechanisms of process scheduling

        1 - Tasks scheduled by the process

  • Save the processor context. When scheduling, the processor context of the current process must first be saved, such as the program counter and the contents of the general-purpose registers. // Save the current process's data
  • Processes are selected according to an algorithm . The scheduler selects a process from the ready queue according to a certain algorithm, changes its state to the running state, and prepares to assign the processor to it. // select process
  • Assign the processor to the process. The dispatcher assigns the processor to the selected process: the processor-context information in that process's process control block is loaded into the corresponding processor registers, and control of the processor is handed to the process so that it resumes from where it last stopped. // Resume running

        2 - Process Scheduling Mechanism

        In order to realize process scheduling, the process scheduling mechanism should have the following three basic parts:

        Queuer. To improve the efficiency of process scheduling, all ready processes in the system are arranged in advance into one or more queues according to some strategy, so that the scheduler can find them as quickly as possible. Thereafter, whenever a process becomes ready, the queuer inserts it into the appropriate ready queue.

        Dispatcher. The dispatcher takes the process selected by the process scheduler out of the ready queue, performs a context switch from itself to the newly selected process, and assigns the processor to that process.

        Context switcher. When the processor is switched, two pairs of context-switching operations occur:

  • In the first pair of context switches, the OS saves the context of the current process, i.e., stores the contents of the processor registers into the corresponding fields of that process's process control block, and then loads the dispatcher's context so that the dispatcher can run. // Save (unload) the old context
  • In the second pair of context switches, the dispatcher's context is removed, and the CPU context of the newly selected process is loaded into the corresponding processor registers so that the new process can run. // Load the new context

        Performing a context switch requires executing many Load and Store instructions to save register contents; even on a modern computer, each context switch consumes time in which thousands of instructions could have been executed. For this reason, hardware methods now exist to reduce context-switch time: typically two (or more) sets of registers are provided, one set used by the processor in system state and the other by application programs. Under these conditions a context switch only needs to change a pointer so that it points to the current register set. // Use multiple register sets to reduce context-switch time

(2) The method of process scheduling

        The non-preemptive method has great limitations, and it is difficult to meet the needs of interactive jobs and real-time tasks. Therefore, preemption is introduced in process scheduling.

        1 - Nonpreemptive Mode

        With this scheduling method, once the processor is assigned to a process, that process keeps running; the CPU is never taken from the currently running process because of a clock interrupt or any other reason. Only when the process completes, or blocks because of some event, is the processor allocated to another process.

        When using non-preemptive scheduling, the factors that may cause process scheduling can be summarized as follows:

  • An executing process has finished running, or an event has occurred that prevents it from continuing.
  • A running process is suspended due to an I/O request
  • During process communication or synchronization, some primitive operation, such as Block primitive, is performed.

        The advantage of this scheduling method is that it is simple to implement and has low system overhead, making it suitable for most batch systems; but it cannot be used in time-sharing systems or in most real-time systems.

        2 - Preemptive Mode

        This scheduling method allows the scheduler to suspend an executing process according to a certain principle, and reassign the processor assigned to the process to another process .

        Preemption is widely used in modern OS because:

  • In batch processing systems, it can prevent a long process from occupying the processor for a long time, ensuring that the processor provides fairer service to all processes.
  • In the time-sharing system, it is possible to realize human-computer interaction only by adopting the preemptive method.
  • In real-time systems, preemption can meet the needs of real-time tasks.

        The preemption method is more complicated, and the system overhead required is also relatively large. "Preemption" is not an arbitrary act , but certain principles must be followed. The main principles are:

  • The priority principle refers to allowing a new process with a high priority to preempt the processor of the current process.
  • The principle of short process priority means that a newly arrived short process can preempt the processor of the current long process.
  • The time-slice principle: when processes run in turn by time slice and the currently executing process's time slice is used up, its execution is stopped and the scheduler is invoked again.

(3) Round-robin scheduling algorithm

        In the time-sharing system, the simplest and most commonly used is the round robin (RR) scheduling algorithm based on time slices . The algorithm adopts a very fair processor allocation method, that is, each process on the ready queue runs only one time slice at a time . If there are n processes on the ready queue, each process gets approximately 1/n of the processor time each time.

        1 - Fundamentals of the Rotational Method

        In the round-robin (RR) method, the system arranges all ready processes into a ready queue according to the FCFS strategy, and can be set to generate an interrupt at fixed intervals (e.g., every 30 ms) that activates the process scheduler, which allocates the CPU to the process at the head of the queue and lets it execute for one time slice. When that slice ends, the processor is assigned to the new head of the ready queue for another time slice. In this way, every process in the ready queue is guaranteed to obtain one time slice of processor time within a given period.

        2 - Process switch timing

        In the RR scheduling algorithm, when the process should be switched can be divided into two situations:

  • If a time slice has not been used up and the running process has completed , activate the scheduler immediately, delete it from the ready queue, schedule the process at the head of the ready queue to run, and start a new time slice.
  • When a time slice expires , the timer interrupt handler is activated. If the process has not finished running, the scheduler will send it to the end of the ready queue.
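        The two switch cases above can be sketched as a small simulation; processes here are hypothetical (name, service) pairs, and arrivals during the run are ignored for simplicity:

```python
from collections import deque

# Round-robin sketch covering both switch cases above: a process that
# finishes mid-slice leaves immediately, while one whose slice expires
# is moved to the tail of the ready queue.

def round_robin(procs, quantum):
    """procs: list of (name, service); returns {name: finish_time}."""
    queue = deque(procs)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:                  # slice used up: back to tail
            queue.append((name, remaining - run))
        else:                                # finished within the slice
            finish[name] = clock
    return finish

finish = round_robin([("P1", 3), ("P2", 5), ("P3", 2)], quantum=2)
# P3 fits in one slice once its turn comes; P1 and P2 cycle back.
```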

        3 - Determination of time slice size

        In the round-robin algorithm, the size of the time slice greatly affects system performance. A small time slice benefits short jobs, which can complete within it; however, a small slice also means frequent process scheduling and context switching, which undoubtedly increases system overhead. Conversely, if the time slice is chosen so long that every process can complete within a single slice, the RR algorithm degenerates into the FCFS algorithm and cannot meet the needs of short jobs and interactive users.

        A more desirable time slice size is slightly larger than the time required for a typical interaction , so that most interactive processes can be completed within a time slice, so that a small response time can be obtained.

        The figure below shows the effect on the average turnaround time when the time slices are q=1 and q=4 respectively.

        // In the above example, when q=4, each job can be executed within one time slice. So it will be better than q=1

(4) Priority scheduling algorithm

        In the time slice round-robin scheduling algorithm, the urgency of all processes in the default system is the same. But this is not the case. In order to meet the needs of the actual situation, priority is introduced into the process scheduling algorithm to form a priority scheduling algorithm.

        1 - Type of priority scheduling algorithm

        The priority process scheduling algorithm is to assign the processor to the process with the highest priority in the ready queue. The algorithm can be divided into the following two types:

  • Non-preemptive priority scheduling algorithm. Once the processor is assigned to the process with the highest priority in the ready queue, that process keeps executing until it completes or gives up the processor because of some event; only then can the system reassign the processor to another highest-priority process. // The running process is never preempted
  • Preemptive priority scheduling algorithm. The processor is assigned to the process with the highest priority and it is allowed to execute; but during its execution, as soon as another process with a higher priority appears, the scheduler reassigns the processor to the newly arrived highest-priority process. With this algorithm, whenever a new ready process i appears, its priority P(i) is compared with the priority P(j) of the executing process j: if P(j) >= P(i), process j continues to execute; but if P(j) < P(i), j is stopped immediately and a process switch puts i into execution. // Preemptive: every newly ready process triggers a comparison
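        The preemptive comparison just described can be sketched with a max-priority ready queue. This is an illustrative model (larger number = higher priority; `heapq` is a min-heap, so priorities are stored negated); the function and process names are assumptions.

```python
import heapq

# Preemptive priority sketch: a newly ready process preempts the
# running one only when its priority is strictly higher (P(new) > P(run)).

def dispatch(ready, running):
    """ready: heap of (-priority, name); running: (priority, name) or None.
    Returns the (priority, name) that should hold the CPU."""
    if not ready:
        return running
    top_prio = -ready[0][0]
    if running is None or top_prio > running[0]:
        if running is not None:
            # Preempted process goes back into the ready queue.
            heapq.heappush(ready, (-running[0], running[1]))
        neg, name = heapq.heappop(ready)
        return (-neg, name)
    return running

ready = []
heapq.heappush(ready, (-3, "low"))
running = dispatch(ready, None)       # "low" starts running
heapq.heappush(ready, (-7, "urgent"))
running = dispatch(ready, running)    # "urgent" preempts "low"
```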

        2 - Type of priority

        The key to the priority scheduling algorithm is: how to determine the priority of the process, and determine whether to use static priority or dynamic priority.

  • Static priority: The static priority is determined when the process is created and remains constant throughout the running of the process . The priority is represented by an integer within a certain range, such as an integer from 0 to 255, which is also called the priority number. There are three bases for determining the priority of a process: process type, resource requirements of the process, and user requirements (urgency). The static priority method is simple and easy to implement, and the system overhead is small, but it is not accurate enough, and it may happen that processes with low priority are not scheduled for a long time . // It is possible that the process is not scheduled or the thread is starved
  • Dynamic priority: when a process is created it is first given an initial priority, whose value then changes as the process runs or as its waiting time grows, in order to obtain better scheduling performance. For example, it can be stipulated that the priority of a process in the ready queue rises with its waiting time. If all processes start with the same priority, the process that entered the ready queue first gets the processor first, which is equivalent to the FCFS algorithm. If the ready processes start with different priorities, a process with a low initial priority will still obtain the processor after waiting long enough. With preemptive scheduling, if the priority of the running process is made to decrease with its running time, a long job can be prevented from monopolizing the processor.

(5) Multi-queue scheduling algorithm

        If the system has only a single ready queue of processes, the low-level scheduling algorithm is fixed and uniform and cannot satisfy the different requirements that different users place on the scheduling policy; in a multiprocessor system, the shortcomings of such a single scheduling mechanism are even more prominent. The multilevel queue scheduling algorithm can make up for this to a certain extent.

        This algorithm splits the single process ready queue into several, and permanently assigns processes of different types or properties to different ready queues. Each ready queue can use a different scheduling algorithm; the processes within one ready queue can be given different priorities, and the ready queues themselves can also be given different priorities. // Split into multiple queues

        Because the multilevel queue scheduling algorithm sets up multiple ready queues, a different scheduling algorithm can be applied to each, so the system can easily provide multiple scheduling policies according to the needs of different user processes.

(6) Multilevel feedback queue scheduling algorithm

        The multi-level feedback queue scheduling algorithm does not need to know the execution time of various processes in advance , and can better meet the needs of various types of processes, so it is currently recognized as a better process scheduling algorithm.

        1 - Scheduling mechanism

        The scheduling mechanism of the multilevel feedback queue algorithm can be described as follows:

        (1) Set up multiple ready queues. The system maintains several ready queues and gives each a different priority: the first queue has the highest priority, the second the next highest, and the priority of each remaining queue decreases in turn. The algorithm also assigns different time slices to the processes in different queues: the higher a queue's priority, the smaller its time slice. For example, the time slice of the second queue is twice that of the first, and in general the time slice of queue i+1 is twice that of queue i.

        [Figure: schematic diagram of the multilevel feedback queue algorithm]

        (2) Each queue adopts the FCFS algorithm. When a new process enters memory, it is first placed at the tail of the first queue and waits for scheduling according to the FCFS principle. When its turn comes, if it can complete within one time slice, it leaves the system. Otherwise, i.e., if it is still unfinished when its time slice expires, the scheduler moves it to the tail of the second queue; if it is again unfinished there, it is moved to the third queue, and so on. After the process has finally been demoted to the nth queue, it runs in round-robin (RR, time slice rotation) fashion within that queue. // The process keeps being demoted to lower queues

        (3) Schedule by queue priority. The scheduler first runs the processes in the highest-priority queue, and only when the first queue is empty does it schedule processes in the second queue; in general, processes in queue i are scheduled only when queues 1 through i-1 are all empty. If a new process enters any higher-priority queue while the processor is serving a process from queue i, the running process is immediately put back at the tail of queue i and the processor is assigned to the newly arrived higher-priority process.
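The mechanism in steps (1)–(3) can be simulated in a few lines, under simplifying assumptions: all jobs arrive at time zero (so preemption by new arrivals never triggers), there are 3 levels, the base quantum is 1, and each queue's quantum doubles at the next level, as in the text's example. The function name `mlfq` and these parameters are illustrative.

```python
from collections import deque

def mlfq(jobs, levels=3, base_quantum=1):
    """jobs: dict pid -> total service time. Returns completion order."""
    queues = [deque() for _ in range(levels)]
    remaining = dict(jobs)
    for pid in jobs:
        queues[0].append(pid)             # new arrivals enter queue 0
    finished = []
    while any(queues):
        # serve the highest-priority non-empty queue
        level = next(i for i, q in enumerate(queues) if q)
        pid = queues[level].popleft()
        quantum = base_quantum * (2 ** level)   # quantum doubles per level
        remaining[pid] -= quantum
        if remaining[pid] <= 0:
            finished.append(pid)          # completed within its slice
        else:
            # demote one level, or recycle round-robin at the bottom
            queues[min(level + 1, levels - 1)].append(pid)
    return finished

print(mlfq({"A": 1, "B": 4, "C": 2}))   # ['A', 'C', 'B']
```

Here A finishes in its first slice, while B and C are demoted and complete later with progressively larger quanta, matching the behavior described in step (2).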

(7) Scheduling algorithm based on the principle of fairness

        1 - Guaranteed scheduling algorithm

        The guarantee that the guaranteed scheduling algorithm makes to users is not priority of execution but an explicit performance guarantee, through which the algorithm achieves fairness in scheduling.

        A performance guarantee that is relatively easy to implement is fairness of processor allocation. If n processes of the same type are running in the system, then for fairness each process should receive 1/n of the processor time. To implement this fair scheduling algorithm, the system must have the following functions: // Ensure each process is allocated the CPU fairly

  • Track the processor time each process has consumed since its creation.
  • Compute the processor time each process is entitled to, i.e., the time elapsed since creation divided by n.
  • Compute each process's ratio of processor time obtained, i.e., the processor time it has actually consumed divided by the time it is entitled to.
  • Compare these ratios across processes. For example, process A's ratio is the lowest at 0.5, process B's is 0.8, process C's is 1.2, and so on.
  • The scheduler selects the process with the smallest ratio, assigns it the processor, and lets it run until its ratio exceeds that of the next-closest process.
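The bookkeeping steps above can be sketched as follows. For simplicity this assumes every process was created at the same moment, so the entitled share is the same for all; the function name `guaranteed_pick` and the sample numbers (which reproduce the 0.5 / 0.8 / 1.2 example) are illustrative.

```python
def guaranteed_pick(elapsed, used):
    """elapsed: wall-clock time since creation (same for all processes here);
    used: dict pid -> CPU time actually consumed by that process."""
    n = len(used)
    entitled = elapsed / n                          # fair share per process
    ratios = {pid: t / entitled for pid, t in used.items()}
    # run the most under-served process (smallest actual/entitled ratio)
    return min(ratios, key=ratios.get), ratios

pid, ratios = guaranteed_pick(elapsed=30.0,
                              used={"A": 5.0, "B": 8.0, "C": 12.0})
print(pid)      # A: ratio 0.5 against a fair share of 10 time units
print(ratios)   # {'A': 0.5, 'B': 0.8, 'C': 1.2}
```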

        2 - Fair Share Scheduling Algorithm

        Allocating the same processor time to every process clearly reflects a certain fairness toward processes, but when users own different numbers of processes it becomes unfair to users. Suppose the system has only two users: user 1 starts 4 processes and user 2 starts only 1. Round-robin scheduling that gives each process one time slice is fair to the processes, yet user 1 receives 80% of the processor time while user 2 receives only 20%, which is clearly unfair to user 2. In the fair share scheduling algorithm, fairness is therefore defined with respect to users, so that every user receives the same processor time, or a required proportion of it. Scheduling, however, still takes the process as its basic unit, so the number of processes owned by each user must be taken into account. // Two different views of fairness: per process versus per user
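The contrast in the example above can be made concrete. The sketch below compares the two notions of fairness for the 4-process / 1-process case, assuming equal-length time slices; the function names are illustrative.

```python
def per_process_share(user_procs):
    """Round-robin over processes: each process gets an equal slice."""
    total = sum(user_procs.values())
    return {u: n / total for u, n in user_procs.items()}

def fair_share(user_procs):
    """Fair share: each user gets an equal slice, split among their processes."""
    per_user = 1 / len(user_procs)
    return {u: per_user for u in user_procs}

procs = {"user1": 4, "user2": 1}
print(per_process_share(procs))   # {'user1': 0.8, 'user2': 0.2}
print(fair_share(procs))          # {'user1': 0.5, 'user2': 0.5}
```

The first result is the 80%/20% split the text calls unfair to user 2; the second is what a user-oriented fair share scheduler aims for.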

Origin blog.csdn.net/swadian2008/article/details/131303092