Process scheduling algorithm summary

FCFS (First-Come, First-Served):

A simple queuing algorithm: maintain a FIFO queue, and new processes join at the tail and wait their turn.

Non-preemptive.

Disadvantage: it is unfriendly to short, CPU-bound processes. For example, a CPU-bound process needs only 1 ms of running time, but an I/O-heavy process that will read for 5 s arrived just before it. Even though the short process needs only 1 ms, it must wait the full 5 s before it can run (the convoy effect)!
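A minimal sketch of FCFS waiting times (the burst values below are made-up illustrations, not from any real workload):

```python
def fcfs_waiting_times(bursts):
    """Return the waiting time of each process, in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for everything queued before it
        elapsed += burst
    return waits

# Convoy effect: a 1 ms job stuck behind a 5000 ms I/O-heavy job.
print(fcfs_waiting_times([5000, 1, 1]))  # [0, 5000, 5001]
```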

 

 

SJF (Shortest Job First):

This algorithm assumes we can predict the running time each process needs. For example, suppose there are five processes with running times 1, 3, 2, 5, 1.

As the name suggests, the process with the shortest running time runs first, so the execution order is 1, 1, 2, 3, 5. This is essentially a greedy algorithm.

SJF gives the optimal (minimum average waiting time) schedule, and the greedy choice is provably correct. The problem is: how do we predict running times? In practice this algorithm serves mainly as a benchmark for comparison and cannot really be implemented.

Non-preemptive.
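A sketch of the greedy choice, using the five example running times from above:

```python
def sjf_order(bursts):
    """Greedy: run the shortest job first (non-preemptive, all jobs ready)."""
    return sorted(bursts)

def average_wait(bursts):
    """Average waiting time when jobs run in the given order."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

order = sjf_order([1, 3, 2, 5, 1])
print(order)                 # [1, 1, 2, 3, 5]
print(average_wait(order))   # 2.8 -- no other ordering does better
```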

 

SJF also has a variant: Shortest Remaining Time First (SRTF). It always selects the process with the least remaining running time, so this variant is preemptive.
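A minimal tick-by-tick SRTF sketch (the process names, arrival times, and bursts are hypothetical):

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns completion order.
    Each tick, run the arrived process with the least remaining time."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, done = 0, []
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        n = min(ready, key=lambda p: remaining[p])  # preempt: shortest remaining wins
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            done.append(n)
    return done

# "B" (burst 2) arrives at t=1 and preempts the long-running "A" (burst 7).
print(srtf([("A", 0, 7), ("B", 1, 2)]))  # ['B', 'A']
```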

 

RR (Round Robin) scheduling:

Each process is assigned a time slice, typically 20–50 ms, and may run for at most one time slice at a time. If a process finishes before its time slice is used up, the scheduler moves directly to the next process; if the time slice runs out before the process finishes, the system puts the process back at the tail of the queue.

Choosing the time slice length requires care, because switching between processes incurs a context switch. If the context-switch time is not small relative to the time slice, round-robin efficiency will be very poor, because too much time is spent on switching.

Since the system automatically switches to the next process when a time slice expires, RR scheduling is preemptive.
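A sketch of the RR queue discipline (process names and bursts are made up; bursts are in units of the quantum for simplicity, and context-switch cost is ignored):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst}. Returns the order in which processes finish."""
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, left = queue.popleft()
        if left <= quantum:          # finishes within its slice
            finished.append(name)
        else:                        # slice expires: back to the tail
            queue.append((name, left - quantum))
    return finished

print(round_robin({"A": 5, "B": 2, "C": 9}, quantum=3))  # ['B', 'A', 'C']
```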

 

Priority Scheduling:

The RR scheduling described above treats all processes as "equal." In practice that is not the case: just as people have ranks, processes can be divided into important and unimportant ones. An interactive foreground process is very important and needs to return results quickly, while a background data-maintenance process can safely wait.

So we give each process a priority (usually an integer), schedule high-priority processes first, and schedule low-priority processes only afterward. An I/O-bound process, for instance, should obviously get a higher priority: it needs only a little CPU time before handing the work off to the I/O hardware. If we schedule it first, then after a short burst of running time it no longer needs the CPU at all. Therefore CPU-bound processes should have lower priority than I/O-bound processes.

But this raises a question: if we keep adding high-priority processes (say, priority 2), a process with priority 1 may never run because it is always preempted by the high-priority ones; this is starvation, which is clearly unacceptable. Therefore priority should not be fixed. For example, we can make waiting time a deciding factor: as a process waits longer, we gradually raise its priority, so even a long-delayed process will eventually be executed.

A priority scheduling algorithm that makes waiting time a scheduling factor is Highest Response Ratio Next (HRRN): response ratio = (process waiting time + process execution time) / process execution time, and the process with the highest response ratio is scheduled next. Among jobs with the same waiting time, the shorter job has the higher response ratio and runs first, favoring short tasks like SJF; and as waiting time grows, the response ratio (and thus the priority) rises, which prevents starvation. The advantage is that it balances long and short jobs; the disadvantage is that computing the response ratio for every process adds noticeable overhead, so it is mainly suited to batch systems.

Preemptive.
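A sketch of the HRRN selection rule from the formula above (job names and times are hypothetical):

```python
def response_ratio(wait, burst):
    """HRRN: (waiting time + execution time) / execution time."""
    return (wait + burst) / burst

def hrrn_pick(jobs):
    """jobs: list of (name, wait, burst). Pick the highest response ratio."""
    return max(jobs, key=lambda j: response_ratio(j[1], j[2]))[0]

# Equal waits: the shorter job wins (favours short jobs, like SJF)...
print(hrrn_pick([("long", 10, 20), ("short", 10, 2)]))     # short
# ...but a long wait eventually dominates, preventing starvation.
print(hrrn_pick([("starved", 100, 20), ("fresh", 0, 2)]))  # starved
```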

 

 

Multilevel feedback queue scheduling:

This algorithm maintains multiple queues internally: each process in queue 1 may run for 1 time slice, each process in queue 2 may run for 2 time slices, queue 3 allows 4 time slices, queue 4 allows 8, and so on, doubling at each level.

Suppose there are n queues: the first n−1 queues are scheduled FCFS, and the last queue is scheduled RR.

Consider a process that needs 100 time slices. It joins queue 1 and runs for 1 time slice, then moves to queue 2 and runs for 2 more, then moves to queue 3, and so on. The process gets 1 + 2 + 4 + 8 + 16 + 32 + 64 slices across seven rounds, so only 7 context switches are needed. With the plain RR algorithm above, it would take a full 100!

Across queues, if a higher-priority queue has any process, that queue is emptied first; only then does the scheduler move on to the next-highest-priority queue, and so on.

This scheduling algorithm strikes a good balance between long jobs and short jobs.

Steps:

1. When a process arrives and waits to be scheduled, it first enters the highest-priority queue Q1.
2. Processes in higher-priority queues are scheduled first. Only when a higher-priority queue has no process to schedule does the scheduler move to the next queue. For example, with three queues Q1, Q2, Q3, a process waiting in Q2 is dispatched if and only if Q1 is empty; likewise, Q3 is scheduled only when both Q1 and Q2 are empty.
3. Within a queue, processes are scheduled FCFS and each is given that queue's time slice. If queue Q1's time slice is N and a job has not finished after running N, the process moves down to queue Q2; when it exhausts Q2's time slice without finishing, it proceeds to the next queue down, and so on until it completes.
4. Processes in the last queue QN are allocated time slices by round-robin scheduling.
5. If a new job arrives in a higher-priority queue while a process from a lower-priority queue is running, the running process is immediately put back at the tail of its current queue and the processor is given to the higher-priority process. In other words, at any moment a process in queue i runs only when queues 1 through i−1 are all empty (preemptive). Note that when the preempted process runs again, it only finishes the unused remainder of its last allocated time slice; it is not given a fresh full slice for that queue.
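A simplified sketch of the queue-demotion mechanism above (it models only steps 1, 3, and 4: all jobs arrive together, so the preemption of step 5 never triggers; job names and slice counts are made up):

```python
from collections import deque

def mlfq(bursts, levels):
    """bursts: {name: total slices needed}. Queue i grants 2**i slices;
    the last queue is round-robin. Returns (finish order, context switches)."""
    queues = [deque() for _ in range(levels)]
    for name, need in bursts.items():
        queues[0].append((name, need))
    finished, switches = [], 0
    while any(queues):
        i = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, left = queues[i].popleft()
        quantum = 2 ** i
        switches += 1
        if left <= quantum:
            finished.append(name)
        else:
            nxt = min(i + 1, levels - 1)   # bottom queue: re-queue in place (RR)
            queues[nxt].append((name, left - quantum))
    return finished, switches

# One job needing 100 slices: 1 + 2 + 4 + ... + 64 covers it in 7 switches.
print(mlfq({"big": 100}, levels=7))  # (['big'], 7)
```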

Preemptive.


Origin www.cnblogs.com/FdWzy/p/12556498.html