Operating System Principles, Chapter 5: CPU Scheduling

Study notes for an undergraduate course on Operating System Principles.

5.1 CPU Scheduling Concept

5.1.1 Long-range scheduling

Long-range scheduling:

  • Also known as job scheduling or high-level scheduling
  • Moves a process from the "new" state to the "ready" state
  • Jobs are selected by the job scheduler
  • Controls the degree of multiprogramming

5.1.2 Mid-Range Scheduling

Mid-range scheduling:

  • Also known as swapping
  • Swaps processes between main memory and external storage
  • Purpose: to save memory space

5.1.3 Short-range scheduling

Every operating system has short-range scheduling

Short-range scheduling: the usual process scheduling

  • Also known as CPU scheduling, low-level scheduling
  • The scheduler chooses the next process to execute

Terminology: the scheduler performs scheduling (selecting a process); the dispatcher performs dispatching (actually switching to it).

5.1.4 Process scheduling queue

Ready Queue: The collection of all processes that are ready in main memory and waiting to execute

Device queue: queue of processes waiting for an IO device

The execution process of the process is actually the migration of the process between various queues

For a single processor, there is at most one running process, because there is only one CPU

5.1.5 CPU Scheduling Process

Which process runs next is determined by the scheduler; actually starting the selected process is the job of the dispatcher

scheduler

  • Select a ready process according to some strategy
  • A CPU can only run one process at a time

Dispatcher

  • Responsible for transferring control of the CPU to the process selected by the scheduler
  • Switches context
  • Switches to user mode
  • Jumps to the proper location in the user program to resume it

Dispatch latency: the time the dispatcher takes to stop one process and start another

5.1.6 CPU Scheduling Mode

non-preemptive scheduling

Once the CPU is allocated to a process, the system cannot forcibly take it back and give it to another process

Only when a process voluntarily releases the CPU can the CPU be allocated to another process

Advantages: easy to implement, low scheduling overhead, suitable for batch processing systems

Disadvantages: long response time, not suitable for interactive systems (for example, a more urgent task cannot interrupt the currently running program)

preemptive scheduling

The scheduler can forcibly take control of the CPU away from the running process and give it to another process

Advantages: It can prevent a single process from monopolizing the CPU for a long time

Disadvantages: high system overhead

5.1.7 CPU Scheduling Timing

An analogy from queuing to buy tickets: going from running to ready is like stepping aside when someone with higher priority (the "leader") cuts in. You wait first (ready), and only after the leader finishes buying can you buy your ticket.

CPU scheduling can occur when a process:

  • Running to waiting (non-preemptive): the process gives up the CPU voluntarily
  • Running to ready, e.g. on an interrupt (preemptive)
  • Waiting to ready (preemptive): scheduling may or may not occur
  • Termination (non-preemptive): the process relinquishes the CPU voluntarily

From waiting to ready (preemptive), two cases:

  • The ready queue is empty: the process that just became ready is scheduled immediately
  • The ready queue is non-empty and the CPU is busy: the newly ready process preempts only if its priority is high enough

5.1.8 CPU Scheduling Guidelines

basic indicators

CPU utilization: the proportion of CPU running time in a fixed period of time

Throughput: the number of processes running per unit of time

Turnaround time: the total time from submission to completion of the process

Waiting time: the total time a process spends in the ready queue (not running) waiting to be scheduled

Response time: the time from process submission until the first run (not until the first output), i.e. the first stretch of waiting

Turnaround time = wait time + run time

response time <= wait time
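The relations above (turnaround = waiting + run time, response <= waiting) can be sketched in a few lines of Python; the process data here is made up purely for illustration.

```python
# Sketch: computing the basic scheduling metrics for one finished process.
# Inputs: arrival time, total burst (run) time, time of first run, completion time.

def metrics(arrival, burst, first_run, completion):
    turnaround = completion - arrival   # total time from submission to completion
    waiting = turnaround - burst        # time spent sitting in the ready queue
    response = first_run - arrival      # time until the first run, not the first output
    return turnaround, waiting, response

# Example: a process arrives at t=0, first runs at t=3, runs 4 ms total, finishes at t=9.
t, w, r = metrics(arrival=0, burst=4, first_run=3, completion=9)
assert t == w + 4   # turnaround = waiting time + run time
assert r <= w       # response time <= waiting time
print(t, w, r)      # 9 5 3
```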

Optimization goals:

  • Maximize CPU utilization
  • Maximize throughput
  • Minimize turnaround time
  • Minimize waiting time
  • Minimize response time

Solution

Scheduling algorithm: Which process in the ready queue is selected to run

It is a balancing act, a golden mean; there is no perfect algorithm

5.2 Scheduling algorithm

5.2.1 First come first serve algorithm FCFS

Processes are executed in the order they enter the ready queue

Waiting time definition: the sum of all time a process spends in the ready queue, which is the turnaround time minus the running time

Response time: time period from process submission to first run

Algorithm features:

  • Simple implementation, can be implemented using FIFO queue
  • non-preemptive
  • fair
  • Suitable for long-range scheduling, and for short-range scheduling in background batch processing systems

shortcoming:

When several short processes queue up behind a long process, letting the long process run first makes the short ones wait a long time (the convoy effect), reducing CPU and device utilization.
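A minimal FCFS sketch in Python makes the convoy effect easy to see; the process names and times below are made up for illustration.

```python
from collections import deque

# Minimal FCFS sketch: processes run to completion in arrival order.
# Each process is (name, arrival_time, burst_time).

def fcfs(processes):
    queue = deque(sorted(processes, key=lambda p: p[1]))  # FIFO queue, ordered by arrival
    time, schedule = 0, []
    while queue:
        name, arrival, burst = queue.popleft()
        time = max(time, arrival)                    # CPU may sit idle until arrival
        schedule.append((name, time, time + burst))  # (name, start, end)
        time += burst
    return schedule

# Convoy effect: a long process ahead of short ones inflates their waits.
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# P1 runs 0-24, P2 24-27, P3 27-30
```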

5.2.2 Short Job First Algorithm SJF

Scheduling strategy:

Each process is associated with the length of its next CPU burst; the scheduler picks the process whose next burst is shortest

Often used for job scheduling

has the shortest average wait time

has a starvation problem

non-preemptive scheduling

That is, once a process has the CPU, it gives up control only when its current CPU burst ends

The figures below come from a worked example whose process table is not reproduced in these notes; they are consistent with arrivals 0, 2, 4, 5 and bursts 7, 4, 1, 4, giving the schedule P1 (0-7), P3 (7-8), P2 (8-12), P4 (12-16):

Turnaround time: P1: 7, P2: 10, P3: 4, P4: 11

Waiting time: P1: 0, P2: 6, P3: 3, P4: 7

Response time: equal to the waiting time here, since the scheduling is non-preemptive

preemptive scheduling

When a process arrives whose burst is shorter than the remaining time of the current process, the new process preempts the CPU. This variant is called shortest-remaining-time-first scheduling, abbreviated SRTF
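The preemptive variant can be sketched as a tick-by-tick simulation: at every time unit, run the arrived process with the least remaining time. The data below is illustrative.

```python
# Sketch of SRTF (preemptive SJF). processes: list of (name, arrival, burst).
# Returns a dict mapping each process name to its completion time.

def srtf(processes):
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    time, finished, completion = 0, 0, {}
    while finished < len(processes):
        ready = [n for n in remaining if arrival[n] <= time and remaining[n] > 0]
        if not ready:
            time += 1          # CPU idle: no arrived, unfinished process
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining time wins
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            finished += 1
    return completion

# P3 (burst 1) preempts P2 the moment it arrives at t=4,
# because its remaining time is the shortest.
print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```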

5.2.3 Priority algorithm (PR)

Based on the urgency of the process, each process is assigned a corresponding priority externally, and the CPU is allocated to the highest priority process

Each process has a priority number, which is an integer

Default: a smaller priority number means a higher priority

The current mainstream operating system scheduling algorithm

Scheduling mode:

Preemptive

non-preemptive

advantage

  • Simple implementation, taking into account the urgency of the process
  • Flexible to simulate other algorithms

There is a problem

  • starvation - low priority processes may never get run

Solution

  • Aging – the longer a process waits, the lower its priority number becomes (i.e. its priority is raised)
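A tiny sketch of selection plus aging, under the convention above that a smaller priority number means higher priority. The process names and numbers are made up for illustration.

```python
# Sketch: priority selection with aging. A smaller priority number wins.

def pick_next(ready):
    # ready: list of [name, priority_number] pairs
    return min(ready, key=lambda p: p[1])[0]

def age(ready, floor=0):
    # Aging: every waiting process's priority number drops by 1 per tick,
    # so a low-priority process cannot starve forever.
    for p in ready:
        p[1] = max(floor, p[1] - 1)

ready = [["batch_job", 9], ["interactive_job", 1]]
assert pick_next(ready) == "interactive_job"   # 1 beats 9
for _ in range(5):                             # 5 ticks of waiting
    age(ready)
assert ready == [["batch_job", 4], ["interactive_job", 0]]
```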

5.2.4 Time slice rotation (RR)

Designed for time-sharing systems; similar to FCFS but with preemption added

  • Time slice: small unit of CPU time, typically 10-100 milliseconds

  • Each process is allocated at most one time slice of CPU at a time. When the slice is used up, the process is preempted and inserted at the tail of the ready queue, and execution cycles through the queue

  • Assuming that there are n processes in the ready queue and the time slice is q, the waiting time of any process will not exceed (n-1) * q
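The cycling behavior described above can be sketched with a FIFO queue; the processes here (all arriving at t=0) are made up for illustration.

```python
from collections import deque

# Round-robin sketch: each process runs for at most one time slice q,
# then goes to the tail of the ready queue. bursts: {name: burst_time}.

def round_robin(bursts, q):
    queue = deque(bursts.items())      # (name, remaining) pairs, FIFO
    time, slices = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)
        slices.append((name, time, time + run))      # (name, start, end) of this slice
        time += run
        if remaining > run:
            queue.append((name, remaining - run))    # preempted: back of the queue
    return slices

# With n processes and slice q, no process waits more than (n-1)*q for its next turn.
print(round_robin({"P1": 5, "P2": 3, "P3": 1}, q=2))
```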

5.2.5 Multi-Level Queue Scheduling (MLQ)

Different types require different scheduling methods, resulting in multi-level queue scheduling

It is divided into many queues, and the scheduling method of each queue is different

element:

  • number of queues
  • Scheduling algorithm for each queue
  • Method to decide which queue a new process will enter

Foreground (interactive) – RR

Background – FCFS

5.2.6 Multi-Level Feedback Queue Scheduling (MLFQ)

Example:

Three queues:

  • Q0 – time slice is 8 milliseconds
  • Q1 – time slice is 16 milliseconds
  • Q2 – FCFS

Scheduling strategy

  • A new process enters queue Q0 and receives an 8-millisecond CPU time slice; if it cannot finish within that slice, it is moved to queue Q1
  • Within Q0, processes are served in FCFS order
  • In queue Q1 the process receives a 16-millisecond CPU time slice; if it still cannot finish, it is moved to queue Q2
  • Within Q1, processes are served in FCFS order
  • Processes reaching Q2 run to completion under FCFS
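The three-queue scheme above can be sketched as a small simulation. This is a simplified model: all processes are assumed to arrive at time 0, demotion happens only when a full slice is used up, and there is no preemption of lower queues by later arrivals. Process data is illustrative.

```python
from collections import deque

# MLFQ sketch: Q0 (slice 8 ms), Q1 (slice 16 ms), Q2 (FCFS, run to completion).
# bursts: {name: burst_time}. Returns a log of (name, queue_level, start, end).

def mlfq(bursts, slices=(8, 16, None)):          # None = run to completion
    queues = [deque(), deque(), deque()]
    for name, burst in bursts.items():
        queues[0].append((name, burst))          # new processes enter Q0
    time, log = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, remaining = queues[level].popleft()
        q = slices[level] if slices[level] is not None else remaining
        run = min(q, remaining)
        log.append((name, level, time, time + run))
        time += run
        if remaining > run:                      # used up the whole slice: demote
            queues[level + 1].append((name, remaining - run))
    return log

# A 30 ms job passes through all three queues; a 5 ms job finishes inside Q0.
print(mlfq({"A": 30, "B": 5}))
```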

5.2.7 Multiprocessor Scheduling

MOOC Unit 5 Homework

There is an operating system that uses multi-level feedback queue scheduling, as shown in the following figure. Among them, the first level adopts the time slice rotation algorithm, and the time slice size is 8ms; the second level also adopts the time slice rotation algorithm, and the time slice size is 16ms; the third level adopts the first come, first serve algorithm.

[Figure: multi-level feedback queue diagram (Unit 5 Homework_01.png); image unavailable]

Answer the following questions according to the arrival time and execution time of the five processes given in the table below. (time in milliseconds)

| Process | Execution time | Arrival time |
| ------- | -------------- | ------------ |
| P1      | 50             | 0            |
| P2      | 10             | 1            |
| P3      | 5              | 2            |
| P4      | 30             | 3            |
| P5      | 23             | 4            |

(1) Please draw a Gantt chart of 5 process executions.

(2) According to the above scheduling algorithm, calculate the turnaround time and response time of each process respectively.

[Figure: answer Gantt chart (MOOC Unit 5 Homework_01.png); image unavailable]

2. What is preemptive scheduling? What is non-preemptive scheduling? What occasions are they suitable for?

Preemptive scheduling: the scheduler can suspend an executing process according to some policy and reallocate its CPU to another process.

Non-Preemptive Scheduling: When a resource (CPU) is allocated to a process, the process will hold the CPU until it terminates or reaches a wait state.

Applications:

Preemptive scheduling suits settings where interrupt requests must be handled promptly, such as interactive and real-time systems

Non-preemptive scheduling suits settings such as FCFS scheduling, I/O requests, and batch processing

3. Consider the following priority-based scheduling algorithm (a smaller priority number means a higher priority). The algorithm dynamically ages priority numbers based on waiting time and running time, as follows:

a) For each process in the waiting queue, the priority number p decreases with waiting time t (recomputed every millisecond): p = p - 1;

b) For the running process, the priority number p increases with running time t (recomputed every millisecond): p = p + 1;

c) Priority numbers are recalculated every 1 millisecond;

d) The scheduling strategy is preemptive.

d) Adopt preemptive scheduling strategy.

Answer the following questions according to the arrival time and execution time of the five processes given in the table below. (The time is in milliseconds. When the priority is the same, the process that enters the ready queue first takes priority)

| Process | Execution time | Arrival time | Priority p |
| ------- | -------------- | ------------ | ---------- |
| P1      | 5              | 0            | 8          |
| P2      | 6              | 1            | 4          |
| P3      | 3              | 2            | 6          |
| P4      | 4              | 3            | 2          |
| P5      | 2              | 4            | 10         |

(1) Please draw a Gantt chart of 5 process executions.

(2) According to the above scheduling algorithm, calculate the turnaround time and response time of each process respectively.

[Figure: answer Gantt chart (MOOC Unit 5 Homework_03.png); image unavailable]

4. Compare the differences between job scheduling and process scheduling

(1) Job scheduling is macro-level scheduling: it determines which jobs may enter main memory. Process scheduling is micro-level scheduling: it determines which process within the admitted jobs occupies the CPU.

(2) Job scheduling selects qualified jobs from the held (backlog) state and loads them into memory. Process scheduling selects one process from the ready state to occupy the processor.

5. Consider the following preemptive scheduling algorithm based on dynamic priority, where a larger priority number means a higher priority. While a process waits for the CPU (in the ready queue, not executing), its priority number changes at rate α; while it runs, its priority number changes at rate β. All processes are given a priority number of 0 when they enter the ready queue. Questions:

1) What is the algorithm when β>α>0? Why?

‎FCFS algorithm, first come, first served.

Because the running process's priority grows at rate β, which is faster than the rate α at which priorities grow in the ready queue, and everyone starts at 0, the process that begins running first always stays ahead and cannot be preempted. The ready processes can only wait for the running process to finish before the next one runs, in arrival order.

After n milliseconds, the ready-queue priorities are 0, α, 2α, 3α, …, (n-1)α, while the running process's priority is nβ, larger than all of them.

2) What is the algorithm when α<β<0? Why?

LIFO algorithm, last-in-first-out algorithm.

Because the newest arrival enters the ready queue with priority 0, and every other process's priority is negative and still decreasing (whether running or waiting), the newest arrival always has the highest priority and runs first. Even if no new process arrives, the process at the tail of the queue (the one that has waited least) keeps the higher priority.

Ready-queue priorities: α, 2α, 3α, …, (n-1)α + β (the previously running, now preempted process); the running (newest) process's priority: 0.


Origin blog.csdn.net/weixin_45788387/article/details/122322756