[Notes] Operating System (5)-CPU Scheduling

Preface

This is a very important chapter in operating systems, and its exam questions are classics.

Basic concepts of CPU scheduling

CPU-I/O interval cycle

Process execution consists of a cycle of CPU execution and I/O wait, and a process alternates between these two states. Execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, another I/O burst, and so on, until the final CPU burst ends with a system request to terminate execution.

CPU scheduler

The CPU scheduler is the short-term scheduler mentioned earlier: it selects one of the processes in memory that is ready to execute and allocates the CPU to it.

This part is essentially review.

The ready queue is not necessarily a first-in, first-out queue. As you will see when studying the various scheduling algorithms, a ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. All processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queue are usually process control blocks (PCBs).

Preemptive scheduling

The scheduling scheme can be non-preemptive or preemptive.

The CPU scheduling decision can occur in the following 4 environments:

  1. When a process switches from the running state to the waiting state (e.g., an I/O request, or a call to wait() for the termination of a child process).
  2. When a process switches from the running state to the ready state (e.g., when an interrupt occurs).
  3. When a process switches from the waiting state to the ready state (e.g., at I/O completion).
  4. When a process terminates.

When scheduling takes place only under circumstances 1 and 4, it is called non-preemptive; when scheduling can also occur under circumstances 2 and 3, it is called preemptive.

Dispatch procedure

Dispatcher: the dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.

Its work involves:

  1. Switch context
  2. Switch to user mode
  3. Jump to a suitable location in the user program to restart the program.

CPU scheduling guidelines

  • CPU utilization: the CPU should be kept as busy as possible.
  • Throughput: the number of processes completed per unit of time.
  • Turnaround time: the interval from submission of a process to its completion. Turnaround time is the sum of all the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time = waiting time + execution time = completion time − arrival time.
  • Waiting time: the sum of the periods spent waiting in the ready queue.
  • Response time: for interactive systems, turnaround time is not the best criterion. The time from the submission of a request until the first response is produced is called the response time.
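The turnaround/waiting relationship above can be sketched in a few lines of Python (the function name and argument layout here are illustrative, not from any standard API):

```python
# Sketch: computing the scheduling criteria for one completed process.
def metrics(arrival, burst, completion):
    turnaround = completion - arrival   # completion time - arrival time
    waiting = turnaround - burst        # turnaround = waiting + execution
    return turnaround, waiting

# Example: a process arrives at t=0, needs 3 time units, finishes at t=10.
print(metrics(0, 3, 10))  # (10, 7): turnaround 10, waiting 7
```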

CPU scheduling algorithm

The focus of this chapter.

1. First-come, first-served, FCFS

For the ready queue, first-come, first-served is easily implemented with a FIFO queue.
Disadvantages of FCFS: the average waiting time under FCFS is often quite long. While one long process holds the CPU, all the other processes must wait for it to release the CPU; this is called the convoy effect, and it can leave CPU and device utilization very low.
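The convoy effect is easy to see numerically. A minimal FCFS sketch in Python, assuming all processes arrive at time 0 (the function name is made up for illustration):

```python
# Sketch of FCFS: each process waits for all earlier arrivals to finish.
def fcfs_waiting_times(bursts):
    """bursts: CPU burst lengths in submission order; returns waiting times."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waits until every earlier process is done
        clock += burst
    return waits

# Classic textbook example: bursts 24, 3, 3.
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))  # [0, 24, 27] -> average 17.0
```

Submitting the long process first gives an average wait of 17; reversing the order (`[3, 3, 24]`) drops it to 3, which is exactly the convoy effect.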

2. Shortest-job-first scheduling (shortest-job-first, SJF)

SJF associates with each process the length of its next CPU burst. When the CPU becomes idle, it is assigned to the process with the shortest next CPU burst. If the next bursts of two processes are the same length, FCFS scheduling is used to break the tie.

SJF features:

  • The SJF scheduling algorithm gives the minimum average waiting time, and it also increases system throughput.
  • It favors short processes at the expense of long ones; long processes may suffer starvation (also called indefinite blocking: a process that is ready to run never gets the CPU).
  • The real difficulty with SJF is knowing the length of the next CPU burst, so it is hard to implement exactly.
  • SJF is often used for long-term (job) scheduling, where the burst length can be taken from a user-specified time limit; for short-term scheduling it must be approximated.

SJF approximation: the next CPU burst is usually predicted as an exponential average of the measured lengths of previous CPU bursts: τₙ₊₁ = α·tₙ + (1 − α)·τₙ, where tₙ is the measured length of the n-th burst, τₙ is the previous prediction, and 0 ≤ α ≤ 1.
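The exponential average is a one-liner; a sketch with α = 0.5 and an initial guess of 10 (both values are arbitrary choices for illustration):

```python
# Exponential-average prediction: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n
def predict_next(tau, t, alpha=0.5):
    """tau: previous prediction; t: measured length of the latest burst."""
    return alpha * t + (1 - alpha) * tau

tau = 10                 # initial guess for the first burst
for t in [6, 4, 6]:      # measured CPU bursts
    tau = predict_next(tau, t)
print(tau)  # 6.0 -- predictions: 10 -> 8.0 -> 6.0 -> 6.0
```

With α = 0.5 the recent history and past history are weighted equally; α = 1 would trust only the most recent burst, α = 0 would ignore measurements entirely.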

Non-preemptive SJF

(Figure: Gantt-chart example of non-preemptive SJF.)
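A non-preemptive SJF run can be sketched directly: whenever the CPU goes idle, pick the arrived process with the shortest burst. The process set below matches a common textbook example; the function shape is illustrative only.

```python
# Sketch of non-preemptive SJF with arrival times; ties fall back to
# arrival order (FCFS), as described above.
def sjf(processes):
    """processes: list of (name, arrival, burst). Returns finish times."""
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    clock, finish = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                 # CPU idle: jump to the next arrival
            clock = pending[0][1]
            continue
        # shortest burst wins; equal bursts break ties by arrival time
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        clock += burst                # runs to completion (no preemption)
        finish[name] = clock
        pending.remove((name, arrival, burst))
    return finish

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# P1 finishes at 7, then P3 (shortest) at 8, P2 at 12, P4 at 16
```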

Preemptive SJF

(Figure: Gantt-chart example of preemptive SJF.)
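The preemptive variant (shortest-remaining-time-first) can be sketched by advancing one time unit at a time and re-picking the process with the least remaining work; the unit-step loop is a simplification for clarity, not how a real kernel does it.

```python
# Sketch of preemptive SJF (shortest-remaining-time-first).
def srtf(processes):
    """processes: dict name -> (arrival, burst). Returns finish times."""
    remaining = {n: b for n, (a, b) in processes.items()}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= clock]
        if not ready:                 # nothing has arrived yet
            clock += 1
            continue
        n = min(ready, key=lambda n: remaining[n])  # least remaining time
        remaining[n] -= 1             # run it for one time unit
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = clock
    return finish

# Textbook-style example: arrivals 0,1,2,3 with bursts 8,4,9,5.
print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
# P2 preempts P1 at t=1 and finishes at 5; then P4 at 10, P1 at 17, P3 at 26
```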

3. Priority scheduling algorithm

Priority scheduling: the SJF algorithm is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority; processes with equal priority are scheduled in FCFS order. SJF is simply a priority algorithm in which the priority is the inverse of the (predicted) next CPU burst: the longer the burst, the lower the priority. (In the textbook's convention, small numbers indicate high priority.)

Disadvantages and solutions of priority scheduling:

Disadvantage: like the SJF algorithm, priority scheduling can also cause starvation; a low-priority process may wait indefinitely for the CPU.

Solution: one solution to the problem of indefinite waiting by low-priority processes is aging: as waiting time grows, the priority of a process that has been waiting a long time in the system is gradually increased.
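Aging can be sketched as a periodic priority boost for waiting processes. The step size and tick loop below are illustrative choices, following the small-number-means-high-priority convention:

```python
# Sketch of aging: on each tick, move every waiting process's priority
# toward the highest value (0 here, since smaller = higher priority).
def age(priorities, waiting_names, step=1, highest=0):
    for name in waiting_names:
        priorities[name] = max(highest, priorities[name] - step)
    return priorities

prios = {"A": 5, "B": 127}
for _ in range(10):          # ten aging ticks while both processes wait
    age(prios, ["A", "B"])
print(prios)  # {'A': 0, 'B': 117} -- even B will eventually reach 0
```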

4. Round-robin scheduling (round-robin, RR)

Round-robin is similar to FCFS, but preemption is added to switch between processes. A small unit of time, called the time slice (or quantum), is defined; when the slice allocated to a process expires, the process is preempted and switched out. If a process finishes before its time slice expires, it gives up the remainder of the slice and the scheduler switches to the next process.
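A minimal RR sketch, assuming all processes arrive at time 0 (the function name and return shape are made up for illustration):

```python
from collections import deque

# Sketch of round-robin with a fixed time slice (quantum).
def rr_finish_times(bursts, quantum):
    """bursts: dict name -> burst length. Returns finish time per process."""
    queue = deque(bursts.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # unused slice is simply given up
        clock += run
        if remaining > run:             # slice expired: back of the queue
            queue.append((name, remaining - run))
        else:
            finish[name] = clock
    return finish

# Textbook example: bursts 24, 3, 3 with a quantum of 4.
print(rr_finish_times({"P1": 24, "P2": 3, "P3": 3}, 4))
# P2 finishes at 7, P3 at 10, P1 at 30
```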

Context switching problem

The performance of the RR algorithm depends largely on the size of the time slice. In the extreme, if the time slice is very large, RR degenerates into FCFS; if the time slice is very small, RR is called processor sharing (each of n processes appears to run on its own processor at 1/n speed). The overhead of context switching must also be weighed against the size of the time slice.
A rule of thumb: 80% of CPU bursts should be shorter than the time slice.

5. Multi-level queue scheduling

Consider, for example, foreground (interactive) and background (batch) processes. These two types of processes have different response-time requirements and therefore different scheduling needs.

Multi-level queue scheduling algorithm: the ready queue is partitioned into several separate queues. Based on attributes such as memory size, priority, or process type, each process is permanently assigned to one queue, and each queue has its own scheduling algorithm.

6. Multi-level feedback queue scheduling

Unlike the multi-level queue scheduling algorithm, a process in a multi-level feedback queue can move to another queue after its initial assignment. Moving between queues is also one way to implement aging.
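A common multi-level feedback queue design demotes a process to a lower queue whenever it uses up its whole time slice. The sketch below is a deliberate simplification (fixed quanta per level, last queue runs each process to completion, no promotion), just to show the demotion mechanism:

```python
from collections import deque

# Sketch of a multilevel feedback queue: a process that exhausts its
# slice at level i is demoted to level i+1; the lowest queue runs a
# process to completion. Quanta (1, 2, 4) are illustrative values.
def mlfq(bursts, quanta=(1, 2, 4)):
    """bursts: dict name -> burst length. Returns finish time per process."""
    queues = [deque() for _ in quanta]
    for item in bursts.items():
        queues[0].append(item)          # everyone starts in the top queue
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        last = level == len(quanta) - 1
        run = remaining if last else min(quanta[level], remaining)
        clock += run
        if remaining > run:             # used the full slice: demote
            queues[level + 1].append((name, remaining - run))
        else:
            finish[name] = clock
    return finish

print(mlfq({"A": 6, "B": 1}))
# short process B finishes at 2; long process A sinks to the bottom queue
```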

Multiprocessor scheduling

Compared with single-processor scheduling, multiprocessor scheduling is more complex. As with single-processor CPU scheduling, there is currently no single best solution.

Asymmetric multiprocessing: Let one processor handle all scheduling decisions, I/O processing and other activities, and the other processors only execute user code.

Symmetric multiprocessing: each processor schedules itself. All processes may be in a common ready queue, or each processor has its own private ready process queue.

Processor affinity: Try to make a process run on the same processor and avoid transferring processes between processors.

Load balancing : Try to make the workload evenly distributed to all processors in the SMP system.

Symmetric multithreading: on Intel processors this is also known as hyper-threading technology. The idea of SMT is to create multiple logical processors on the same physical processor, presenting a view of several logical processors to the operating system. Each logical processor has its own architectural state, including general-purpose and machine-state registers. Furthermore, each logical processor handles its own interrupts, meaning interrupts are delivered to a logical processor rather than to the physical processor.

Thread scheduling

One of the differences between user threads and kernel threads is how they are scheduled.

**Process-contention scope (PCS):** the thread library schedules user-level threads to run on an available LWP; contention for the CPU takes place among threads belonging to the same process.

**System-contention scope (SCS):** when the thread library schedules a user thread onto an available LWP, that does not mean the thread is actually running on a CPU; the operating system must still schedule the corresponding kernel thread onto a CPU. Deciding which kernel thread to run is done with system-contention scope: competition for the CPU takes place among all threads in the system.


Origin blog.csdn.net/qq_41882686/article/details/112578869