Operating System 5 - Input and Output System

This series of blog posts organizes the core content of the operating system course at Shenzhen University, following the textbook "Computer Operating System". (If you have any questions, please raise them in the comment section or contact me directly by private message.)


 

Synopsis

This post introduces the input/output system, covered in Chapter 6 of the operating system course.

Table of contents

1. I/O (input/output) system

1. Overview

2. I/O devices and device controllers

3. I/O channels

4. I/O control methods

2. Buffer management

3. Device allocation

4. I/O software at the user level - the SPOOLing system

5. Disk storage management

1. Overview

1.1 Structure and data

1.2 Disk access time

2. Disk scheduling algorithms

2.1 First come, first served (FCFS)

2.2 Shortest seek time first (SSTF)

2.3 Scan scheduling algorithm (SCAN)

2.4 Circular scan scheduling algorithm (CSCAN)

2.5 Disk scheduling algorithm example

3. Additional knowledge


1. I/O (input/output) system

The input/output (I/O) system mainly manages I/O devices and their device controllers. Its main functions are to complete users' I/O requests, increase the I/O rate, and improve device utilization.

1. Overview

I/O software spans a wide range of concerns: downward it is closely tied to the hardware, and upward it interacts directly with the file system, the virtual memory system, and users. Today's mainstream approach is a hierarchical I/O system, in which each layer uses the services of the layers below it to implement some subfunction of I/O, and in turn exports services to the layers above it.

A hierarchical view of the I/O system's modules is as follows:

2. I/O devices and device controllers

An I/O device generally consists of a mechanical part that performs the I/O operation and an electronic part that controls it; the former is the I/O device proper, and the latter is the device controller or adapter. In microcomputers and minicomputers, controllers are often built as printed circuit cards (often called control cards, interface cards, or network cards) that plug into the computer's expansion slots. Some large and medium-sized computer systems are additionally configured with I/O channels or I/O processors.

There are many classifications of I/O devices, such as:

  • Classified by usage characteristics: ① storage devices (external memory: large capacity, slow speed) ② I/O devices (input/output/interactive devices, e.g. keyboard, mouse, scanner / display)
  • Classified by transfer rate: ① low-speed devices ② medium-speed devices ③ high-speed devices

Typically, devices do not communicate directly with the CPU, but through a device controller.

The main function of the device controller is to control one or more I/O devices and to mediate data exchange between the I/O devices and the computer (CPU). A device controller consists of the following parts:

3. I/O channels

Although adding a device controller between the CPU and the I/O device greatly reduces the CPU's involvement in I/O, when the host is configured with many peripherals the burden on the CPU is still heavy. For this reason, an I/O channel (I/O Channel) is added between the CPU and the device controllers; its main purpose is to make I/O operations independent of the CPU.

The I/O channel is a special-purpose processor: it can execute I/O instructions and controls I/O operations by executing channel (I/O) programs. It differs from an ordinary processor in two ways:

1. Its instruction set is simple, limited mainly to instructions related to I/O operations

2. It has no memory of its own; the channel program is stored in host memory

As a result, the I/O channel can itself become a "bottleneck": channels are expensive, so systems have few of them, which limits I/O concurrency and reduces system throughput. As shown in the figure below, starting device 4 requires starting channel 1 and controller 2; if either is already occupied by another device, the start fails.

The main solution is a multi-channel I/O system, as follows:

This not only solves the "bottleneck" problem but also improves the reliability of the system.

4. I/O control method

Control methods for I/O devices have evolved from polling, to interrupts, to DMA controllers, to channels. The core aim of this evolution is to reduce the host's involvement in I/O control so that the CPU can devote more time to data processing.

  • Polling (programmed I/O) mode: during input/output the busy status bit is set to 1, and the CPU tests it in a continuous loop, wasting a great deal of CPU time.
  • Interrupt-driven programmed I/O mode: the CPU and the I/O device work in parallel; the CPU spends only a small amount of time handling an interrupt after each datum is transferred.
  • Direct memory access (DMA) mode: interrupt-driven I/O intervenes once per byte, which is extremely inefficient for block devices, so the DMA controller is introduced; its composition is as follows:

  • I/O channel control mode: reduces the read/write intervention from once per data block to once per group of data blocks.

The core of the I/O channel control method is that control of I/O devices is accomplished by the channel program. A channel program generally contains the following information:

(1) Operation: read or write
(2) P (channel-end bit): P=1 means this is the last command of the channel program
(3) R (record-end bit): R=1 means this is the last command that processes a given record
(4) Count: the number of bytes this command reads or writes
(5) Memory address: the starting address in memory for the transferred characters

The example above contains three records: commands 1-3 form one record, command 4 another, and commands 5-6 a third (determined by the R bit).
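To make the layout concrete, here is a small Python sketch that models a channel command and groups a channel program into records by the R bit. The field names, byte counts, and addresses are illustrative, not from the source:

```python
from dataclasses import dataclass

@dataclass
class ChannelCommand:
    op: str        # operation: "read" or "write"
    p: int         # channel-end bit: 1 = last command of the channel program
    r: int         # record-end bit: 1 = last command processing this record
    count: int     # number of bytes this command transfers
    addr: int      # starting memory address for the transferred data

def split_records(program):
    """Group a channel program into records using the R (record-end) bit."""
    records, current = [], []
    for cmd in program:
        current.append(cmd)
        if cmd.r == 1:          # this command closes the current record
            records.append(current)
            current = []
    return records

# A six-command program shaped like the text's example: R=1 on commands 3, 4 and 6.
program = [
    ChannelCommand("write", 0, 0, 80, 813),
    ChannelCommand("write", 0, 0, 140, 1034),
    ChannelCommand("write", 0, 1, 60, 5830),
    ChannelCommand("write", 0, 1, 300, 2000),
    ChannelCommand("write", 0, 0, 250, 1850),
    ChannelCommand("write", 1, 1, 250, 720),   # P=1: last command of the program
]
records = split_records(program)
print([len(rec) for rec in records])   # → [3, 1, 2]
```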

2. Buffer management

In modern operating systems, almost all I/O devices use a buffer when exchanging data with the CPU. A buffer is essentially a storage area, implemented either in hardware registers or (more commonly) in memory.

The main reasons for the introduction of buffering are as follows:

  • Alleviate the speed mismatch between the CPU and I/O devices: the producer can output data to the buffer without waiting for the consumer to be ready
  • Reduce the CPU's interrupt frequency and relax the constraint on CPU interrupt response time. In the example below, (a) interrupts and responds once every 100 µs, (b) reduces the interrupt frequency to 1/8 of that, and (c) reduces the required response time to 1/8

  • Increased parallelism between CPU and I/O devices

1. Single buffer: 

One buffer per I/O request

Cycle time: Max(C, T) + M

2. Double buffer: 

With a single buffer, if the consumer has not yet taken the data out, the producer cannot put in new data; double buffering removes this restriction.

Cycle time: Max(C + M, T)

C + M < T: the host is faster than the disk; the host waits while the disk inputs continuously

C + M > T: the disk is faster than the host; the disk waits while the host processes continuously
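The two cycle-time formulas can be checked with a trivial sketch (T = time to input one block, C = CPU processing time per block, M = time to move a block from buffer to user area; the numbers are made up):

```python
def single_buffer_cycle(T, C, M):
    """Single buffer: input (T) of the next block overlaps with processing (C)
    of the current one, but the block must first be moved (M) out of the buffer."""
    return max(C, T) + M

def double_buffer_cycle(T, C, M):
    """Double buffer: moving + processing one buffer overlaps with filling the
    other, so the cycle is governed by the slower of (C + M) and T."""
    return max(C + M, T)

T, C, M = 100, 50, 15                   # illustrative times in microseconds
print(single_buffer_cycle(T, C, M))     # → 115
print(double_buffer_cycle(T, C, M))     # → 100  (C + M < T: the host waits on the disk)
```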

3. Device allocation

To implement allocation of exclusive devices, the system must maintain a corresponding data structure: the Device Control Table (DCT).
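For an exclusive device, allocation amounts to checking and updating the DCT. A minimal Python sketch follows; the field set mirrors the usual textbook layout, but the names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceControlTable:
    """Sketch of a Device Control Table (DCT); field names are illustrative."""
    device_type: str          # e.g. "printer", "disk"
    device_id: int            # unique device identifier
    status: str = "idle"      # "idle" or "busy"
    coct_index: int = -1      # index of the controller control table (COCT) entry
    retry_limit: int = 3      # how many times to retry a failed operation
    wait_queue: list = field(default_factory=list)  # processes waiting for the device

def allocate(dct, pid):
    """Give the device to process pid, or enqueue pid if the device is busy."""
    if dct.status == "idle":
        dct.status = "busy"
        return True
    dct.wait_queue.append(pid)
    return False

dct = DeviceControlTable("printer", 1)
print(allocate(dct, 10), allocate(dct, 11), dct.wait_queue)  # → True False [11]
```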

The controller control table (COCT), channel control table (CHCT), and system device table (SDT) compare as follows:

The device allocation process for a single-channel system is as follows:

4. I/O software at the user level - the SPOOLing system

User-level I/O software includes the SPOOLing system, which runs entirely outside the kernel. Through SPOOLing technology, one physical I/O device can be virtualized into multiple logical I/O devices, allowing the device to be shared by multiple users.

The core of SPOOLing technology is a pair of system processes responsible for I/O that simulate the function of offline peripheral control machines, realizing (pseudo-) offline input/output.

Its system composition is as follows:

1. The input well and the output well:

Two large storage spaces opened up on disk:

  • The input well is a disk area that simulates offline input; it stores data input from I/O devices
  • The output well is a disk area that simulates offline output; it stores data output by user processes

2. Input buffer and output buffer: 

To alleviate the speed mismatch between the CPU and the disk, two buffers are opened in memory:

  • The input buffer is used to temporarily store the data sent by the input device before sending it to the input well
  • The output buffer is used to temporarily store the data sent by the output well before sending it to the output device

3. Input process SPi and output process SPo: 

  • The input process SPi simulates the peripheral control machine during offline input, and sends the data input by the user from the input device to the input well through the input buffer
  • The output process SPo simulates the peripheral control machine during offline output, and sends the output data required by the user from the output well to the output device through the output buffer
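The data path above can be mimicked with plain queues. This is a toy sketch: the real wells are disk areas, the buffers sit in memory, and SPi/SPo are system processes; all names here are illustrative:

```python
from queue import Queue

input_well = Queue()    # stands in for the disk area that stages device input
output_well = Queue()   # stands in for the disk area that stages process output

def sp_i(device_data):
    """SPi: move data from the input device into the input well
    (via the input buffer in a real system)."""
    for item in device_data:
        input_well.put(item)

def sp_o(printer):
    """SPo: drain the output well to the output device
    (via the output buffer in a real system)."""
    while not output_well.empty():
        printer.append(output_well.get())

sp_i(["job1", "job2"])              # user input is staged in the input well
output_well.put("report.txt")       # a user process "prints" into the output well
printed = []
sp_o(printed)
print(input_well.qsize(), printed)  # → 2 ['report.txt']
```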

The system features are as follows:

  • Increase I/O speed
  • Transform exclusive devices into shared devices
  • Realized virtual device function 

5. Disk storage management

Disk storage is the most important storage device in a computer system: a large number of files are stored on disk, and reading or writing those files involves disk access.

1. Overview

1.1 Structure and data

The disk structure is as follows:

1. Composition of a disk:

  • A disk consists of multiple platters
  • Each platter has two surfaces (sides)
  • Each surface is divided into several tracks (concentric circles)
  • Each track is divided into several sectors

2. Disk addressing: head - cylinder - sector

  • Head: which surface (front or back) of which platter
  • Cylinder: which track (the tracks at the same radius on all surfaces form a cylinder)
  • Sector: the sector number within the track

The sector (Sector) data structure is as follows, mainly consisting of (1) an identifier field (ID Field) and (2) a data field (Data Field)

There are two types of disks:

  • A fixed-head disk has one read/write head per track, all mounted on a rigid arm. The heads read/write in parallel, so I/O is fast; used for large-capacity disks
  • A moving-head disk has only one head per surface; the head moves to seek. I/O is slower but the structure is simple, so it is widely used in small and medium disk devices

1.2 Disk access time

To read or write, the head must first move to the specified track, then wait for the specified sector to rotate under the head, and only then can the data be transferred. Disk access time therefore consists of three parts:

1. Seek time Ts: the time for the head to move to the specified track

Ts is the sum of the arm start-up time s and the time for the head to cross n tracks:

Ts = m × n + s

Tips: m is a constant related to the disk drive's speed; for ordinary disks m ≈ 0.2, for high-speed disks m ≤ 0.1. The arm start-up time s is about 2 ms. Typical seek times are 5~30 ms.

2. Rotational delay time Tτ: the time for the specified sector to rotate under the head

On average this is half a revolution: Tτ = 1/(2r). For a 5400-rpm disk (5400 r/min), each revolution takes 11.1 ms, so the average rotational delay Tτ is about 5.55 ms.

3. Transfer time Tt: the time to read data from or write data to the disk

It depends on the number of bytes b read/written each time and on the rotation speed:

Tt = b / (r × N)

where r is the disk's rotation speed in revolutions per second and N is the number of bytes per track.

If the request is given in sectors, Tt = (1/r) × (sectors read / sectors per track)

4. Total access time Ta:

Ta = Ts + 1/(2r) + b/(r × N)

When the number of bytes read/written at a time equals half a track (b = N/2), the total time is:

Ta = Ts + 1/(2r) + 1/(2r) = Ts + 1/r

Tips: the transfer time is only a small proportion of the total
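Putting the three components together, using the symbols above (the parameter values here are only for illustration):

```python
def disk_access_time(s, m, n, rpm, b, bytes_per_track):
    """Total access time Ta = seek + average rotational delay + transfer, in ms."""
    r = rpm / 60.0 / 1000.0               # revolutions per millisecond
    seek = m * n + s                      # Ts = m*n + s
    rot_delay = 1 / (2 * r)               # average delay: half a revolution
    transfer = b / (r * bytes_per_track)  # Tt = b / (r*N)
    return seek + rot_delay + transfer

# A 5400-rpm disk: one revolution takes ~11.1 ms, average delay ~5.56 ms
print(round(60_000 / 5400 / 2, 2))                        # → 5.56
# Illustrative: s=2 ms, m=0.2, 10 tracks crossed, read half a 1000-byte track
print(round(disk_access_time(2, 0.2, 10, 5400, 500, 1000), 2))  # → 15.11
```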


2. Disk scheduling algorithm

1. Methods to improve disk I/O speed:

  • Improve disk hardware performance
  • Use a good disk scheduling algorithm
  • Set up a disk cache

2. Disk scheduling:

Disks are shared devices that multiple processes may access, so a disk scheduling algorithm is required

The goal of disk scheduling is to reduce the average seek time

2.1 First come first served (FCFS)

Core: schedule requests in the order in which processes issue them

  • Advantages: simple, each request can be processed in turn
  • Disadvantages: the average seek distance is large

2.2 Shortest Seek Time First (SSTF)

Core: always serve the request whose track is closest to the current track (in effect, a form of priority scheduling)

Tips: The current track is 100 

  • Advantage: shorter average seek time
  • Disadvantages: some processes may "starve", and the head may stay on the same track for a long time ("arm stickiness")

2.3 Scan Scheduling Algorithm (SCAN)

Core: choose the next request according to two criteria: the head's current direction of movement, and the shortest distance from the current track in that direction

The head's direction keeps alternating: ... outward, then inward, then outward, ... For example, if the head is currently stopped at track 80 and has just completed the request for track 89, the head is moving inward.

Tips: The current track is 100, and the direction is outward 

  • Advantages: no process "starvation", and shorter average seek time
  • Disadvantages: tracks close to the head but in the opposite direction of its movement wait a long time, and the head may stay on the same track for a long time ("arm stickiness")

2.4 Cyclic scan scheduling algorithm (CSCAN)

Core: after reaching the outermost requested track, the head returns to the smallest track and the scan starts again

Tips: The current track is 100, and the direction is outward  

  • Advantages: no process "starvation", shorter average seek time, and a maximum waiting time shorter than SCAN's (about half)
  • Disadvantage: the head may stay on the same track for a long time ("arm stickiness")

2.5 Example of disk scheduling algorithm

A disk management system numbers its tracks in ascending order from inside to outside. Suppose the queue of pending disk requests is: 15, 10, 30, 150, 190, 80, 95, 40, 140, 20. The head is currently at track 90 and has just completed access to track 93. Use the FCFS, SSTF, SCAN, and CSCAN algorithms to find each algorithm's scheduling sequence and average seek distance.
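The four algorithms can be sketched and run on this exact request sequence. A minimal sketch: since the head moved from 93 to 90, SCAN and CSCAN start toward smaller track numbers, and for CSCAN the jump back from the far end is counted in the seek distance (one common textbook convention):

```python
def total_seek(start, order):
    """Total head movement for a given service order."""
    dist, pos = 0, start
    for track in order:
        dist += abs(pos - track)
        pos = track
    return dist

def fcfs(start, reqs):
    return list(reqs)                      # serve in arrival order

def sstf(start, reqs):
    pending, pos, order = list(reqs), start, []
    while pending:                         # always pick the closest pending track
        nxt = min(pending, key=lambda t: abs(pos - t))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan_down(start, reqs):
    """SCAN with the head first moving toward smaller track numbers."""
    lower = sorted((t for t in reqs if t <= start), reverse=True)
    upper = sorted(t for t in reqs if t > start)
    return lower + upper                   # sweep down, then reverse and sweep up

def cscan_down(start, reqs):
    """CSCAN moving toward smaller tracks; after the lowest request it jumps
    to the highest request and continues in the same direction."""
    lower = sorted((t for t in reqs if t <= start), reverse=True)
    upper = sorted((t for t in reqs if t > start), reverse=True)
    return lower + upper

reqs = [15, 10, 30, 150, 190, 80, 95, 40, 140, 20]
start = 90   # previously at 93, so the head is moving toward smaller tracks
for algo in (fcfs, sstf, scan_down, cscan_down):
    order = algo(start, reqs)
    print(algo.__name__, total_seek(start, order), total_seek(start, order) / len(reqs))
```

For this sequence the totals come out to 660 (FCFS), 270 (SSTF), 260 (SCAN), and 355 (CSCAN), i.e. average seek distances of 66, 27, 26, and 35.5 tracks.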


3. Additional knowledge

1. Disk Cache:

Use the storage space in the memory to temporarily store the information in a series of disk blocks read from the disk.

A cache is a set of disk blocks that logically belong to disk but physically reside in memory.

2. Two forms of the cache in memory:

  • A separate storage area set aside in memory as the disk cache; its size is fixed and unaffected by the number of applications
  • All otherwise-unused memory space treated as a buffer pool shared by the request-paging system and the disk cache

3. Other ways to increase disk speed:

Read-ahead: based on the principle of locality, read adjacent disk blocks into memory in advance, as in a pre-paging strategy

Delayed write: modified pages are not written back to disk immediately; a number of them are accumulated and written back in one batch, reducing the number of I/O operations

Optimized physical block placement: keep the physical blocks of the same file as close together as possible

Virtual disk: use memory or another storage medium to emulate a disk, such as a RAM disk or solid-state disk

Origin: blog.csdn.net/weixin_51426083/article/details/131458085