Operating System Notes - Device Management

5. Device management

5.1 I/O Management Overview

5.1.1 Classification of I/O devices and tasks of I/O management

Classification of I/O devices

  • Classification by equipment usage characteristics

    • Storage devices. Devices the computer uses to store information, such as disks and tapes.
    • Human-computer interaction devices. Devices for interaction between the computer and its user, used to pass information to the CPU or to output information the CPU has processed. For example, the keyboard is an input device, while the monitor and printer are output devices.
    • Network communication devices. Devices used to communicate with remote machines, such as network interfaces and modems.
  • Classified by information exchange unit

    • Character devices. The basic unit of information transfer is the character, e.g., keyboards, printers, and monitors.
    • Block devices. The basic unit of information transfer is the data block, typically 512 B to 4 KB in size. The disk is a typical block device.
  • Classified by transmission rate

    • Low-speed devices. Devices whose transfer rate is only a few bytes to hundreds of bytes per second, such as keyboards and mice.
    • Medium-speed devices. Devices with a transfer rate of several kilobytes to tens of kilobytes per second, such as line printers and laser printers.
    • High-speed devices. Devices with a transfer rate of hundreds of kilobytes to several megabytes per second, such as tape drives and disks.
  • Classification by shared attributes of devices

    • Exclusive devices. A device that only one process may use at a time; it is a critical resource. Once the system allocates such a device to a process, that process occupies it exclusively until it finishes and releases it. Most low-speed devices, such as printers, are exclusive devices.
    • Shared devices. A device that multiple processes may access, the disk being a typical example: several processes can read and write it in alternation, although at any single instant only one process actually accesses it.
    • Virtual devices. A device that uses virtualization techniques to let an exclusive device be used logically by several processes at once. For example, with a virtual (spooled) printer, multiple processes can send print output at the same time, as if each had its own printer.

I/O management tasks and functions

The main tasks of device management are to complete users' I/O requests, allocate I/O devices to users, improve I/O device utilization, and make I/O devices convenient to use. To accomplish these tasks, device management should provide the following functions.

  • Device allocation

    Determine which process an I/O device is allocated to, according to the device type and the corresponding allocation algorithm. If there are device controllers and channels between the I/O device and the CPU, the corresponding controller and channel must also be allocated, so that a path exists for transferring information between the device and the CPU. Processes that cannot be allocated the device they need are placed in a waiting queue. To implement device allocation, the system maintains data structures that record the status of each device.

  • Device handling

    Device handlers implement communication between the CPU and the device controller. When performing I/O, the CPU issues I/O instructions to the device controller to start the device; when the I/O operation completes, the handler responds to and processes the interrupt request raised by the device in a timely manner.

  • Buffer management

    Buffers are introduced to ease the speed mismatch between the CPU and I/O devices. The buffer-management routines are responsible for allocating and releasing buffers and for the related bookkeeping.

  • Device independence

    Device independence (also called device irrelevance) means that applications are independent of the physical devices they use. Users should avoid naming actual physical devices directly when writing programs: if a program names a physical device, it cannot run when that device is absent from the system or has failed, and the program must be modified before it can run again. If a program instead refers to a logical device, the input/output it requests is independent of any particular physical device. Device independence therefore improves the adaptability of user programs.

5.1.2 I/O control mode

A device generally consists of a mechanical part and an electronic part. The electronic part is usually called the device controller. The device controller sits between the CPU and the I/O device; it receives commands from the CPU and controls the operation of the I/O device, freeing the processor from the details of device control. A device controller is an addressable unit: when it controls a single device it has one device address, and when several devices can be attached to it, it has several device addresses.

The device controller should have the following functions :

  • Receive and decode the various commands issued by the CPU
  • Implement data exchange between the CPU and the controller, and between the controller and the device
  • Record the status of the device for the CPU to query
  • Recognize the address of each device it controls
  • Buffer the data output by the CPU, or the data input by the device to the CPU
  • Perform error control on input/output data

Most device controllers are composed of three parts: the interface between the device controller and the processor, the interface between the device controller and the device, and I/O logic.


There are usually four types of I/O control methods :

Direct program control

In early computer systems there was no interrupt mechanism, so when the CPU exchanged data with an I/O device it had to test the device continuously, because the CPU is far faster than the I/O device. This control method is also called polling, or busy-waiting.


Taking data input as an example: when a user process needs input, the processor issues an I/O instruction to the device controller to start the device. While the device is inputting data, the processor repeatedly executes test instructions in a loop to check the device status register. When the status register shows that input has completed, the processor takes the data out of the data register, stores it in the specified memory unit, and then starts the device to read the next datum. Conversely, when a user process needs to output data, it likewise issues a start command and waits for the output operation to complete.

  • Advantage. The direct program control method is very simple.
  • Disadvantage. CPU utilization is very low. Because the I/O device is too slow to keep up with the CPU, the CPU spends most of its time testing whether the device has finished its transfer, wasting an enormous amount of CPU time.
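The polling loop described above can be sketched as follows. This is a minimal simulation, not real hardware access; `DeviceRegisters` and `poll_input` are illustrative names for a simulated controller with one status register and one data register.

```python
class DeviceRegisters:
    """Simulated device controller: a status register and a data register."""
    def __init__(self, data):
        self._pending = list(data)
        self.status = "idle"
        self.data = None

    def start_input(self):
        # The CPU's I/O instruction makes the device transfer one unit.
        if self._pending:
            self.data = self._pending.pop(0)
            self.status = "done"          # input of one unit finished
        else:
            self.status = "exhausted"

def poll_input(dev, memory):
    """CPU busy-waits: test the status register in a loop for every unit."""
    while True:
        dev.start_input()
        if dev.status == "exhausted":
            break
        while dev.status != "done":       # busy-wait loop (wasted CPU time)
            pass
        memory.append(dev.data)           # move data register -> memory

memory = []
poll_input(DeviceRegisters("abc"), memory)
print(memory)   # ['a', 'b', 'c']
```

The inner `while` loop is exactly where a real CPU would burn cycles waiting, which is why utilization is so poor.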

Interrupt control mode

To reduce the CPU's waiting time under direct program control and to increase the degree of parallelism between the CPU and devices, modern computer systems widely use interrupt-driven control of I/O devices.


Taking data input as an example: when a user process needs data, the CPU issues a start command to the device controller to begin input on the peripheral. While the data is being input, the CPU can do other work. When input completes, the device controller sends an interrupt signal to the CPU; on receiving it, the CPU executes the device's interrupt handler, which moves the data from the input data register to a designated memory unit for the requesting process, and then starts the device to read the next datum.

  • Advantage. Compared with direct program control, hardware support for interrupts lets the CPU and I/O devices work in parallel; the CPU need only act when it receives an interrupt signal, which greatly improves CPU utilization.
  • Disadvantage. Some problems remain. Each device interrupts the CPU for every unit of data it inputs or outputs, so one transfer generates many interrupts and still consumes a great deal of CPU time.

The interrupt handler (here, for an interrupt raised on I/O completion) proceeds as follows:

  • Wake up the blocked driver process: a V (signal) operation, or sending a signal, wakes the driver process that was blocked waiting for the I/O.
  • Protect the CPU environment of the interrupted process: push the processor status word (PSW) and program counter (PC) onto the stack; other state that must be saved includes the CPU registers. This is done by hardware.
  • Transfer to the appropriate device handler: test the interrupt source to determine which device raised the interrupt.
  • Interrupt handling: invoke the interrupt handler for that device.
  • Restore the context of the interrupted process: pop the saved state off the stack and restore the CPU execution context.
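The steps above can be sketched as a small simulation. The function and table names (`save_context`, `irq_table`, `handlers`) are illustrative, not a real kernel API, and the ordering shown is one plausible sequence: save context, identify the source, run the device handler, wake the driver, restore context.

```python
interrupt_log = []

def save_context(process):
    # Protect the interrupted process: push PSW/PC (and registers) on the stack.
    interrupt_log.append(f"push PSW/PC of {process}")

def identify_source(irq_table, irq):
    # Test the interrupt source: map interrupt number -> device.
    return irq_table[irq]

def handle_interrupt(process, irq, irq_table, handlers):
    save_context(process)
    device = identify_source(irq_table, irq)
    handlers[device]()                     # run that device's interrupt handler
    interrupt_log.append(f"wake driver process of {device}")  # V op / signal
    interrupt_log.append(f"pop PSW/PC, resume {process}")     # restore context

irq_table = {3: "disk"}
handlers = {"disk": lambda: interrupt_log.append("copy data register to memory")}
handle_interrupt("P1", 3, irq_table, handlers)
print(interrupt_log)
```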

DMA control method

The basic idea of DMA (direct memory access) is to open a direct data path between peripherals and memory. In DMA mode the device controller is more capable: under its control, data is exchanged between device and memory in blocks, without CPU intervention. This greatly reduces the load on the CPU and greatly increases the I/O transfer rate. DMA is generally used for data transfer on block devices.


Again taking data input as an example: when a user process needs data, the CPU loads the starting memory address for the input data and the number of bytes to transfer into the DMA controller's memory address register and transfer byte counter, then starts the device. While data is being input, the CPU can do other work. The input device steals memory cycles to write each datum from its data register into memory, continuing until the whole requested transfer is complete, at which point the DMA controller sends an interrupt signal to the CPU. On receiving the interrupt, the CPU runs the interrupt handler, and returns to the interrupted program when the interrupt ends.

The DMA method has these characteristics: the basic unit of transfer is the data block; data moves in one direction, directly from the device to memory or vice versa; and the CPU intervenes only at the beginning and end of the transfer of one or more blocks, the transfer of the block itself being completed under the controller's supervision.

The main differences between DMA and interrupt-driven control are: interrupt-driven control interrupts the CPU after each datum is transferred, whereas DMA interrupts the CPU only after the entire requested batch has been transferred; and in interrupt-driven control the data transfer itself is performed by the CPU during interrupt processing, whereas in DMA it is performed under the control of the DMA controller.

The DMA controller contains four main registers, used for exchanging block data between the host and the controller:

  • Command/status register (CR): receives I/O commands and control information from the CPU, or holds the status of the device.
  • Memory address register (MAR): holds the memory address for the transfer, whether from device to memory or from memory to device.
  • Data register (DR): holds the data in transit between the device and memory.
  • Data counter (DC): holds the number of words still to be transferred.

Advantage. In DMA mode the device and the CPU work in parallel, and data is exchanged between the device and memory at high speed without CPU intervention.

Disadvantage. DMA still has limitations. The direction of transfer, the starting memory address for the data, and the transfer length must all be set up by the CPU, and each device needs its own DMA controller; as the number of devices grows, using many DMA controllers becomes uneconomical.
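A DMA transfer using the four registers described above (CR, MAR, DR, DC) can be sketched as follows. The classes are illustrative, not real hardware: the point is that the CPU intervenes only to program the registers, and a single interrupt is raised when the counter reaches zero.

```python
class DMAController:
    def __init__(self):
        self.CR = None      # command/status register
        self.MAR = 0        # memory address register
        self.DR = None      # data register
        self.DC = 0         # data counter

def dma_input(dma, device_data, memory, start_addr):
    # CPU intervenes only at the start: program the registers, then continue.
    dma.CR, dma.MAR, dma.DC = "input", start_addr, len(device_data)
    # The controller then steals memory cycles, without CPU involvement:
    for word in device_data:
        dma.DR = word                    # device -> data register
        memory[dma.MAR] = dma.DR         # data register -> memory
        dma.MAR += 1
        dma.DC -= 1
    return "interrupt"                   # single interrupt when DC reaches 0

memory = [None] * 8
dma = DMAController()
signal = dma_input(dma, ["d0", "d1", "d2"], memory, start_addr=2)
print(memory, signal, dma.DC)  # data lands at addresses 2..4; one interrupt; DC == 0
```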

Channel control mode

The channel control method resembles DMA: it is also a memory-centred scheme in which data is exchanged directly between the device and memory. Compared with DMA, a channel requires even less CPU intervention, and one channel can control several devices, further reducing the CPU's burden. A channel is essentially a simple processor. Independent of the CPU, it has its own operation and control logic and its own instruction set, runs under program control, is dedicated to input/output, and controls I/O operations by executing channel (I/O) programs.

Unlike the CPU, a channel has a limited instruction repertoire, because the channel hardware is simple; the commands it can execute are restricted to I/O-related operations. A channel also has no memory of its own: the channel programs it executes are stored in the host's main memory. In other words, the channel and the CPU share memory.

  • Byte multiplexor channel

    A byte multiplexor channel connects multiple low- and medium-speed devices that transfer data a byte at a time, each byte taking a relatively long time to transmit (terminal devices, for example). The channel can therefore serve its peripherals in turn, interleaving bytes, to raise channel utilization. The data width of this channel is generally a single byte.


  • Selector channel

    Byte multiplexor channels are unsuitable for high-speed devices, which led to the selector channel, which transfers data in blocks. Although such a channel can be connected to several high-speed devices, it contains only one subchannel to allocate, so during any period it can execute only one channel program and control only one device's transfer. Once a device occupies the channel it holds it exclusively; even when no data is being transferred and the channel is idle, no other device may use it until the occupying device finishes and releases it. Consequently this channel's utilization is low.

  • Block multiplexor channel

    Although the selector channel has a high transfer rate, it allows only one device to transfer at a time. The block multiplexor channel combines the high transfer rate of the selector channel with the time-shared, parallel operation of the byte multiplexor channel's subchannels. It contains multiple unallocated subchannels, so it achieves both a high data transfer rate and satisfactory channel utilization. For this reason it is widely used to connect multiple high- and medium-speed peripherals, its data transfers being performed in block mode.

The I/O channel method is a development of DMA that further reduces the CPU's involvement in controlling transfers: intervention once per data block is reduced to intervention once per group of data blocks and their associated control and management. At the same time, the CPU, channels, and I/O devices can all operate in parallel, which more effectively improves the resource utilization of the whole system.

Taking data input as an example: when a user process needs data, the CPU issues a start command specifying the I/O operation, the device, and the channel to use. When the channel receives the command, it fetches the channel program stored in memory, executes it, and controls the device's transfer of data into the designated memory area. While the device is performing input, the CPU can do other work. When the transfer ends, the device controller sends an interrupt request to the CPU; the CPU runs the interrupt handler and then returns to the interrupted program.

  • Advantage. The channel method achieves both independent I/O operation and parallel work among components. Channels free the CPU from tedious input/output chores: with channel technology, the CPU runs in parallel with the channels, the channels run in parallel with one another, and the peripherals on each channel can also operate in parallel, achieving the fundamental goal of improving overall system efficiency.
  • Disadvantage. It costs more, since it requires extra hardware (channel processors). Channel control is therefore usually used where large volumes of data are exchanged.

The differences between channel control and DMA control: first, in DMA the CPU must control the size of each transferred data block and the memory area involved, whereas in channel control these are managed by the channel itself; second, each DMA controller serves one device exchanging data with memory, whereas one channel can control data exchange between several devices and memory.
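The idea that a channel executes its own I/O program stored in main memory, and can serve several devices from one program, can be sketched as follows. The command-word tuple format here is invented for illustration; real channel command words (e.g. on IBM mainframes) are binary and richer.

```python
def run_channel_program(program, devices, memory):
    """Execute a channel program: each command word is (device, op, mem_addr, count)."""
    for dev, op, addr, count in program:
        data = devices[dev]
        if op == "read":
            for i in range(count):
                memory[addr + i] = data.pop(0)   # device -> memory, no CPU involved
    return "I/O interrupt"                        # one interrupt when the program ends

memory = [None] * 10
devices = {"disk0": list("abcd"), "disk1": list("xy")}
# The channel program itself resides in main memory, shared with the CPU:
program = [("disk0", "read", 0, 4), ("disk1", "read", 4, 2)]
print(run_channel_program(program, devices, memory), memory[:6])
```

Note that one start command from the CPU triggers the whole program, covering two devices and a group of blocks, before the single completion interrupt.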

5.1.3 I/O software hierarchy

The basic idea of I/O software design is to organize device-management software as a hierarchy: the lower layers are hardware-dependent and hide the hardware's details, while the upper layers present users with a friendly, clear, uniform interface. I/O device-management software is generally divided into four layers: the interrupt handler, the device driver, device-independent software, and user-level software.

Hierarchy overview


When a user program wants to read a block from a file, it does so through the operating system. The device-independent software first searches the cache for the block. If it is not found, the device driver is called to issue the request to the hardware, and the user process blocks until the block has been read. When the disk operation completes, the hardware raises an interrupt and control passes to the interrupt handler, which determines the cause of the interrupt, obtains the required information from the device, and wakes the sleeping process to finish the I/O request, letting the user process continue execution.

Interrupt handler

Interrupt handling is the primary means of controlling input/output devices and the transfer of data between memory and the CPU. Interrupts depend on the hardware, and the code of an I/O device's interrupt service routine is independent of any process. When an I/O operation completes, the device sends an interrupt signal to the CPU, which responds and transfers to the interrupt handler.

The interrupt-handling steps are:

  • Wake up the blocked driver process
  • Protect the CPU environment of the interrupted process
  • Analyze the cause of the interrupt
  • Perform interrupt handling
  • Restore the context of the interrupted process

Device driver

All device-dependent code goes in the device driver. Since a driver is closely tied to its device, one driver should be configured for each type of device.

The driver's task is to accept abstract requests from the device-independent software above it, translate them into concrete commands the device controller can accept, send those commands to the controller, and supervise their correct execution. If the driver is idle when a request arrives, it executes the request immediately; if it is busy with another request, the new request is placed on a waiting queue. The device driver is the only part of the operating system that knows how many registers the device controller has and what they are used for.

The device driver's processing steps:

  • Convert the abstract request into a concrete one
  • Check the legality of the I/O request
  • Read and check the device status
  • Pass the necessary parameters
  • Set the device's working mode
  • Start the I/O device
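The first two steps, checking the request and converting the abstract block number into controller-level terms, can be sketched for a disk. The function name, request layout, and geometry are hypothetical; real drivers emit register writes rather than a dictionary.

```python
def driver_read_block(request, disk_geometry):
    """Translate an abstract 'read block N' request into concrete controller commands."""
    # 1. Check the legality of the I/O request.
    if request["block"] < 0 or request["op"] != "read":
        raise ValueError("invalid I/O request")
    # 2. Convert the abstract block number into cylinder/head/sector terms.
    sectors_per_track, tracks_per_cyl = disk_geometry
    sector = request["block"] % sectors_per_track
    track = request["block"] // sectors_per_track
    cylinder, head = track // tracks_per_cyl, track % tracks_per_cyl
    # 3. Commands the device controller can accept.
    return {"seek": cylinder, "select_head": head, "read_sector": sector}

cmd = driver_read_block({"op": "read", "block": 13}, disk_geometry=(4, 2))
print(cmd)  # {'seek': 1, 'select_head': 1, 'read_sector': 1}
```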

Device-independent software

Although part of the I/O software (such as the drivers) is device-dependent, most of it is device-independent. Where exactly the boundary between drivers and device-independent software lies varies from one operating system to another; the division depends on how the system's designers weigh device independence against driver efficiency and many other factors. Some functions that could be implemented in a device-independent way are instead implemented in drivers for efficiency or other reasons.

The basic task of the device-independent software is to implement the I/O functions common to all devices and to provide a uniform interface to user-space software. Its usual functions include a uniform interface to the device drivers, device naming, device protection, providing device-independent logical blocks, buffer management and block allocation on storage devices, allocation and release of exclusive devices, and error handling.
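Device-independent naming is often explained with a logical unit table that maps the logical name a program uses to a physical device and its driver. The table layout and names below are illustrative only.

```python
# Hypothetical logical unit table (LUT): programs use logical names;
# the mapping to a physical device and driver lives here.
LUT = {
    "/dev/printer": {"physical": "laser-printer-2", "driver": "printer_driver"},
    "/dev/tape":    {"physical": "tape-unit-0",     "driver": "tape_driver"},
}

def open_logical(name):
    """Resolve a logical device name; the program never names the physical device."""
    entry = LUT[name]
    return entry["physical"], entry["driver"]

print(open_logical("/dev/printer"))   # ('laser-printer-2', 'printer_driver')
```

Replacing the printer then only requires updating the table entry, not the user program, which is precisely the adaptability that device independence buys.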

User-level software

Generally speaking, most I/O software resides inside the operating system, but a portion consists of library routines linked with user programs, or even of programs that run entirely outside the kernel. I/O system calls are commonly wrapped by library functions. The SPOOLing system also belongs to this layer.

5.2 I/O core subsystem

The I/O core subsystem comprises the various methods of device control. The services it provides mainly include I/O scheduling, caching and buffering, device allocation and release, and spooling.

5.2.1 I/O scheduling concept

I/O scheduling means determining a good order in which to execute I/O requests. The order in which an application issues system calls is rarely the best choice, so I/O scheduling is needed to improve overall system performance, share device access fairly among processes, and reduce the average waiting time for I/O completion.

The operating system implements scheduling by maintaining a request queue for each device. When an application performs a blocking I/O system call, the request is added to the corresponding device's queue. I/O scheduling reorders queues to improve overall system efficiency and average application response time.
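As a concrete example of reordering a device's request queue, here is a one-pass sweep that services pending disk-cylinder requests in ascending order from the current head position, then the rest on the way back. This is one illustrative policy (SCAN-like), not *the* scheduler of any particular OS.

```python
def reorder_queue(queue, head):
    """Reorder pending cylinder requests: sweep upward from `head`, then back down."""
    ahead = sorted(r for r in queue if r >= head)
    behind = sorted((r for r in queue if r < head), reverse=True)
    return ahead + behind

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(reorder_queue(queue, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```

Compared with first-come-first-served order, the reordered queue drastically reduces total head movement, which is exactly the efficiency gain I/O scheduling is after.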

Methods by which the I/O subsystem improves computer efficiency include I/O scheduling and the use of storage space technologies on main memory or disk, such as buffering, caching, and spooling.

5.2.2 Cache and Buffer

Another technology that improves the degree of parallelism between the processor and peripherals is buffering.

Why buffering is introduced

Although interrupt, DMA, and channel techniques allow devices to run in parallel with one another and with the CPU, the speed mismatch between devices and the CPU remains an objective fact, and it limits further improvement of system performance.

The introduction of buffers eases the speed mismatch between the CPU and devices, increases the degree to which they operate in parallel, and improves system throughput and device utilization. Buffering also reduces the frequency with which devices interrupt the CPU and relaxes the constraints on interrupt response time.

Buffers can be implemented in two ways:

  • With a hardware buffer. Because of its high cost, this is generally used only in a few critical places.
  • By setting aside an area of memory dedicated to temporarily holding input/output data. This area is called a buffer.

Types of buffering

According to the number of buffers the system provides, buffering can be classified as single buffering, double buffering, circular buffering, and buffer pools.

  • Single buffering


    Single buffering is the simplest form of buffering the operating system provides. When a user process issues an I/O request, the OS allocates one buffer for it in memory. Because there is only one buffer, when the device and the processor exchange data, the data must first be written into the buffer, and the device or processor that needs it then takes it from there. Operations on the buffer are serial.

  • Double buffering


    Double buffering increases the overlap between processing and data preparation. For input from a block device, the input device first fills the first buffer; while it fills the second, the operating system can move the contents of the first buffer to the user area for the processor to work on. When the first buffer's data has been consumed and the second buffer is full, the processor can process the second buffer while the input device refills the first. Clearly, double buffering increases the parallelism between the processor and the input device; the process blocks only when both buffers are empty and it needs data.
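A common textbook approximation makes the gain concrete: with T the device time to fill a buffer, M the buffer-to-user-area copy time, and C the CPU processing time per block, the per-block time is roughly max(C, T) + M with a single buffer and max(C + M, T) with two. The figures below are illustrative only.

```python
def single_buffer_time(T, M, C):
    # Input and processing cannot overlap on the one buffer: max(C, T) + M.
    return max(C, T) + M

def double_buffer_time(T, M, C):
    # Filling one buffer overlaps with copying/processing the other: max(C + M, T).
    return max(C + M, T)

T, M, C = 100, 10, 50          # illustrative per-block times
print(single_buffer_time(T, M, C), double_buffer_time(T, M, C))  # 110 100
```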

  • Circular buffering

    Double buffering works well when the device's input/output speed roughly matches the processor's processing speed. When the two speeds differ greatly, however, double buffering is much less effective, and circular buffering was introduced for that case.


    A circular buffer consists of multiple buffers of equal size. Each buffer has a link pointer to the next, and the last buffer's pointer points back to the first, so the buffers form a ring. Circular buffering uses two pointers, in and out. For input, data received from the device goes into the buffer that in points to, the first empty buffer; when the user process needs data, it takes a filled buffer from the ring, the one that out points to, the first full buffer. For output the roles are reversed: the process deposits processed data into empty buffers, and when the device is free, it drains the full buffers and outputs their data.
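The in/out pointer mechanics above can be sketched as a small ring (a minimal single-threaded model; a real OS would add synchronization between producer and consumer):

```python
class RingBuffer:
    """Fixed ring of buffers: `in` points at the next empty slot, `out` at the next full one."""
    def __init__(self, n):
        self.slots = [None] * n
        self.n, self.i_in, self.i_out, self.count = n, 0, 0, 0

    def put(self, item):            # device fills the next empty buffer
        assert self.count < self.n, "all buffers full"
        self.slots[self.i_in] = item
        self.i_in = (self.i_in + 1) % self.n   # pointer wraps: the ring
        self.count += 1

    def get(self):                  # process takes the next full buffer
        assert self.count > 0, "all buffers empty"
        item = self.slots[self.i_out]
        self.i_out = (self.i_out + 1) % self.n
        self.count -= 1
        return item

rb = RingBuffer(3)
for x in ("b1", "b2", "b3"):
    rb.put(x)
print(rb.get(), rb.get())   # b1 b2  (FIFO order)
rb.put("b4")                # the in pointer has wrapped around the ring
print(rb.get(), rb.get())   # b3 b4
```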

  • Buffer pool

    Circular buffering generally serves a particular I/O process and computing process, so when the system has many processes it needs many such buffers, which consumes a lot of memory at low utilization. Buffer pools are therefore widely used in computer systems. A buffer pool consists of multiple buffers that can be shared by many processes and used for both input and output.


    The buffers in the buffer pool can form the following three queues according to their usage:

    • empty buffer queue
    • A buffer queue filled with input data (input queue)
    • A buffer queue filled with output data (output queue)

    Four kinds of working buffers:

    • Working buffer for collecting input data
    • Working buffer for extracting input data
    • Working buffer for collecting output data
    • Working buffer for extracting output data

    When the input process needs to input data, it takes an empty buffer from the head of the empty-buffer queue, uses it as the working buffer for collecting input, fills it with data, and hangs it on the tail of the input queue when full. When the computing process needs input, it obtains a buffer from the input queue as its working buffer for extracting input, consumes the data, and then hangs the buffer on the tail of the empty-buffer queue. When the computing process needs to output data, it takes an empty buffer from the head of the empty-buffer queue as the working buffer for collecting output, and hangs it on the tail of the output queue once it is filled. When output is due, the output process obtains a buffer from the output queue as its working buffer for extracting output, and after the data has been written out, hangs the buffer on the tail of the empty-buffer queue.
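The input half of that flow can be sketched with three queues; the queue and function names are illustrative (textbook presentations attach similar roles to the four working buffers):

```python
from collections import deque

# Empty-buffer queue, input queue, output queue; buffer contents kept separately.
emq, inq, outq = deque(["b0", "b1", "b2"]), deque(), deque()
contents = {}

def input_process(data):
    """Take an empty buffer from the head of emq, fill it, hang it on inq's tail."""
    buf = emq.popleft()
    contents[buf] = data
    inq.append(buf)

def compute_takes_input():
    """Take a full buffer from inq, consume the data, return the buffer to emq."""
    buf = inq.popleft()
    data = contents.pop(buf)
    emq.append(buf)
    return data

input_process("record-1")
input_process("record-2")
print(compute_takes_input(), compute_takes_input())  # record-1 record-2
print(list(emq))   # all buffers back on the empty-buffer queue
```

The output half is symmetric, with `outq` playing the role of `inq`.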

Caches versus buffers

A cache is high-speed memory that holds copies of data; accessing the cache is faster and more efficient than accessing the original data. Although caches and buffers both sit between a high-speed device and a low-speed device, a cache is not the same thing as a buffer; there are significant differences between them.

  • They store different data. A cache holds copies of data on the low-speed device: whatever is in the cache must also exist on the low-speed device. A buffer holds data in transit between the low-speed device and the high-speed device; the data passes from the low-speed device into the buffer and from the buffer to the high-speed device, and need not also exist as a copy on the low-speed device.

  • Their purposes differ. A cache exists to hold copies of frequently accessed data from the low-speed device, so that the high-speed device need not go to the low-speed device every time; if the wanted data is not in the cache, the high-speed device must still access the low-speed device. A buffer exists to smooth the speed mismatch between high-speed and low-speed devices: every exchange between them passes through the buffer, and the high-speed device never accesses the low-speed device directly.

5.2.3 Equipment allocation and recycling

Device allocation is one of the functions of device management. When a process issues an I/O request, the device-allocation routine assigns it the required device according to a given allocation policy, along with the corresponding device controller and channel, so that communication between the CPU and the device is assured.

Data structures in device management

To manage and control I/O devices, the system must record information about each device, device controller, and channel. The main data structures used for device allocation are the device control table (DCT), the controller control table (COCT), the channel control table (CHCT), and the system device table (SDT). Each device needs a control table, each controller needs a control table, and each channel that drives the controllers needs a control table as well; finally, the devices, as the ultimate resources, are catalogued in one table for the whole system, the system device table.


  • DCT. The system configures one device control table per device to record the device's characteristics and its connection to the I/O controller. The device status field indicates whether the device is currently busy or idle, the device waiting-queue pointer points to the queue of processes waiting to use the device, and the COCT pointer points to the controller to which the device is attached.
  • COCT. Each controller has a controller control table, which reflects the controller's usage status and its connection to a channel.
  • CHCT. Each channel likewise has a channel control table, which reflects the status of the channel.
  • SDT. There is only one system device table in the entire system; it records the status of every physical device attached to the system, one entry per device. Each SDT entry includes the device type, the device identifier, and a DCT pointer, which points to that device's device control table.
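The pointer chain SDT → DCT → COCT → CHCT can be sketched as plain records. This is a minimal Python sketch; the field names are illustrative, not taken from any real operating system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CHCT:                 # channel control table, one per channel
    channel_id: int
    busy: bool = False

@dataclass
class COCT:                 # controller control table, one per controller
    controller_id: int
    busy: bool = False
    chct: Optional[CHCT] = None   # channel this controller is attached to

@dataclass
class DCT:                  # device control table, one per device
    device_id: str
    busy: bool = False
    wait_queue: list = field(default_factory=list)  # waiting processes
    coct: Optional[COCT] = None   # controller this device is attached to

@dataclass
class SDTEntry:             # one row of the (single) system device table
    dev_type: str
    dev_id: str
    dct: DCT

# The single system device table: one entry per attached physical device.
chan = CHCT(channel_id=0)
ctrl = COCT(controller_id=0, chct=chan)
disk = DCT(device_id="disk0", coct=ctrl)
sdt = [SDTEntry(dev_type="disk", dev_id="disk0", dct=disk)]
```

Starting from the SDT, the allocator can reach the DCT of a device, follow its COCT pointer to the controller, and follow the controller's CHCT pointer to the channel.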

Device allocation strategy

In a computer system, the number of processes requesting service from a device is often greater than the number of devices, so multiple processes compete for devices of a given type. To keep the system working in an orderly manner, the system should consider the following issues when allocating devices.

  • Nature of use of the equipment

    • Exclusive allocation. Dedicated (exclusive) devices should be allocated exclusively: once a device is assigned to a process, it is occupied by that process alone until the process completes or releases it, after which the system may allocate it to another process. Most low-speed devices suit this mode; its main drawback is that the I/O device is often underutilized.
    • Shared allocation. For shared devices, the system can allocate them to multiple processes at the same time. The shared allocation method significantly improves device utilization, but access to the device needs to be reasonably scheduled.
    • Virtual allocation. Virtual allocation applies to virtual devices. When a process requests a dedicated device, the system instead allocates it part of the storage space of a shared device; when the process wants to exchange information with the device, the system stores the information to be exchanged in that space, and at an appropriate time moves the information from the device into the space, or from the space to the device.
  • Device Allocation Algorithm

    • First come, first served. Requests form a queue in order of arrival, and the device is always allocated to the process at the head of the queue.
    • Highest priority first. Devices are allocated according to process priority; requests of equal priority are served first come, first served.
  • Device allocation security

    The so-called security of device allocation means that process deadlock should not occur during device allocation.

    Devices can be allocated statically or dynamically. Static allocation means that before a user job begins execution, the system allocates all the devices, device controllers, and channels the job requires in one step; once allocated, they are held until the job terminates. Static allocation cannot cause deadlock, but device utilization is low. Dynamic allocation means that devices are allocated on demand during process execution: a process requests a device when it needs it and releases it immediately after use. Dynamic allocation helps improve device utilization, but an ill-chosen allocation algorithm may cause deadlock.

    Dynamic device allocation may be done in a safe or an unsafe manner.

    • In the safe allocation method, whenever a process issues an I/O request it enters the blocked state and is not awakened until the I/O completes. This method gives up the "request and hold" condition, so no deadlock can occur, but the process advances slowly.
    • In the unsafe allocation method, a process may continue running after issuing an I/O request and may issue further I/O requests, so one process may be operating several devices at once. The process advances quickly, but deadlock may occur, so a safety check must be performed before each device is allocated.
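The two queue-based allocation algorithms above (first come, first served, and highest priority first with FCFS tie-breaking) can be sketched as follows. This is an illustrative sketch; process names and priority values are made up.

```python
import heapq
from collections import deque

def fcfs_next(queue):
    """First come, first served: always allocate the device to the
    process at the head of the request queue."""
    return queue.popleft() if queue else None

def priority_next(heap):
    """Highest priority first; equal priorities fall back to FCFS order.
    Entries are (priority, arrival_seq, pid); smaller value = higher
    priority, and arrival_seq breaks ties in arrival order."""
    if not heap:
        return None
    _, _, pid = heapq.heappop(heap)
    return pid

requests = deque(["P1", "P2", "P3"])            # arrival order
prio = [(2, 0, "P1"), (1, 1, "P2"), (1, 2, "P3")]
heapq.heapify(prio)
```

With these example requests, `fcfs_next` serves P1 first, while `priority_next` serves P2 and P3 (priority 1) before P1 (priority 2), breaking the P2/P3 tie by arrival order.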

Device independence

Device independence means that applications are independent of the specific physical devices they use; it improves the flexibility of device allocation and the utilization of devices. To improve their adaptability and scalability, modern operating systems without exception implement device independence.

To achieve device independence, the concepts of logical device and physical device are introduced, and a logical device table (LUT) is set up in the system; each entry contains a logical device name, a physical device name, and the entry address of the device driver. An application requests a class of device by its logical device name; the system binds the logical device allocated to the process to a physical device and a driver entry address, and records this information in an LUT entry. Thereafter, when the process issues an I/O request by logical device name, the corresponding physical device and driver entry address can be found in the table.

The benefits of device independence include flexible device allocation and easy I/O redirection.

To achieve device independence, a layer of device-independent software must sit above the device drivers to perform the operations common to all I/O devices and to present a unified interface to user-level software. The key is the logical device table that maps logical devices to physical devices, each entry holding three items: the logical device name, the physical device name, and the driver entry address. When an application requests an I/O device by logical device name, the system allocates a corresponding physical device to it and creates an LUT entry; when the process later issues an I/O request by logical device name, the physical device name and driver entry address are obtained from the LUT.

In short, the operating system implements device independence by providing device-independent software, configuring the logical device table, and mapping logical devices to physical devices.
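The LUT lookup described above can be sketched in a few lines. This is a hypothetical illustration: the device names and the stand-in driver function are invented, and a real LUT entry would hold a driver entry address rather than a Python function.

```python
# logical device name -> (physical device name, driver entry point)
lut = {}

def bind(logical, physical, driver):
    """First use of a logical device: the system allocates a physical
    device and records the mapping in an LUT entry."""
    lut[logical] = (physical, driver)

def request_io(logical, data):
    """Later I/O requests name only the logical device; the physical
    device and the driver entry come from the LUT."""
    physical, driver = lut[logical]
    return driver(physical, data)

def fake_printer_driver(physical, data):   # stand-in for a driver entry
    return f"{physical}: {data}"

bind("LPT", "printer0", fake_printer_driver)
```

Because the application only ever names `"LPT"`, rebinding that entry to a different physical device (I/O redirection) requires no change to the application.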

Device allocation procedure

  • Device allocation for single-channel I/O systems

    When a process makes an I/O request, the system's device allocation program can allocate the device according to the following steps:

    • Assign equipment
    • Assign device controller
    • Assign channel

    During allocation, if the corresponding device is busy, the process will be inserted into the corresponding waiting queue.

  • Device allocation for multi-channel I/O systems

    To improve system flexibility, a multi-channel I/O structure is usually adopted: a device is attached to several device controllers, and each controller is attached to several channels. When a process issues an I/O request, the system may allocate any idle device of the requested type to the process. The steps are as follows:

    • Search the system device table by device type, find the first idle device of that type, and check whether the allocation is safe. If it is safe, allocate the device; otherwise insert the process into the waiting queue for this device type.
    • After the device is allocated, search the controller control tables for the first idle controller attached to the allocated device. If none is idle, return to step 1 and try the next idle device.
    • After the controller is assigned, search the channels attached to it for the first idle channel. If none is idle, return to step 2 and try the next idle controller. If an idle channel is found, allocation succeeds: the device, controller, and channel are assigned to the process, and the I/O device is started to begin the transfer.
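The three-step search with backtracking can be sketched as nested loops over an illustrative topology (plain dicts with a `busy` flag; real systems would walk the SDT/DCT/COCT/CHCT tables, and this sketch omits the safety check of step 1).

```python
def allocate(devices):
    """Walk device -> controller -> channel looking for an all-idle path,
    falling back to the next candidate when a level has no idle entry.
    Returns the (device, controller, channel) triple, or None if the
    request must wait in the device-type queue."""
    for dev in devices:
        if dev["busy"]:
            continue                            # step 1: find idle device
        for ctrl in dev["controllers"]:
            if ctrl["busy"]:
                continue                        # step 2: idle controller
            for chan in ctrl["channels"]:
                if not chan["busy"]:            # step 3: idle channel
                    dev["busy"] = ctrl["busy"] = chan["busy"] = True
                    return dev, ctrl, chan
        # no idle controller/channel reachable from this device: try next
    return None
```

Note that nothing is marked busy until a complete device/controller/channel path is found, which matches the backtracking in the steps above.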

Device reclamation

When a process finishes using an I/O device, it releases the device, device controller, and channel it occupied; the system reclaims them and updates the corresponding data structures for the next allocation.

5.2.4 Spooling Technology

The number of dedicated devices in a system is limited and often cannot satisfy the needs of all processes, so such devices become a system "bottleneck" and many processes block waiting for them. Moreover, a process that is assigned a dedicated device typically holds it for its entire run yet uses it only occasionally, so device utilization is low. To overcome this shortcoming, a shared device is used to virtualize the dedicated device, transforming it into a shared one and thereby improving device utilization and system efficiency. This technique is called SPOOLing.

SPOOLing stands for Simultaneous Peripheral Operations On-Line. It is a technique for operating peripherals online simultaneously with the CPU, also known as queued dump technology; a SPOOLing system is different from offline I/O.


SPOOLing is a technique for exchanging data between low-speed I/O devices and the host; its core idea is to obtain the effect of offline operation in an online manner. Low-speed devices are connected, through channels and buffers in main memory, to a high-speed device, usually auxiliary storage. Buffers are set up in memory, and an input well and an output well are set up on the high-speed device. On input, information moves from the low-speed device into a buffer and then into the input well on the high-speed device; on output, it moves from the output well on the high-speed device into a buffer and finally to the low-speed device.

The composition of the SPOOLing system

  • Input well and output well. These are two storage areas opened on disk. The input well plays the role of the disk in offline input and holds the data read from input devices; the output well plays the role of the disk in offline output and holds the data to be output by user programs.

  • Input buffer and output buffer. These are two buffers opened in memory. The input buffer temporarily holds data coming from the input device before it is moved to the input well; the output buffer temporarily holds data coming from the output well before it is sent to the output device.

  • Input process and output process. The input process plays the role of the peripheral control machine in offline input: it moves the data the user needs from the input device, through the input buffer, into the input well; when the data is required, the CPU reads it directly from the input well into memory. The output process plays the role of the peripheral control machine in offline output: it moves the data the user wants to output from memory into the output well; when the output device is idle, the data in the output well is sent to the output device through the output buffer.

Turning a dedicated printer into one that many users can share is the classic application of SPOOLing. Specifically, for a user's print output the system does not actually allocate the printer to the user process; instead, it requests a free disk area in the output well and copies the data to be printed into it, then fills in a print-request record for the user and hangs the record on the print-request queue. Whenever the printer is idle, the output process takes a record from the head of the queue, moves the data to be printed from the output well through a memory buffer to the printer, and prints; this repeats until the print queue is empty.
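The flow of a spooled print request can be sketched as follows. This is a minimal illustration (the disk-resident output well and the memory buffer are simulated with ordinary Python containers, and names are invented).

```python
from collections import deque

output_well = {}        # job_id -> data stored on disk (simulated)
print_queue = deque()   # queue of print-request records
printed = []            # stand-in for pages leaving the physical printer

def request_print(job_id, data):
    """User process: the printer is NOT allocated to the process. The
    data goes into a free area of the output well, and a print-request
    record is hung on the print-request queue."""
    output_well[job_id] = data
    print_queue.append(job_id)

def output_process():
    """Runs while the printer is idle: take records from the head of the
    queue and move their data from the output well to the printer,
    until the queue is empty."""
    while print_queue:
        job_id = print_queue.popleft()
        printed.append(output_well.pop(job_id))   # via a memory buffer
```

Because `request_print` returns as soon as the data is in the output well, many processes can "print" concurrently while the single physical printer drains the queue in order.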

Features of SPOOLing Technology

  • Improved I/O speed. Operations on slow I/O devices become operations on the input well or output well, as in offline operation; this speeds up I/O and eases the speed mismatch between the CPU and slow I/O devices.
  • The device is never assigned to a process. Instead, a storage area in the input well or output well is allocated to the process, and an I/O request record is created for it.
  • Virtual device functionality is realized. Several processes use a dedicated device at the same time, each believing it has the device to itself; the device is thus allocated virtually, but what each process holds is a logical device.
  • Besides being a speed-matching technique, SPOOLing is a virtual device technique: it uses one class of physical device to simulate another, so that each job uses only the virtual device during execution rather than the physical dedicated device directly. This turns dedicated devices into shareable ones, improving device utilization and system efficiency.


Origin blog.csdn.net/pipihan21/article/details/129809470