Operating system final review structure

Table of contents

Preface

Introduction to operating systems

Operating system goals

Basic characteristics of operating systems

Main functions of the operating system

Basic concepts of system calls

Process description and control

The difference between process and program

Why can't programs be executed concurrently (the reason for introducing processes)

Basic states and transitions of processes

Types of process communication

The concept of threads and the difference from processes and the reasons for introducing threads

Processor Scheduling and Deadlock

Processor Scheduling Hierarchy

The definition, necessary conditions and treatment methods of deadlock

Process synchronization

Memory management

Cache and disk cache

Program loading and linking

Program link

Contiguous allocation storage management

The difference between paging and segmentation

Virtual memory

Principles of Virtual Storage Implementation

Multiprogramming and "thrashing"

How to prevent "thrashing"

Input/output system

Introduction to interrupts

Basic concepts of device-independent software

The reason for the introduction of buffering

File management

File operations

File Directory

File Sharing

Disk storage management

External storage organization and other points

What are the ways and characteristics of improving the disk I/O speed


Preface

        This article outlines the overall structure of a final review for an operating systems course. I hope it is of some help to students preparing for final exams, or to professionals interested in operating systems.

  • Introduction to operating systems

    • Operating system goals

      • Convenience: Provides various commands to control the computer, which greatly facilitates users

      • Effectiveness: Improves resource utilization and system throughput

      • Scalability: The operating system must have good scalability to facilitate the addition of new functions and modules

      • Openness: Hardware and software developed in compliance with international standards are compatible with each other and facilitate interconnection.

    • Basic characteristics of operating systems

      • Concurrency

      • Sharing

      • Virtuality

      • Asynchrony

    • Main functions of the operating system

      • Processor management functions

      • Memory management functions

      • Device management function

      • File management function

      • Interface management function

      • What's New in Modern Operating Systems

        • Guarantee system security

        • Support users to obtain services through the Internet

        • Can handle multimedia information

    • Basic concepts of system calls

      • System calls are the only interface the operating system exposes to programmers. Through system calls, a program can dynamically request and release resources and invoke existing system functions to carry out hardware-related work. A system call thus acts like a "black box" that shields the user from the operating system's internal operations and exposes only the relevant functionality.

  • Process description and control

    • The difference between process and program

      • State: a program is static and only gives rise to a dynamic process when executed; a process is dynamic, representing one execution of a program

      • Storage location: programs are usually stored on non-volatile media; processes reside in volatile memory

      • Independence: processes are independent units, each with its own address space and resources; programs have no such independence

      • Concurrency: programs by themselves can only be executed sequentially; processes can be executed concurrently

      • Life cycle: a program is permanent unless explicitly deleted; a process is temporary and exists only while the program is running

      • System resources: the process is the basic unit to which the operating system allocates resources such as CPU time and memory space; a program, being static, is allocated no such resources

    • Why can't programs be executed concurrently (the reason for introducing processes)

      • A program is only a static set of instructions, with no state or context of its own: it cannot record which instruction was last executed, so its execution cannot be suspended and later resumed. Concurrent execution requires exactly this ability to interleave, stop, and restart. The process was introduced as a dynamic entity that carries this execution state, making it possible to run multiple programs concurrently and thereby improve system efficiency and responsiveness.

    • Basic states and transitions of processes

      • Ready state: the process has been allocated all necessary resources except the CPU, and can execute immediately as soon as it obtains the CPU

      • Execution (running) state: the process has obtained the CPU and its program is executing

      • Blocked state: the executing process cannot continue because it is waiting for some event (for example, the completion of an I/O operation), so its execution is blocked

    • Types of process communication

      • Shared-memory systems

      • Pipe communication systems

      • Message-passing systems

      • Client-server systems

    • The concept of threads and the difference from processes and the reasons for introducing threads

      • Resource overhead

      • Memory space

      • Communication

      • Scope of failure impact

      • Threads were introduced to reduce the overhead of context switching, improve concurrency on multiprocessor systems, and simplify data sharing and communication between tasks

  • Processor Scheduling and Deadlock

    • Processor Scheduling Hierarchy

      • High-level scheduling (long-term scheduling, job scheduling)

        • Object: jobs. Function: according to a certain algorithm, selects jobs from the backing queue on external storage, loads them into memory, creates processes for them, allocates the necessary resources, and inserts them into the ready queue

      • Low-level scheduling (short-term scheduling, process scheduling)

        • Object: processes. Function: according to a certain algorithm, decides which process in the ready queue obtains the processor; the dispatcher then assigns the processor to the selected process

      • Intermediate scheduling (memory scheduling)

        • Main purpose: to improve memory utilization and system throughput
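
The job of the low-level scheduler — picking the next ready process by some algorithm — can be illustrated with a minimal first-come-first-served (FCFS) sketch. The function name and the `(pid, arrival, burst)` tuple format are illustrative assumptions, not from the course material itself:

```python
from collections import deque

def fcfs_schedule(arrivals):
    """Simulate non-preemptive FCFS scheduling.

    arrivals: list of (pid, arrival_time, burst_time), sorted by arrival.
    Returns {pid: (start_time, finish_time)}.
    """
    ready = deque(arrivals)          # the ready queue
    clock = 0
    timeline = {}
    while ready:
        pid, arrive, burst = ready.popleft()
        start = max(clock, arrive)   # CPU may sit idle until the job arrives
        clock = start + burst        # run the process to completion
        timeline[pid] = (start, clock)
    return timeline
```

For example, `fcfs_schedule([("P1", 0, 3), ("P2", 1, 2), ("P3", 2, 4)])` runs P1 during 0–3, P2 during 3–5, and P3 during 5–9.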

    • The definition, necessary conditions and treatment methods of deadlock

      • definition

        • A group of processes is deadlocked if each process in the group is waiting for an event to occur that can only be caused by other processes in the group.

      • necessary conditions

        • Mutual exclusion condition

          • A process uses the allocated resource exclusively; within a period of time, a resource can be occupied by only one process

        • Hold-and-wait (request-and-hold) condition

          • A process already holds at least one resource but requests a new one that is occupied by another process; the requesting process blocks while keeping the resources it already holds

        • No-preemption condition

          • Resources a process has obtained cannot be taken away before it finishes with them; they are released only by the process itself

        • Circular-wait condition

          • When deadlock occurs, there must exist a circular "process-resource" wait chain

      • Handling methods

        • Prevention and avoidance

          • Adopt a protocol that prevents or avoids deadlock, ensuring the system never enters a deadlock state

        • Detection and recovery

          • Allow the system to enter a deadlock state, but detect the deadlock and recover from it

        • Ignoring

          • Ignore the problem entirely and assume the system will never deadlock
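
Deadlock avoidance is commonly taught via the Banker's algorithm. Below is a minimal sketch of its safety check — determining whether some completion order exists in which every process can finish. The matrix layout (one row per process, one column per resource type) is the standard textbook form; the example data in the usage note are hypothetical:

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check.

    available:  vector of free units per resource type.
    allocation: per-process vectors of currently held units.
    need:       per-process vectors of remaining demand.
    Returns True if the state is safe (some order lets all finish).
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)
```

With `available=[3,3,2]` and the classic five-process allocation/need matrices, the check finds a safe sequence; if no process's remaining need fits in `work`, the state is unsafe.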

  • Process synchronization

  • This part mainly involves programming content such as P/V (semaphore) operations. Owing to the format of this article, the original code is not reproduced here; please consult your course materials.
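
As a supplement, the P and V operations can be sketched with Python's `threading.Semaphore` (`acquire` plays the role of P, `release` the role of V) in the classic bounded-buffer producer-consumer problem. The 3-slot buffer size is an arbitrary assumption for illustration:

```python
import threading
from collections import deque

buffer = deque()
empty = threading.Semaphore(3)   # count of empty slots in a 3-slot buffer
full = threading.Semaphore(0)    # count of filled slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer

def producer(items):
    for item in items:
        empty.acquire()          # P(empty): wait for a free slot
        mutex.acquire()          # P(mutex): enter critical section
        buffer.append(item)
        mutex.release()          # V(mutex): leave critical section
        full.release()           # V(full): one more filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # P(full): wait for a filled slot
        mutex.acquire()
        out.append(buffer.popleft())
        mutex.release()
        empty.release()          # V(empty): one more free slot
```

Running one producer thread and one consumer thread delivers all items in order, with the semaphores preventing both buffer overflow and consumption from an empty buffer.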
  • Memory management

    • Cache and disk cache

      • Cache: an important component of modern computer architecture, a memory layer between the registers and main memory. It holds copies of frequently used data from main memory, reducing the processor's accesses to main memory and thereby greatly improving program execution speed

      • Disk cache: because disk I/O is much slower than memory access, a disk cache is set up to alleviate the speed mismatch between the two. It temporarily stores frequently used disk data to reduce the number of disk accesses

    • Program loading and linking

      • Program loading

        • Absolute loading: when the computer system is small and runs only a single program, it is entirely possible to know in advance where the program will reside in memory, so the program can be loaded with absolute addresses

        • Relocatable loading: the target module can be loaded anywhere in memory, with its relative addresses converted to absolute addresses once at load time (static relocation); this suits multiprogramming environments

        • Dynamic run-time loading: after the load module is brought into memory, relative addresses are not converted to absolute addresses immediately; the conversion is deferred until the instruction is actually executed

      • Program link

        • Static linking

          • Before the program runs, link the target modules and the library functions they require into a single complete load module, which is never taken apart again

        • Load-time dynamic linking

          • The target modules produced by compiling the user source program are linked as they are loaded into memory

        • Run-time dynamic linking

          • During execution, when a called module is found not yet loaded into memory, the OS immediately locates that module, loads it, and links it to the modules already loaded

    • Contiguous allocation storage management

      • Single contiguous allocation

      • Fixed partition allocation

      • Dynamic partition allocation

      • Dynamic Relocation Partition Allocation

    • The difference between paging and segmentation

      • A page is a physical unit of information; a segment is a logical unit of information

      • Page size is fixed and determined by the system; segment size varies and is determined by the user program

      • A paged user address space is one-dimensional; a segmented address space is two-dimensional
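
The "one-dimensional address space" point can be made concrete with a sketch of paged address translation: a linear virtual address splits into a page number and an offset, and a page table maps page number to frame number. The 1 KB page size and the dictionary page table are illustrative assumptions:

```python
PAGE_SIZE = 1024  # hypothetical 1 KB pages

def translate(vaddr, page_table):
    """Translate a one-dimensional virtual address to a physical address.

    page_table maps page number -> frame number; a missing entry
    corresponds to a page fault (raised here as KeyError).
    """
    page, offset = divmod(vaddr, PAGE_SIZE)   # split into (page, offset)
    frame = page_table[page]                  # look up the resident frame
    return frame * PAGE_SIZE + offset
```

For instance, with page 2 mapped to frame 5, virtual address `2*1024 + 100` translates to physical address `5*1024 + 100`.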

  • virtual memory

    • Principles of Virtual Storage Implementation

      • Paging

        • Paging divides physical memory into fixed-size frames; virtual memory is likewise divided into fixed-size pages, and a page table maps each virtual page to a physical frame

      • Page replacement

        • A page fault occurs when a program accesses a page that is not in physical memory. The operating system then selects a resident page, writes its contents back to disk if it has been modified, reads the required page from disk into physical memory, and finally updates the page table to reflect the new virtual-to-physical mapping
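
The replacement step above needs a policy for choosing the victim page. A minimal sketch of least-recently-used (LRU) replacement, counting page faults for a reference string under a fixed number of frames (the function name and interface are illustrative assumptions):

```python
from collections import OrderedDict

def lru_page_faults(references, frames):
    """Count page faults under LRU replacement with `frames` physical frames."""
    memory = OrderedDict()            # resident pages, oldest-used first
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)  # hit: mark as most recently used
        else:
            faults += 1               # page fault: page not resident
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = None
    return faults
```

On the textbook-style reference string `7,0,1,2,0,3,0,4,2,3,0,3,2` with 3 frames, this yields 9 faults.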

    • Multiprogramming and "thrashing"

      • Multiprogramming and processor utilization

        • Because a virtual memory system logically expands main memory, a process can start running after only part of its program and data has been loaded. This makes it attractive to run more processes — that is, to raise the degree of multiprogramming — in order to improve processor utilization

      • Causes of thrashing

        • When too many processes run at the same time, each is allocated too few physical blocks to meet its basic running requirements. Every process then page-faults frequently and keeps asking the system to load the faulted pages, so the system spends most of its time paging instead of doing useful work

    • How to prevent "thrashing"

      • Adopt local replacement strategy

      • Integrating working set algorithms into processor scheduling

      • Use the "L=S" criterion to adjust the page fault rate

      • Select paused process
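
The working-set idea mentioned above — give each process enough frames to hold the pages it has used recently — can be sketched directly from its definition W(t, Δ): the set of distinct pages referenced in the last Δ references up to time t. The function below is an illustrative sketch, not code from the course:

```python
def working_set(references, t, delta):
    """Working set W(t, delta): distinct pages referenced in the
    window of `delta` references ending at (and including) time t."""
    start = max(0, t - delta + 1)     # window start, clipped at time 0
    return set(references[start:t + 1])
```

If the sum of working-set sizes across all processes exceeds the number of physical frames, admitting another process risks thrashing; suspending a process is the usual remedy.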

  • Input/output system

    • Introduction to interrupts

      • An interrupt is the CPU's response to a signal sent by an I/O device: the CPU suspends the program being executed, saves its context, and transfers control to the I/O device's interrupt handler; after the handler finishes, execution returns to the breakpoint and the original program continues

    • Basic concepts of device-independent software

      • Using a device by its physical device name ties an application to that specific device

      • Introducing logical device names decouples applications from physical devices

      • The system implements the conversion from logical device names to physical device names
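
The conversion from logical to physical device names is typically done through a logical unit table (LUT). A minimal sketch, where the table contents and function name are made-up examples:

```python
# Hypothetical logical unit table: logical name -> physical device name.
lut = {"printer": "/dev/lp0", "disk": "/dev/sda"}

def resolve_device(logical_name):
    """Resolve a logical device name to its physical device name."""
    try:
        return lut[logical_name]
    except KeyError:
        raise ValueError(f"no physical device bound to {logical_name!r}")
```

An application that requests "printer" keeps working unchanged if the administrator later rebinds that entry to a different physical device.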

    • The reason for the introduction of buffering

      • Mitigate the conflict arising from the speed mismatch between the CPU and I/O devices

      • Reduce the frequency of CPU interrupts and relax restrictions on CPU interrupt response time

      • Solve the problem of data granularity mismatch

      • Improve parallelism between CPU and I/O devices

  • File management

    • File operations

      • Open: Establish a connection between the user and the specified file

      • Close: Disconnect this connection

    • File Directory

      • File control block and inode

      • simple file directory

      • tree directory

      • Acyclic graph directory

      • Directory query technology

    • File Sharing

      • File Sharing Using Directed Acyclic Graph

      • File Sharing Using Symbolic Links

  • Disk storage management

    • External storage organization and other points

      • Contiguous organization: supports both sequential and direct access, and sequential access is fast

      • Linked organization: eliminates external fragmentation and improves external storage utilization; records are easy to insert, delete, and modify; adapts well to dynamic file growth

      • Indexed organization: supports direct access and produces no external fragmentation

    • What are the ways and characteristics of improving the disk I/O speed

      • Methods

        • Data delivery methods

          • Delivering the data itself

          • Delivering a pointer to the data

        • Replacement algorithms

        • Periodic write-back to disk

      • Characteristics

        • Several approaches can improve disk I/O speed, each with its own characteristics:

          • Use high-performance disk drives: choose drives with faster speeds and shorter response times, such as solid-state drives (SSDs), which offer faster read/write speeds and lower access latency than traditional mechanical hard disks (HDDs)

          • Employ RAID: RAID (Redundant Array of Independent Disks) combines multiple disk drives, using data striping and fault-tolerance techniques to improve read/write performance and data redundancy. Different RAID levels offer different trade-offs: RAID 0 gives higher performance, while RAID 1 gives better data redundancy

          • Increase disk caching: a high-performance disk cache reduces the number of direct accesses to the physical disk. Common forms include hardware caches (such as the cache in a disk array controller) and operating-system-level caches (such as the kernel's file system cache)

          • Optimize the file system: choosing an appropriate file system and tuning its configuration can improve disk I/O performance. For example, a smaller cluster size can improve read/write performance for small files, and a journaling file system can reduce file system write latency

          • Use distributed storage: by distributing and replicating data across multiple physical and logical nodes, distributed storage systems provide higher disk I/O performance and fault tolerance through parallel processing and load balancing, and can scale capacity and performance by adding nodes

          • The appropriate method depends on the specific application scenario and requirements; several of these optimizations are often combined

Origin blog.csdn.net/as12138/article/details/132006393