6/16/2023 Operating systems: bits of knowledge

With the exam approaching, I am summarizing the key knowledge points for my own review, and I might as well post them to CSDN too.
(The outline follows the textbook; the answers here were found online. I also read the textbook, so you can treat this as a table of contents.)

Point distribution

Single choice: 10 × 1 point
Fill in the blank: 10 × 2 points
Short answer: 4 × 5 points
Application: 5 × 10 points

Chapter 2

1. Conditions for process state transitions

A process passes through different states during its lifetime. The states are as follows:

  1. Created state (Create): The process has been created but has not yet been admitted for scheduling.
  2. Ready state (Ready): The process is in the ready queue and is waiting to be allocated the CPU.
  3. Running state (Running): The process holds the CPU and is executing its instructions.
  4. Blocked state (Blocked): The process has suspended execution because it is waiting for some event, such as the completion of an I/O operation or an external signal.
  5. Terminated state (Terminated): The process has finished executing or has been forcibly terminated.

The transition conditions between process states are as follows:

  1. Transition from creation state to ready state: The process is created and accepted by the system scheduler, waiting for the system to allocate CPU resources for execution.
  2. Transition from ready state to running state: The system scheduler allocates CPU resources to the process and enters the running state.
  3. Transition from running state to ready state: The process loses the CPU because its time slice runs out or it is preempted by a higher-priority process, and it returns to the ready state.
  4. Transition from running state to blocking state: The process is blocked because it is waiting for certain events to occur, such as I/O operations.
  5. Transition from blocking state to ready state: The event that the process is waiting for occurs and enters the ready state.
  6. Transition from running state to termination state: The process completes execution or is forcibly terminated and enters the termination state.
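
To make the transitions concrete, here is a minimal Python sketch (not any particular OS's implementation) that encodes the five states and the legal transitions between them in a lookup table; the `Process` class and its method names are purely illustrative.

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Each state maps to the set of states it may legally move to.
LEGAL_TRANSITIONS = {
    State.CREATED:    {State.READY},
    State.READY:      {State.RUNNING},
    State.RUNNING:    {State.READY, State.BLOCKED, State.TERMINATED},
    State.BLOCKED:    {State.READY},
    State.TERMINATED: set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = State.CREATED

    def move_to(self, new_state):
        if new_state not in LEGAL_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
p.move_to(State.READY)    # admitted by the scheduler
p.move_to(State.RUNNING)  # dispatched: given the CPU
p.move_to(State.BLOCKED)  # waits for I/O
p.move_to(State.READY)    # the awaited event occurs
```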

2. The concept, function and composition of the process control block

The Process Control Block (PCB) is a data structure used by the operating system to manage processes and is one of the cores of process management. Each process has a corresponding PCB, which records the process's state, resource information, scheduling information, and so on.


The role of the process control block:

  1. Record the status of the process: The current status of the process is stored in the process control block, including four statuses: ready, running, blocked and terminated.
  2. Store the context information of the process: When a process is preempted or blocked, the running status of the process and the value in the CPU register need to be saved. This information is stored in the process control block so that its running state can be restored when the process is called again.
  3. Store resource information of the process: The process control block also records the system resource information occupied by the process, such as open files, occupied I/O devices, etc.
  4. Stores the scheduling information of the process: The process control block also contains information such as the priority and scheduling algorithm of the process, which is used for the system scheduler to schedule the process.
The composition of the process control block:
The process control block is a data structure; the exact fields vary between operating systems and implementations, but it usually includes the following information:

  1. Process identifier (PID): a unique identifier for each process, used by the system to identify and manage it.
  2. Process state: the current state of the process (ready, running, blocked, or terminated).
  3. Program counter (PC): the address of the next instruction to be executed, used to resume the process.
  4. CPU register values: the contents of the CPU registers when the process was last running, saved so that execution can be resumed when the process is dispatched again.
  5. Priority and scheduling information: the process's priority, scheduling algorithm, and other data used by the scheduler.
  6. Resource information: the system resources held by the process, such as open files and occupied I/O devices.
  7. Memory management information: the memory occupied by the process, including its address space and page table.
  8. Accounting information: statistics such as running time and CPU utilization.
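
As a rough illustration, the PCB can be pictured as a plain record holding the fields listed above. The sketch below is a hedged Python approximation; the field names are invented for illustration, and real kernels (for example Linux's task_struct) organize this information differently.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                          # unique process identifier
    state: str = "ready"                              # ready / running / blocked / terminated
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU register values
    priority: int = 0                                 # used by the scheduler
    open_files: list = field(default_factory=list)    # resource information
    page_table: dict = field(default_factory=dict)    # memory management information
    cpu_time_used: float = 0.0                        # accounting / statistics

pcb = PCB(pid=42, priority=3)
print(pcb)
```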

3. Process context concept and composition

Process context refers to the execution-environment state of a process that the operating system must save when it stops running that process. When the operating system switches processes, it saves the context of the current process and restores the context of the process to be run next, so that the latter can continue from where it left off.

Process context usually consists of the following aspects:

  1. CPU registers: including general registers, program counter, stack pointer, etc. These registers store the current running status of the process and are the most important context information of the process.
  2. Memory management information: including the virtual address space of the process, internal data structures, etc. This information needs to be saved and restored when switching processes.
  3. Running status: includes the running status of the process, such as whether the process is blocked, process priority, etc.
  4. File descriptor: The file descriptor opened by the process needs to be saved so that file reading and writing operations can continue after the process switches back.
  5. Signal handler: includes information such as the signal handler of the current process, pending signal queues, etc.
  6. I/O status: includes the I/O status of the current process, waiting I/O operations and other information.

The above are the main components of the process context; the exact contents vary between operating systems and implementations. Saving and restoring the context is one of the most important steps in process switching, and the integrity and correctness of the saved data must be ensured so that the process can continue to run correctly.

4. Process creation process and steps

The process creation process can be roughly divided into the following steps:

  1. Resource allocation: Before a process is created, necessary resources need to be allocated to it, including memory space, file descriptors, process numbers, etc.
  2. Initializing the context: After allocating resources, you need to initialize the context environment for the process, including setting the initial state of the process, initializing the stack of the process, setting the environment variables of the process, etc.
  3. Loading the program: The program the process will run is loaded into memory, and the program's entry address is recorded so that execution can begin there.
  4. Running a program: When running a program, the operating system will hand over CPU control to the process, and the process will begin executing the program it loaded until the program is completed or interrupted.
  5. Exit processing: When the process completes execution or is forcibly terminated, it needs to perform exit processing, release the resources occupied by the process, notify the parent process of the process's exit status, etc.

The specific process creation process may vary depending on different operating systems or implementations, but the above steps are the basic process for creating a process.
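
For a concrete feel of the create / load / run / exit steps, here is a small example using the POSIX fork()/exec()/wait() model via Python's os module (Unix-like systems only); the command being launched is just an example.

```python
import os
import sys

pid = os.fork()                      # steps 1-2: allocate a new process and copy the parent's context
if pid == 0:                         # child process
    os.execvp("echo", ["echo", "hello from the child"])  # step 3: load a new program into the process
    sys.exit(1)                      # only reached if exec fails
else:                                # parent process
    _, status = os.waitpid(pid, 0)   # step 5: collect the child's exit status
    print("child exited with", os.WEXITSTATUS(status))
```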

5. Job scheduling and low-level scheduling algorithms: basic ideas (7 algorithms, all must be mastered)

Only a few are named here.

Job scheduling refers to the operating-system module responsible for managing the job queue and deciding which jobs to run. Its main task is to allocate CPU resources according to a scheduling policy and conditions such as priority and time slices, so that jobs are scheduled reasonably and system resources are used efficiently.

Common job scheduling algorithms include the following:

  1. First come, first served (FCFS) scheduling algorithm: Jobs are scheduled in the order they arrive, that is, jobs that arrive first are executed first, and jobs that arrive later are queued to wait.
  2. Shortest Job First (SJF) scheduling algorithm: Schedule jobs according to the length of time they need to be executed, that is, jobs with a short execution time are executed first, and jobs with a long execution time are queued to wait.
  3. Priority scheduling algorithm: Schedule according to the priority of the job, that is, jobs with high priority are executed first, and jobs with low priority are queued to wait.
  4. Round-robin (time-slice) scheduling algorithm: CPU time is allocated in turn in fixed time slices. Each job receives a time slice; when the slice is used up, the job is put back into the queue to wait for the next scheduling.
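
The sketch below simulates non-preemptive FCFS and SJF over a made-up job list, assuming each job is described by (name, arrival time, run time); it only illustrates the basic ideas above, not a real scheduler.

```python
def schedule(jobs, pick):
    time, finished = 0, []
    waiting = sorted(jobs, key=lambda j: j[1])          # order by arrival time
    while waiting:
        ready = [j for j in waiting if j[1] <= time] or [waiting[0]]
        job = pick(ready)                               # policy decides which job runs next
        waiting.remove(job)
        time = max(time, job[1]) + job[2]               # idle until arrival, then run to completion
        finished.append((job[0], time))                 # (job name, completion time)
    return finished

jobs = [("A", 0, 8), ("B", 1, 4), ("C", 2, 2)]          # (name, arrival, run time)
print("FCFS:", schedule(jobs, lambda r: r[0]))                          # earliest arrival first
print("SJF :", schedule(jobs, lambda r: min(r, key=lambda j: j[2])))    # shortest run time first
```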

The low-level scheduling algorithm refers to the scheduling of processes and threads in the operating system, which allocates CPU time slices to each process or thread, and determines which process or thread obtains CPU resources according to a certain scheduling strategy. Common low-level scheduling algorithms include the following:

  1. First come first served (FCFS) scheduling algorithm: Scheduling is performed in the order in which processes or threads arrive, that is, the process or thread that arrives first is executed first, and the process or thread that arrives later is queued to wait.
  2. Shortest Process First (SPF) scheduling algorithm: Processes or threads are scheduled according to how long they need to execute; those with short execution times run first, and those with long execution times wait in the queue.
  3. Priority scheduling algorithm: Scheduling is performed according to the priority of the process or thread, that is, processes or threads with high priority are executed first, and processes or threads with low priority are queued to wait.
  4. Preemptive scheduling algorithm: During the execution of a process or thread, a higher priority process or thread is allowed to preempt execution rights and the execution of the current process or thread is suspended to ensure that a higher priority process or thread can obtain CPU resources in a timely manner.
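
Round-robin, the classic time-slice algorithm for low-level scheduling, can be simulated in a few lines; the process names, burst times, and quantum below are invented for illustration.

```python
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())          # ready queue of (name, remaining CPU time)
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        time += slice_used                 # the process runs for one time slice
        if remaining > slice_used:
            queue.append((name, remaining - slice_used))  # back to the end of the ready queue
        else:
            finish[name] = time            # the process terminates
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```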

6. Thread concept and processing process

A thread is the smallest execution unit scheduled by the operating system and can be understood as a lightweight process. A process can contain multiple threads, each thread has its own stack and registers, but they share resources such as the process's code segment, data segment, and open files.

The thread processing process is as follows:

  1. Thread creation: Thread creation can be achieved through the thread library provided by the operating system or the thread library built into the programming language. Generally speaking, a thread needs to specify a function as the thread entry and pass parameters to the function. When creating a thread, resources such as thread stacks and registers need to be allocated.
  2. Thread scheduling: Thread scheduling means that the operating system allocates CPU time slices to threads and decides when each thread runs. It can be based on different scheduling algorithms, such as round-robin, priority, or preemptive scheduling.
  3. Thread execution: After the thread is scheduled, it begins to execute its specified function. During the execution, it can access shared resources, such as global variables and files, etc. The execution of a thread can be interrupted by the operating system, such as an IO operation or the time slice running out.
  4. Thread synchronization: Synchronization between threads refers to coordinating the execution sequence and access to shared resources between threads to prevent problems such as race conditions and resource contention. Commonly used synchronization methods include locks, semaphores, condition variables, barriers, etc.
  5. Destruction of threads: Threads can be destroyed when they are completed or no longer needed. The destruction of a thread requires the release of the resources it occupies, such as memory, stack and registers. The destruction of a thread can be initiated by the thread itself or other threads.

In general, the thread processing process includes thread creation, scheduling, execution, synchronization and destruction. The use of threads can improve the concurrency and responsiveness of the program, but you also need to pay attention to issues such as synchronization and race conditions between threads.
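
As a small, hedged illustration of these steps (creation, scheduling by the OS, execution, synchronization, and joining at the end), here is an example using Python's threading module; the worker function and the shared counter are made up.

```python
import threading

counter = 0
lock = threading.Lock()          # protects the shared counter

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # synchronization: only one thread updates at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()                    # creation; the OS schedules and runs the thread
for t in threads:
    t.join()                     # wait for each thread to finish
print(counter)                   # 400000
```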

7. Why is there the concept of threads?

The concept of threads exists to improve the concurrency and responsiveness of programs. In a traditional single-threaded program, operations execute one after another; if one operation must wait for I/O or some other time-consuming step to finish before the program can continue, the total execution time becomes long and the user experience suffers. With multithreading, a program can perform several tasks at the same time, making full use of multi-core CPUs, I/O devices, and other resources, and thus improving concurrency and responsiveness.
For example, a web server needs to handle multiple requests at the same time. If a single-threaded method is used, each request needs to wait for the previous request to be processed before it can start execution. This will cause a slow response problem. Using multi-threading technology, the server can handle multiple requests at the same time and improve response speed.
In addition, multi-threading can also improve the reliability and stability of the program. If an exception or crash occurs in a thread, other threads can still continue to execute without affecting the normal operation of the entire program.

Therefore, with the concept of threads, programs can use computer resources more efficiently, improve concurrency and responsiveness, and improve reliability and stability. Moreover, the use of threads has become a standard feature of modern computer programming.

Chapter 3: Deadlock, Communication

1. The concepts of synchronization and mutual exclusion

Synchronization and mutual exclusion are two key concepts in multi-threaded programming.

Synchronization refers to the coordination and cooperation between multiple threads to ensure that they perform the right operations at the right time. For example, in a multi-threaded program, thread A needs to wait for thread B to complete an operation before it can continue execution. At this time, synchronization operation is required to ensure that thread A does not perform related operations before thread B executes.

Mutual exclusion refers to the competition between multiple threads to ensure that they do not access and modify the same shared resource at the same time. For example, in a multi-threaded program, if multiple threads access and modify the same variable at the same time, it may cause data corruption and program crash. At this time, mutual exclusion operations are required to ensure that each thread can safely access and modify shared resources.

Mutual exclusion and synchronization are closely related, because mutual exclusion is often needed in synchronization operations to ensure that threads do not access shared resources at the same time to achieve the correct operation sequence and results.

In multi-thread programming, commonly used synchronization and mutual exclusion mechanisms include lock, semaphore, condition variable, etc. These mechanisms can ensure synchronization and mutual exclusion operations between multiple threads, thereby ensuring the correctness and reliability of the program.
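
To separate the two ideas in code: the hedged sketch below uses a Lock for mutual exclusion on shared data and an Event so that thread A only runs its step after thread B has finished (synchronization). The thread names and the shared list are illustrative.

```python
import threading

shared, lock = [], threading.Lock()
b_done = threading.Event()

def thread_b():
    with lock:                    # mutual exclusion while touching shared data
        shared.append("prepared by B")
    b_done.set()                  # signal: B's step is complete

def thread_a():
    b_done.wait()                 # synchronization: A must run after B
    with lock:
        shared.append("processed by A")

ta = threading.Thread(target=thread_a)
tb = threading.Thread(target=thread_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(shared)                     # ['prepared by B', 'processed by A']
```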

2. The concept of critical resources

In multi-threaded programming, a critical resource refers to a piece of code or data shared by multiple threads, and these threads all need to access or modify this piece of code or data at the same time. Because multiple threads access or modify the same critical resource at the same time, competition will occur, which will cause a series of problems, such as data inconsistency, program crashes, etc. Therefore, in multi-threaded programming, special synchronization and mutual exclusion measures need to be taken for shared critical resources to ensure that threads can safely access and modify these resources.

Critical resources can be code segments that access shared data, or shared data structures, shared devices, shared files, etc. For different critical resources, different synchronization and mutual exclusion measures need to be adopted to ensure safe access between threads.

Common critical resources include shared memory, global variables, files, and so on. When accessing these resources, synchronization and mutual exclusion mechanisms such as locks, semaphores, and condition variables are needed to prevent races and to ensure that multiple threads access and modify the shared resources correctly, thereby guaranteeing the program's correctness and reliability.

3. Semaphore and P/V operation question types

(Omitted; see the exercises at the end.)

4. The problem of mutually exclusive access to critical sections

(Omitted; see the exercises at the end.)

5. Mailbox communication method

The mailbox communication method implements inter-process communication through an intermediate mailbox. Typically each process owns a private mailbox that only it receives from, and there can also be public mailboxes accessible to multiple processes. A process can send messages to a public mailbox or to another process's private mailbox, and it can likewise receive messages from a public mailbox or from its own private mailbox.

The advantage of mailbox communication is that it is simple to implement, easy to understand and maintain, and it can realize point-to-point or broadcast communication. It also has good mutual exclusion, because each process has an independent mailbox, and there will not be multiple processes accessing the same resource at the same time.

However, mailbox communication also has some disadvantages. Most obviously, it is relatively inefficient since each process requires message passing through the mailbox and each operation requires a system call. In addition, due to the limited capacity of the mailbox, once the capacity limit is reached, messages may be lost.
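
The toy simulation below mimics mailbox communication inside a single Python program: each "process" owns a private bounded queue as its mailbox, and there is one shared public mailbox. It is only meant to show the send/receive pattern, not a real OS mailbox facility.

```python
import queue

mailboxes = {"public": queue.Queue(maxsize=8),   # bounded, like a real mailbox
             "P1": queue.Queue(maxsize=8),
             "P2": queue.Queue(maxsize=8)}

def send(box, msg):
    mailboxes[box].put(msg)        # blocks if the mailbox is full

def receive(box):
    return mailboxes[box].get()    # blocks until a message arrives

send("P2", ("P1", "hello P2"))     # point-to-point: P1 sends to P2's private mailbox
send("public", ("P1", "hello everyone"))
print(receive("P2"))               # ('P1', 'hello P2')
print(receive("public"))           # ('P1', 'hello everyone')
```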

6. Banker’s Algorithm

Banker's algorithm is a resource allocation algorithm used to avoid deadlocks. It determines whether to allow a process to continue to request resources by judging whether the current system resource allocation will cause a deadlock.

The Banker's Algorithm is based on the following assumptions:

  1. Each process must declare in advance the maximum amount of each resource it may need, and it releases all of its resources when it finishes.
  2. The number of resources in the system is limited, and each resource has a maximum amount.
  3. The system knows the number of resources required by each process and the maximum number of resources.
  4. A process can request resources or release already occupied resources.

The basic idea of the Banker's algorithm is to determine whether an allocation could lead to deadlock by simulating it first. Concretely, before actually granting a request, the algorithm tentatively performs the allocation and checks whether the system is still in a safe state; if so, the allocation is made, otherwise the process must wait. The algorithm relies on the concept of a safe sequence: an ordering of the processes such that each process, in turn, can obtain the resources it still needs and run to completion.

The Banker's Algorithm allows the system to avoid deadlocks when allocating resources, but it is only suitable for relatively simple systems. For large and complex systems, its implementation is more difficult and less efficient.
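
The core of the algorithm is the safety check. Below is a compact Python sketch of it, run on textbook-style example matrices (Available, Allocation, and Need = Max - Allocation); the data is invented for illustration.

```python
def is_safe(available, allocation, need):
    work = available[:]                      # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # process i can run to completion and return its resources
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                sequence.append(i)
                progressed = True
    return all(finished), sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # (True, [1, 3, 4, 0, 2])
```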

Chapter 4: Storage Management

1. Storage management

2. Paged storage management and address translation (textbook Figure 3-14, Figure 6)

(Omitted.)

3. Using the fast table (TLB) when retrieving data in memory

(Omitted.)

4. Page replacement algorithm

The page replacement algorithm is an important technique in operating-system memory management. Its purpose is, when memory is insufficient, to evict certain pages from their page frames to make room for pages that must be brought in. Common page replacement algorithms include the following:

  1. First-in, first-out algorithm (FIFO): Select the page that enters the memory earliest for replacement. This algorithm is simple and easy to understand, but it cannot reflect the frequency of page visits.
  2. Least Recently Used (LRU) algorithm: selects the least recently used page for replacement. This algorithm records the most recent usage time of each page, and selects the page that has not been accessed for the longest time for replacement each time. This algorithm takes into account the frequency of page use, but needs to record the access time of each page, which is more complex to implement.
  3. Least Frequently Used (LFU) algorithm: Selects the page that has been used least often for replacement. The algorithm records how many times each page has been accessed and evicts the page with the smallest count. It takes usage frequency into account, but it must maintain a counter per page, and pages that were heavily used in the past can keep being "protected" even when they are no longer needed.
  4. Clock algorithm: Pages are kept in a circular list, each with a reference bit that is set to 1 whenever the page is accessed. On a fault, a pointer sweeps around the list: pages whose bit is 1 have it cleared and are skipped, and the first page found with the bit 0 is replaced. It is simple to implement, but it reflects how recently pages were used only coarsely.
  5. Optimal replacement algorithm (OPT): Select pages that will not be accessed for the longest time in the future for replacement. This algorithm is the theoretically optimal replacement algorithm, but in actual implementation it is difficult to determine the future access status of each page, so it is difficult to implement.

Different page replacement algorithms are suitable for different scenarios, and the appropriate algorithm needs to be selected based on different application requirements and system characteristics.
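
The sketch below counts page faults for FIFO and LRU on an invented reference string with 3 page frames, just to make the difference between the two policies tangible.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    memory, order, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(order.popleft())    # evict the page loaded earliest
            memory.add(page)
            order.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0              # ordering tracks recency of use
    for page in refs:
        if page in memory:
            memory.move_to_end(page)               # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)         # evict the least recently used page
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print("FIFO faults:", fifo_faults(refs, 3))        # 10
print("LRU  faults:", lru_faults(refs, 3))         # 9
```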

5. Number of page-fault interrupts and page-fault rate

(Omitted.)

6. Bitmap memory management method

Bitmap memory management is a simple and effective memory management method. The basic idea is to use a binary bitmap to represent the usage of the entire memory space: each bit corresponds to one fixed-size block of memory (the allocation unit), where a 0 means the block is free and a 1 means it is occupied.

During specific implementation, the operating system divides the entire memory space into several areas according to fixed-size blocks, and records the usage of each area in a bitmap. When a program needs to allocate memory, the operating system scans the bitmap, finds a contiguous free block, and marks it as occupied. When a program frees memory, the operating system marks the block as unoccupied.

The advantages of bitmap memory management include simple implementation and low overhead. It is suitable for scenarios with small memory space and uncomplicated memory management requirements. However, since it needs to record the usage of each block, for a large-scale memory space, the size of the bitmap will be large, occupying more memory space, and the time complexity of finding free blocks is high. In addition, in a multi-process or multi-thread environment, problems caused by concurrent operations need to be considered. Therefore, bitmap managed memory is generally used in scenarios such as embedded systems and small servers.
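
A toy bitmap allocator might look like the following sketch: one bit per block, a linear scan for a run of free bits on allocation, and clearing the bits on free. The block count and request sizes are made-up values.

```python
class BitmapAllocator:
    def __init__(self, num_blocks):
        self.bits = [0] * num_blocks               # 0 = free, 1 = occupied

    def allocate(self, n):
        """Find n contiguous free blocks; return the first block index, or None."""
        run = 0
        for i, bit in enumerate(self.bits):
            run = run + 1 if bit == 0 else 0
            if run == n:
                start = i - n + 1
                self.bits[start:i + 1] = [1] * n   # mark the run as occupied
                return start
        return None

    def free(self, start, n):
        self.bits[start:start + n] = [0] * n       # mark the blocks free again

mem = BitmapAllocator(16)
a = mem.allocate(4)      # -> 0
b = mem.allocate(3)      # -> 4
mem.free(a, 4)
print(mem.allocate(2))   # -> 0 (reuses the freed hole)
```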

Chapter 5: Device Management

1. IO control method

In computer systems, I/O control is an important function of the operating system, responsible for managing input/output devices and transferring data from input/output devices to computer memory or from memory to output devices. I/O control methods include the following:

  1. Program control method: The CPU actively sends control instructions to the I/O device and waits for the instructions to be completed. This method is suitable for situations where I/O operations are short and need to be performed frequently, such as console input and output.
  2. Interrupt control method: The I/O device sends an interrupt signal to the CPU to notify it that an I/O operation needs attention. The CPU suspends the current task, switches to the interrupt service routine, and returns to the original program after the interrupt has been handled. This avoids busy-waiting and is suitable when an I/O operation takes longer and the CPU should do other work in the meantime.
  3. DMA control method: Use a specialized DMA controller to complete data transmission. The CPU only needs to send I/O instructions and DMA instructions, and then transfers DMA control to the DMA controller. No CPU involvement is required during I/O data transmission. This method is suitable for large batch data transfer, such as disk operations.
  4. Channel control method: Use special I/O channel devices to complete data transmission. The CPU only needs to issue control instructions to the channel device, and the channel device completes the data transmission by itself. This method is suitable for large-scale data transmission and high-speed I/O devices.

2. The role of the device controller

A device controller is a specialized hardware device responsible for controlling the operation of a computer's input/output devices. Its main functions are as follows:

  1. Device control: The device controller is the interface between the input/output device and the computer system. It can receive instructions sent by the computer and control the operation of the device. When a program needs to use an input/output device, it sends control instructions to the device through the device controller. The device controller translates the instructions into a form acceptable to the device and then sends them to the device.
  2. Data transfer: Device controllers can control the transfer of data between computers and input/output devices. When an input/output device needs to transfer data to computer memory, the device controller reads the data from the input/output device and stores it in memory. When the computer needs to transfer data to an input/output device, the device controller reads the data from memory and transfers it to the input/output device.
  3. Device status monitoring: The device controller can monitor the operating status of the device and send interrupt signals to the computer system so that the computer can respond to changes in the device in a timely manner and handle them accordingly.
  4. Device error handling: The device controller can detect device errors and take appropriate measures. When an error occurs in the device, the device controller will send an interrupt signal to the computer system so that the system can handle the error in time.

In short, the device controller is a very important component of the computer system. It can coordinate the interaction between the computer and the input/output device, so that the computer can complete various tasks more efficiently.

3. How SPOOLing software works

SPOOLing (Simultaneous Peripheral Operations On-line) software is a technology that caches input/output (I/O) tasks to disk. Here's how it works:

  1. When an I/O task arrives at the computer, it is placed in a pending queue. This task may come from a user or a program.
  2. SPOOLing software periodically checks the pending queue and, if there are tasks that need to be processed, caches them on disk.
  3. Once the task is cached on disk, the computer can continue with other tasks without waiting for this I/O task to complete. This can improve computer system utilization.
  4. Once an I/O task is completed, SPOOLing software reads the cached data from disk and transfers it to the target device.
  5. When all I/O tasks are completed, the SPOOLing software will delete the cached data from the disk to free up disk space.

By using SPOOLing software, computer systems can increase the throughput of the system because it allows I/O tasks to proceed in the background without blocking the execution of other tasks. In addition, SPOOLing software can also improve the reliability of the system because it can cache I/O tasks to the disk so that even if the system crashes, the data will not be lost.

4. Disk movement scheduling algorithm (not all listed here)

The disk movement scheduling algorithm is a technology used in the operating system to manage disk I/O operations. Its main purpose is to optimize disk read and write operations and improve disk usage efficiency and system performance. Common disk movement scheduling algorithms include the following:

  1. First come, first served algorithm (FCFS): Disk requests are scheduled according to their arrival time. The advantage is that it is simple and easy to implement, but the disadvantage is that the disk head may move back and forth on the disk, resulting in a longer disk seek time.
  2. Shortest seek time first algorithm (SSTF): Sort according to the distance between the disk head and the requested track, and select the closest track for scheduling each time to reduce the disk seek time. The advantage is that it can reduce disk seek time, but the disadvantage is that some tracks may not be served for a long time, causing starvation problems.
  3. Scan algorithm (SCAN): Also known as the elevator algorithm, the disk head scans the track in one direction until the edge, and then immediately turns around and scans in the opposite direction. The advantage is that it can avoid tracks that are not serviced for a long time, but the disadvantage is that the disk head may move back and forth on the disk.
  4. Circular scan algorithm (C-SCAN): Similar to the scan algorithm, but the head services requests in one direction only: it moves from one end to the other, then returns immediately to the starting end without servicing anything on the way back. The advantage is that it avoids tracks going unserviced for a long time and gives more uniform waiting times than SCAN; the disadvantage is that the return sweep is wasted head movement.
  5. Optimal algorithm (OPT): Assuming that all disk request sequences can be predicted, the optimal disk movement scheduling order is selected to minimize the disk seek time. This algorithm can achieve the optimal disk scheduling effect, but in fact it cannot predict all disk request sequences, so it cannot be implemented.
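
As an illustration of the seek orderings, here is a small sketch of SSTF and SCAN on an invented request queue with the head starting at track 53; it returns only the order in which the requests would be serviced.

```python
def sstf(requests, head):
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # closest track first
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan(requests, head, direction="up"):
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    # elevator behaviour: sweep toward the high end first, then reverse
    return up + down if direction == "up" else down + up

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print("SSTF:", sstf(reqs, head=53))   # [65, 67, 37, 14, 98, 122, 124, 183]
print("SCAN:", scan(reqs, head=53))   # [65, 67, 98, 122, 124, 183, 37, 14]
```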

Chapter 6: File Management

1. The concept, composition and function of file control block

File Control Block (FCB) is a data structure used in the operating system to describe and manage files. Each file corresponds to an FCB, and the FCB stores file-related information, including file name, file type, file size, file creation time, latest modification time, file attributes, etc. FCB can also store the physical address of the file, usage of data blocks, access permissions and other information.

FCB usually consists of several fields (or attributes). Different operating systems and file systems vary, but usually include the following basic attributes:

  1. Filename: The name of a file, used to uniquely identify a file in the file system.
  2. File type: The type of file, such as text files, image files, audio files, etc.
  3. File size: The size of the file in bytes.
  4. Creation time: The time when the file was created.
  5. Last modified time: The time when the file was last modified.
  6. File attributes: Some attributes of the file, such as read-only, hidden, etc.
  7. Physical address: The physical address of the file on the disk, including disk block number and offset.
  8. Data block usage: Record which disk blocks are used by the file.
  9. Access permissions: File access permissions, including read, write, execute, etc.

The main function of FCB is to manage file creation, reading, writing and deletion operations. By operating FCB, you can access and manage files. File systems usually manage files by maintaining FCBs. When a file needs to be operated, the file system will search for the corresponding FCB based on the file name, and then perform the corresponding operation. Therefore, FCB is a very important part of the file system.

2. FAT table

FAT (File Allocation Table) is a table used to manage file storage, usually used in FAT file systems. The FAT file system is an older file system that was widely used in early Windows operating systems and removable storage devices.

The FAT records, for every file and folder in the file system, which disk blocks (clusters) it occupies and in what order. Each entry in the FAT corresponds to one cluster and stores either the number of the next cluster in the same file's chain, a special value marking the end of the chain, or a value indicating that the cluster is free.

The information contained in each entry in the FAT table is very simple, which is one of the reasons why the FAT file system is widely used. However, due to its simplicity and usage restrictions, the efficiency and security of the FAT file system are relatively low, and it has been gradually replaced by more advanced and secure file systems, such as NTFS and exFAT.

In short, the FAT table is a table used to manage file storage and is commonly used in FAT file systems. The FAT table records the indexes and file block allocation information of all files and folders in the file system. The FAT file system has been replaced by more advanced and secure file systems due to its simplicity and usage limitations, relatively low efficiency and security.
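
The chained structure can be illustrated with a toy FAT as a Python list, where each entry holds the next cluster of the same file; the marker values and table contents below are simplified inventions, not the real on-disk FAT encoding.

```python
END, FREE = -1, 0

fat = [FREE] * 16
# a file starting at cluster 2 and occupying clusters 2 -> 5 -> 7
fat[2], fat[5], fat[7] = 5, 7, END
# another file occupying the single cluster 9
fat[9] = END

def clusters_of(start):
    """Follow the FAT chain from a file's starting cluster."""
    chain, cur = [], start
    while cur != END:
        chain.append(cur)
        cur = fat[cur]
    return chain

print(clusters_of(2))   # [2, 5, 7]
print(clusters_of(9))   # [9]
```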

Other

1. The role of authorization mechanism

Authorization mechanism refers to a mechanism implemented by the operating system or application software to ensure system security. It restricts access and operations to system resources through identity authentication and permission control of users or programs.

The functions of the authorization mechanism mainly include the following aspects:

  1. Improve system security: The authorization mechanism can prevent unauthorized users or programs from accessing and operating system resources by performing identity authentication and permission control on users or programs, thereby improving system security.
  2. Protect user privacy: The authorization mechanism can limit users’ access to and operations on sensitive information and protect user privacy.
  3. Prevent system resources from being abused: The authorization mechanism can limit users' use of system resources and prevent system resources from being maliciously or improperly used, thereby protecting the stability and reliability of the system.
  4. Manage system resources: The authorization mechanism can help administrators manage and monitor system resources to better understand system usage and resource allocation.

In short, the authorization mechanism is an important means of ensuring system security and proper use of resources: it prevents unauthorized access and operations, protects user privacy, prevents system resources from being abused, and provides support for managing and monitoring system resources.

2. The concept of authorized instructions

Authorization instructions are a class of commands provided by the operating system or by applications. By executing these commands, a user or program requests access to or operations on system resources; after the system authenticates the requester and verifies its permissions, the corresponding authorization is granted and the operation can be completed.

The functions of authorization instructions usually include the following aspects:

  1. Identity authentication: Authorization instructions can be used to verify the identity of a user or program and determine whether it has permission to access or operating system resources.
  2. Permission control: Authorization instructions can be used to set or adjust the permissions of users or programs and restrict their access to or operations on system resources.
  3. Authorization management: Authorization commands can be used to manage and query the authorization status of system resources so that administrators can understand the usage and authorization allocation of system resources.
  4. Security audit: Authorization instructions can be used to record the access and operation of system resources so that administrators can audit and monitor system security.

Common authorization instructions include: chmod, chown, su, sudo, etc. Among them, chmod is used to set the permissions of a file or directory; chown is used to change the owner and group of a file or directory; su is used to switch user identities; sudo is used to authorize a user or program to execute a specific command.
In short, authorization instructions are an important type of commands provided by the operating system or applications. By executing these commands, users or programs can request access or operations to system resources, and after the system authenticates their identity and permissions, they obtain the corresponding authorization to complete the operation.

3. Trusted computer concept

A trusted computer refers to a computer system that has undergone security verification and authorization and can guarantee the integrity, confidentiality and availability of its hardware, software and operating system. It ensures the security and confidentiality of computer systems and prevents malicious attacks and unauthorized access by using special hardware and software mechanisms.

Trusted computers usually include the following aspects:

  1. Trusted Platform Module (TPM): The TPM is a specialized security chip used to store and protect a computer system’s security keys and other sensitive information to ensure system integrity and authentication. TPM can also provide secure boot and authorization functions to ensure system security.
  2. Secure Boot: The secure boot mechanism can detect and verify the integrity and authentication of software and drivers during system startup to prevent malware and drivers from running. Secure boot usually needs to be used in conjunction with TPM to ensure system security.
  3. Virtualization technology: Virtualization technology can divide a physical computer into multiple virtual computers, and each virtual computer can run different operating systems and applications. Virtualization technology can improve resource utilization while also isolating different applications to prevent malicious attacks and data leaks.
  4. Secure Storage: Secure storage protects sensitive data and keys in computer systems from unauthorized access and tampering. Secure storage usually requires special hardware and software to ensure system security.

In short, a trusted computer refers to a computer system that has undergone security verification and authorization. It uses special hardware and software mechanisms to ensure the security and confidentiality of the computer system and prevent malicious attacks and unauthorized access. Trusted computers usually include trusted platform modules, secure boot, virtualization technology and secure storage.

4. Sandbox technology

Sandbox technology is a security protection mechanism that creates an isolated environment in the operating system and restricts programs or applications to run in this environment to prevent them from affecting the operating system and other applications.

The principle of sandbox technology is to isolate an application or process in a virtual container. The file system, network, process and resources in this container are independent of the host machine. In this way, even if an application or process is maliciously attacked or infected with a virus, it will not cause harm to the host or other applications.

Sandbox technology is commonly used in browsers, email clients, virtual machines, games and other applications to enhance their security and stability. For example, the browser can run in a sandbox environment, restricting its access to system resources and files, and preventing malicious websites or plug-ins from attacking the user's computer. Virtual machines can also use sandbox technology to isolate different virtual machines to prevent malicious virtual machines from affecting other virtual machines and host machines.

In short, sandbox technology is a security protection mechanism that can create an isolated environment in the operating system, restrict programs or applications from running in this environment, and prevent its impact on the operating system and other applications. Sandbox technology is commonly used in browsers, email clients, virtual machines, games and other applications to enhance their security and stability.

5. Some exercises (photos)

(Photos of exercises 1-5 are not reproduced here.)

Origin: blog.csdn.net/weixin_51395608/article/details/131250468