Some questions about the operating system

1. What is the function of OS?

1. The operating system acts as an interface between the user and the hardware system.
2. The operating system acts as a resource manager.
3. The operating system realizes the abstraction of resources.

2. What is the concept of uniprogrammed (single-stream) batch processing? What problems does it solve, how does it solve them, and what problems remain?

1. Uniprogrammed batch processing means the computer system automatically processes a batch of jobs, with only one job resident in memory at a time.
2. It eases the human-machine speed mismatch and the speed mismatch between the CPU and I/O devices.
3. It does so by raising the utilization of system resources and the system throughput.
4. Remaining problem: system resources still cannot be fully utilized (the CPU idles while the job waits for I/O).

3. What is the concept of multiprogrammed batch processing, what problems does it solve, and how does it solve them?

1. Multiple independent programs are kept in memory at the same time and share the CPU and other system resources according to certain algorithms.
2. This improves resource utilization and system throughput.
3. Multiple programs run alternately, keeping the CPU and the other resources busy.

4. What problems should be solved to realize multiprogramming?

1. Processor management issues
2. Memory management issues
3. I/O device management issues
4. File management issues
5. Job management issues

5. Comparing the characteristics of time-sharing systems and real-time systems, what are the similarities and differences in their meanings?

Both have the four characteristics of multiplicity, interactivity, independence, and timeliness, but a real-time system must additionally be highly reliable: a time-sharing system lacks the fault-tolerance mechanisms that a real-time system requires.

6. Why introduce the concept of process?

Because concurrently executing programs run in a "stop-and-go" manner: only after a process has been created for a program can its context be saved in its PCB when it is suspended, and restored from the PCB when it is next scheduled, so that execution resumes correctly. A traditional, static program cannot meet these requirements, so the process concept is introduced.

7. How to understand the connection between concurrency and sharing?

Concurrency and sharing are conditions for each other's existence and are the two most basic characteristics of the operating system. On the one hand, resource sharing arises from the concurrent execution of programs (processes): if the system did not allow concurrent execution, there would be no resource-sharing problem at all. On the other hand, if the system cannot manage shared resources effectively, concurrent execution is also impaired. Intuitively, if your operating system could not execute programs concurrently, you could open only one program at a time and would have to close it before opening another; and whenever you do open several programs at once, they inevitably use the same resources.

8. What is the concept of synchronous and asynchronous? How to understand the asynchronicity of the operating system?

Asynchrony: in a multiprogrammed system, because scheduling is unpredictable and concurrent processes execute at independent speeds (indirect constraints), each process advances in a stop-and-go manner at an unpredictable pace. Synchronization: when there are direct constraints among processes, they send messages to each other to fix their execution order. Synchronization does not mean that several processes execute at the same instant; it means that, by exchanging messages, the otherwise chaotic stop-and-go execution becomes orderly.

9. Why is it said that the operating system is the first virtualization of bare metal?

There is a hierarchical relationship between the hardware and software parts of a computer. The hardware sits at the bottom layer, and the operating system is the first layer of software placed on the bare machine; it is the first extension of the hardware's functions.

10. Why is it said that processor management is reflected in process management?

Because the processor is the most precious resource in a computer and is allocated and reclaimed in units of processes; how effectively the processor is allocated to and reclaimed from processes directly determines system performance.

11. What are the functions of memory management?

Memory allocation and deallocation, memory protection, address mapping and memory expansion

12. What are the functions of device management?

Buffer management, device independence, device allocation, device handling, virtual device functions

13. How to divide program modules? What are the requirements for the module?

Division: a piece of code with relatively independent functions
Requirements: high cohesion, low coupling

14. Briefly describe the working principle of von Neumann computer

stored program plus program control

15. What is an interruption? Why introduce the interrupt mechanism?

An interrupt is an event occurring during system operation that causes the CPU to suspend the program currently being executed and, after saving its context, transfer automatically to the handler for that event.
The purposes of introducing the interrupt mechanism are:
1. It makes real-time handling of urgent events possible.
2. It raises the execution efficiency of the processor.
3. It simplifies the design of the OS.

16. How does the CPU sense the interrupt request, and how does it respond to the interrupt request?

The interrupt source sends an interrupt request signal to the CPU. After detecting the request, the CPU saves the context of the current program, transfers to the handler for that interrupt source, and, after the handling is finished, restores the context and resumes the interrupted program.
What does the interrupt handling process look like?
1. Save the context held in the registers.
2. Handle the event that raised the interrupt, i.e., execute the specific interrupt service routine for that interrupt source.
3. Disable interrupts so that no new request can interfere while the context is being restored; restore the context; then re-enable interrupts so that other requests can be answered after control returns to the original program.

17. What does the on-site information of the CPU mainly include?

CPU registers (the general-purpose registers and the instruction register), the program status word (PSW), and the user stack pointer.

18. What is the non-reproducibility of program execution results? How to ensure the reproducibility of execution results?

Non-reproducibility: concurrent execution breaks the closure under which one program monopolizes system resources, so the same program run on the same input may produce different results. Solution: the Bernstein conditions must hold to guarantee that the results of program execution in an asynchronous environment are deterministic.

19. Why introduce the concept of process? What are the characteristics of a process?

To realize the concurrent execution of multiple programs: a process is the running activity of a process entity and an independent unit of resource allocation and scheduling.
Features: dynamism, concurrency, independence, asynchrony, and structure.

What are the functions of the process graph and the precedence graph?
Process graph: describes the family relationship among processes; a node represents a process, and an edge represents a parent-child relationship between processes.
Precedence graph: describes the execution order among statements or processes.

20. What are the typical causes of process blocking and waking up?

1. Request system services
* Unable to obtain service, the process actively blocks
* The service is completed, and the blocked process is awakened by the service release process
2. Start some kind of operation
* The process actively blocks, waiting for the operation to complete
* The operation is completed, and the interrupt handler wakes up the blocked process
3. Cooperative data has not yet arrived
* The data from the cooperating process has not arrived, so the waiting process blocks
* New data arrives, and the cooperating process wakes up the blocked process.
4. No new work to do
* The system process has no new work to do, actively blocking
* When new work arrives, the system process is woken up

21. What operations need to be performed to create a process?

1. Request a blank PCB
2. Allocate resources for the new process
3. Initialize the PCB
4. Insert the PCB into the queue

22. What problem does the synchronization mechanism of the process solve?

To solve the non-reproducibility (uncertainty) of the results produced by processes executing concurrently, i.e., to tame their asynchrony. It coordinates the execution order of multiple related processes.

23. What is a critical section, and what is the purpose of introducing a critical section?

Critical section: the segment of code in a process that accesses critical resources
Purpose: to make processes mutually exclusive, achieving mutually exclusive access to critical resources

24. How to use the semaphore mechanism to realize the mutually exclusive use of resources?

A mutex semaphore is initialized to 1. Before entering the critical section, a process executes P(mutex): if the critical resource is free, the process enters at once; otherwise it blocks on the semaphore's queue. On leaving the critical section it executes V(mutex), which releases the resource and wakes one waiting process, if any.
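As a minimal sketch of semaphore-based mutual exclusion (Python's `threading.Semaphore` stands in for the P and V primitives; the shared counter, the thread count, and the iteration count are made up for illustration):

```python
import threading

mutex = threading.Semaphore(1)  # mutex semaphore, initial value 1
counter = 0                     # shared "critical resource"

def worker():
    global counter
    for _ in range(10000):
        mutex.acquire()   # P(mutex): enter the critical section
        counter += 1      # critical section: access the shared resource
        mutex.release()   # V(mutex): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000
```

Without the P/V pair, the four threads could interleave their updates and lose increments; with it, the final count is always 40000.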

25. How does the mutual exclusion semaphore mechanism reflect the mutual exclusion criterion of resources?

Free entry when idle; busy waiting when occupied; bounded waiting; yield the processor while waiting.

26. How does the semaphore mechanism describe process synchronization in cooperative processes?

The V primitive is placed after the operation that must happen first, and the P primitive before the operation that must happen later; the two primitives always appear as a pair.
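A sketch of this pairing, assuming two hypothetical operations where `first` must precede `second` (the semaphore is initialized to 0, so the P blocks until the V has been executed):

```python
import threading

done = threading.Semaphore(0)  # synchronization semaphore, initial value 0
log = []

def first():
    log.append("step 1")   # the operation that must happen first
    done.release()         # V(done): signal the follower

def second():
    done.acquire()         # P(done): wait until first() has finished
    log.append("step 2")

t2 = threading.Thread(target=second); t2.start()
t1 = threading.Thread(target=first); t1.start()
t1.join(); t2.join()
print(log)  # ['step 1', 'step 2']
```

Even though `second` is started first, it cannot proceed until `first` executes its V.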

27. Discuss the necessity of setting mutex semaphores in the producer-consumer problem.

From the producers' perspective, if several producers enter the buffer pool at the same time, errors such as overwriting each other's slots may occur; the same holds among consumers. However, with exactly one producer and one consumer, each operating on a different buffer slot, there is no conflict, and a mutex semaphore is not strictly necessary.
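The standard solution with the usual three semaphores can be sketched as follows (buffer size and item count are arbitrary; here there is one producer and one consumer, so `mutex` is not strictly required but is kept for generality):

```python
import threading
from collections import deque

N = 3
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer pool
empty = threading.Semaphore(N)   # counts free slots
full  = threading.Semaphore(0)   # counts filled slots
consumed = []

def producer():
    for item in range(5):
        empty.acquire()          # P(empty): wait for a free slot
        mutex.acquire()          # P(mutex): enter the critical section
        buffer.append(item)
        mutex.release()          # V(mutex)
        full.release()           # V(full): one more filled slot

def consumer():
    for _ in range(5):
        full.acquire()           # P(full): wait for a filled slot
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # V(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

Note the P order matters: acquiring `mutex` before `empty`/`full` could deadlock.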

28. What are the ways to avoid deadlock in the dining philosophers problem

1. Allow at most four philosophers to sit at the table at the same time.
2. Allow a philosopher to pick up chopsticks only when both the left and right chopsticks are available (taken as one atomic action).
3. Break the symmetry: for example, odd-numbered philosophers pick up the left chopstick first and even-numbered philosophers the right one first; then at least one philosopher can always obtain both chopsticks and eat.
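The first scheme above, at most four philosophers at the table, can be sketched like this (a counting semaphore `seats` initialized to N−1 is an assumed implementation detail):

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]
seats = threading.Semaphore(N - 1)     # at most N-1 philosophers compete at once
meals = []
meals_lock = threading.Lock()

def philosopher(i):
    seats.acquire()                    # deadlock avoidance: take a seat first
    chopsticks[i].acquire()            # pick up the left chopstick
    chopsticks[(i + 1) % N].acquire()  # pick up the right chopstick
    with meals_lock:
        meals.append(i)                # eat
    chopsticks[(i + 1) % N].release()
    chopsticks[i].release()
    seats.release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(meals))  # [0, 1, 2, 3, 4]
```

With only N−1 philosophers holding chopsticks at once, a full circular wait cannot form, so every philosopher eventually eats.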

29. How does the reader-writer problem achieve reader priority or writer priority?

Reader priority: readers may keep reading as long as any reader is active; only the first reader blocks the writer, and only the last reader wakes the writer.
Writer priority: once a writer arrives, newly arriving readers are blocked; only the first writer blocks the readers, and only the last writer wakes the readers.
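A sketch of the reader-priority scheme (`readcount` is guarded by `rmutex`; `wmutex` is held by each writer and, collectively, by the group of readers via the first/last reader):

```python
import threading

rmutex = threading.Semaphore(1)  # protects readcount
wmutex = threading.Semaphore(1)  # held by writers, or by the reader group
readcount = 0
shared = {"value": 0}
seen = []

def reader():
    global readcount
    rmutex.acquire()
    readcount += 1
    if readcount == 1:        # first reader locks out writers
        wmutex.acquire()
    rmutex.release()
    seen.append(shared["value"])   # read the shared data
    rmutex.acquire()
    readcount -= 1
    if readcount == 0:        # last reader lets writers back in
        wmutex.release()
    rmutex.release()

def writer():
    wmutex.acquire()          # writers need exclusive access
    shared["value"] += 1      # write the shared data
    wmutex.release()

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # 1
```

Any number of readers overlap freely, but the writer runs only when no reader holds `wmutex`.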

30. In the direct message communication mechanism, what communication-related data does the PCB of the receiving process need to save?

1. mq (message queue head pointer)
2. mutex (message queue mutual exclusion semaphore)
3. sm (message queue resource semaphore)

31. What are the working principles and basic requirements of pipeline communication?

working principle:

  • A pipe is a shared file (pipe file) that connects the sending process and the receiving process
  • The sending process writes messages into the pipe as a character stream
  • The receiving process reads data from the pipe in first-in-first-out order
Basic requirements:
  • Mutually exclusive use of the pipe
  • Synchronization between the sending and receiving processes
  • Confirming the existence of the communication partner

32. Discuss code description of pipeline communication

1. Create the pipe file.
2. Establish the connection.
3. Exchange information; afterwards either party may disconnect.

33. What is the purpose of introducing threads?

Increase concurrency and reduce concurrency overhead

34. What is the difference and connection between thread and process?

(1) Basic unit of scheduling: the thread is the basic unit of scheduling and dispatching; in a system without threads, the process is the independent unit of both resource allocation and scheduled execution.
(2) Concurrency: both processes and threads can execute concurrently.
(3) Owning resources: processes own resources, while threads share the resources of the process they belong to.
(4) Independence: threads within the same process are less independent of one another than threads in different processes.
(5) System overhead: threads incur less overhead than processes.
(6) Support for multiprocessor systems: multiple threads of one process can be assigned to multiple processors.

35. What are the levels of scheduling, and what scopes do they work on?

Job scheduling (high-level, between the backup queue on external storage and memory), memory scheduling (medium-level, between memory and the swap area), and process scheduling (low-level, among the ready processes competing for the CPU).

36. What are the timing and reasons for process scheduling?

1. The currently running process terminates: normally, because its task is complete, or abnormally, because an error occurred.
2. The currently running process moves from the running state to the blocked state for some reason, such as an I/O request, a P operation, or a blocking primitive.
3. Control returns to a user process after a system program (e.g., a system call) finishes; this can be regarded as completion of the system process, so a new user process may be scheduled.
4. In a system with preemptive scheduling, a higher-priority process requests the processor, sending the currently running process back to the ready queue (depends on the scheduling method).
5. In a time-sharing system, the time slice allocated to the process is exhausted (depends on the system type).

37. What is the difference and connection between a job and a process?

Difference: a process is one execution of a program, while a job is a task submitted by the user.
Connection: a job usually comprises several processes, and those processes cooperate to complete the job. A job is a static description of the task and a process a dynamic one; the two complement each other.

38. What are the statuses of homework?

submission state, backup state, execution state, and completion state

39. What are the timing and tasks of job scheduling?

Timing: the number of processes in memory is below the degree of multiprogramming.
Task: how many jobs are selected from the backup queue and loaded into memory depends on the degree of multiprogramming; which jobs are selected depends on the scheduling algorithm.

40. What are the two scheduling methods for process scheduling? What is the respective scheduling timing?

1. Non-preemptive:
Scheduling occurs when the running process ends normally or abnormally, or when it blocks.
2. Preemptive:
Scheduling also occurs when a higher-priority process arrives, a shorter process arrives, or the time slice runs out.

41. There are two forms of process priority: static priority and dynamic priority. Try to analyze the scheduling timing and scheduling principles of non-preemptive scheduling and preemptive scheduling based on these two priorities.

Scheduling timing of the non-preemptive method: the CPU becomes idle.
Scheduling timing of the preemptive method: the CPU becomes idle, or a new process arrives.
Priority-based preemptive scheduling:
When a new process arrives:
Static priority: compare the priorities of the new process and the running process.
Dynamic priority: recompute the priorities of the new, running, and ready processes, then compare.
When the CPU is idle:
Static priority: select the highest-priority process in the ready queue.
Dynamic priority: recompute the priorities of the ready processes, then select the highest.

42. Which scheduling method considers both waiting time and service time?

High Response Ratio Priority Scheduling Algorithm
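The algorithm computes, for each waiting job, the response ratio R = (waiting time + required service time) / required service time and picks the largest; a small sketch with made-up jobs:

```python
def response_ratio(waiting, service):
    # response ratio R = (waiting time + required service time) / service time
    return (waiting + service) / service

# hypothetical jobs as (name, waiting_time, service_time)
jobs = [("A", 9, 3), ("B", 4, 4), ("C", 2, 10)]
chosen = max(jobs, key=lambda j: response_ratio(j[1], j[2]))
print(chosen[0])  # A: ratio 4.0 beats B's 2.0 and C's 1.2
```

Short jobs get high ratios quickly (favoring short jobs), while long jobs' ratios grow as they wait (preventing starvation).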

43. Why is the multi-level feedback queue scheduling algorithm better in overall performance?

1. For terminal users, the job is relatively small and can be completed in the first queue, and the response time is very short.
2. For short batch job users, this type of job is a slightly longer short job, which can be completed in the second or third queue, and the turnaround time is relatively short.
3. For users of long batch jobs, the job will eventually run (round robin in the last queue), so there is no need to worry that it will never be processed.

44. Discuss the correctness of the lowest slack first algorithm in Figure 3.9 of the textbook.

In my opinion, Figure 3.9 rests on the premise that preemption occurs whenever a task in the real-time queue has zero slack. Thus at t = 10, A1 has completed, and because B1's slack is smaller than A2's, B1 executes; at t = 30, A2's slack reaches 0, so the scheduler preempts B1's processor and dispatches A2. The rest of the schedule follows in the same way, so Figure 3.9 is self-consistent.

45. What is deadlock? What is the cause of the deadlock?

Deadlock is a stalemate caused by multiple processes competing for shared resources: without outside intervention, none of the deadlocked processes can make progress. The causes of deadlock are competition for non-preemptible resources, competition for consumable resources, and an improper order of process execution.

46. What are the necessary conditions for a deadlock to occur?

1. Mutual exclusion: a process uses the resource exclusively.
2. Request and hold: under dynamic allocation, a process keeps the resources it occupies while applying for new ones.
3. Non-preemption: a resource already allocated to a process cannot be forcibly taken away.
4. Circular wait: when deadlock occurs, a cycle must exist in the system's resource allocation graph (RAG).

47. What are the strategies to prevent deadlocks, and what conditions are they destroyed?

1. Breaking the "request and hold condition"
2. Breaking the "non-preemptible" condition
3. Breaking the "loop wait" condition

48. Briefly describe the workflow of the banker's algorithm

1. Check whether the number of requested resources is legal (within the declared maximum need): if not, reject the request and report an error; otherwise continue.
2. Check whether the system has enough available resources: if not, the process waits; otherwise continue.
3. Tentatively allocate the resources, updating the corresponding data structures.
4. Run the safety algorithm to test whether the system is still in a safe state after this allocation: if so, commit the allocation; otherwise restore the data structures and make the process wait.
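Step 4's safety check can be sketched as follows (the `available`/`allocation`/`need` matrices are the classic five-process, three-resource textbook example, not data from this document):

```python
def is_safe(available, allocation, need):
    """Safety algorithm of the banker's algorithm (sketch)."""
    work = list(available)            # resources currently free
    finish = [False] * len(allocation)
    order = []                        # safe sequence being built
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # process i can run to completion, then returns its resources
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

If `is_safe` returns `False` for the state after a tentative allocation, the allocation is rolled back and the requester waits.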

49. How to detect whether there is a deadlock in the process of applying for different types of resources?

1. Find a process node in the RAG that is neither isolated nor blocked, and remove all of its edges, turning it into an isolated node.
2. The resources thus released may satisfy other processes' requests, turning their request edges into allocation edges; repeat step 1.
3. If all process nodes become isolated nodes, the resource allocation graph is completely reducible.
4. If the graph is completely reducible, there is no deadlock; otherwise the system is deadlocked.

50. What steps must the program go through to run, and what tasks do they complete?

1. Compilation: the compiler translates the source program into object modules in binary code.
2. Linking: the linker combines a group of object modules and the library functions they call into an executable load module.
3. Loading: the loader loads the load module into memory.

51. Address mapping has static mapping and dynamic mapping, try to compare the advantages and disadvantages of the two

Static mapping has low flexibility (the program cannot be moved after loading) but requires no special hardware; dynamic mapping is highly flexible and fast but requires hardware support such as a relocation register.

52. How to implement memory protection with limit register?

Whenever the CPU wants to access the memory, the hardware automatically compares the accessed memory address with the content of the limit register to determine whether it is out of bounds. If there is no out of bounds, access the memory according to this address, otherwise an out of bounds interrupt will be generated.
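A sketch of the hardware comparison, modeling the out-of-bounds interrupt as an exception (a base register is included so the result is a physical address; all numbers are made up):

```python
def check_access(logical_address, limit, base):
    # hardware compares the address with the limit register on every access
    if logical_address >= limit:
        raise MemoryError("address out-of-bounds interrupt")
    return base + logical_address   # relocated physical address

print(check_access(100, 1024, 5000))  # 5100
```

An address at or beyond the limit never reaches memory; the "interrupt" fires before the access.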

53. How to obtain the initial partition of fixed partition and dynamic partition allocation?

Fixed partition: Partition description table.
Dynamic partition: free partition table, free chain.

54. What is the purpose of introducing swap technology? What are the two types of swaps?

It enables processes (or parts of them) to be moved dynamically between memory and external storage, raising memory utilization.
The two types are overall (whole-process) swapping and partial (page or segment) swapping.

55. How is paging storage management implemented?

The system divides the logical address space of a process into pages of equal size and, correspondingly, divides memory into physical blocks of the same size as a page; the pages of a process are then loaded into physical blocks that need not be contiguous. A logical address is split into two parts: the page number and the offset within the page. So that the physical block holding each page can be found at run time, the system builds a page table for each process: each page of the process occupies one entry, which records the number of the memory block holding that page, plus access-control information for paging protection. During execution, the translation from logical to physical address is performed automatically by the hardware address translation mechanism using the page table.

56. What are the data structures in the paging storage management method, and what are their functions?

Process-oriented: page request table, the entire system has a unified page request table to record the memory usage of all processes.
Memory-oriented: memory block table (free block table, free block chain, bit map).
Correspondence between process and memory: page table, which records the physical block corresponding to each page in memory.

57. How to convert logical address to physical address in paging mode?

When a process accesses data at some logical address, the paging address translation mechanism automatically splits the effective (relative) address into a page number and an in-page offset, then uses the page number as an index into the page table; the lookup is done by hardware. Before the lookup, the page number is compared with the page table length: if the page number is greater than or equal to the length, the access lies outside the process's address space, the error is detected by the system, and an address out-of-bounds interrupt is raised. If there is no out-of-bounds error, the entry's location is obtained by adding the page table base address to the product of the page number and the entry size; the physical block number read from that entry is loaded into the block-number field of the physical address register, while the in-page offset from the effective address register is copied into the offset field. Together they form the physical address.
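The translation just described can be sketched as follows (page size, page-table contents, and the example address are made up; the page table is reduced to a list mapping page number to block number):

```python
PAGE_SIZE = 1024   # assumed page size

def translate(logical, page_table):
    page, offset = divmod(logical, PAGE_SIZE)   # split into page number + offset
    if page >= len(page_table):                 # compare with page table length
        raise MemoryError("address out-of-bounds interrupt")
    block = page_table[page]                    # look up the physical block number
    return block * PAGE_SIZE + offset           # block number + offset = physical

page_table = [5, 9, 2]                # page -> physical block
print(translate(1100, page_table))    # 9292: page 1 -> block 9, offset 76
```

In real hardware the lookup also consults a TLB first; this sketch shows only the page-table path.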

58. What are the benefits of introducing segmented storage management?

1. It is convenient for programming.
2. Segmentation facilitates information sharing and information protection.
3. Segments can grow dynamically and be linked dynamically.

59. In the address translation of the segment page storage management method, what are the purposes of the three memory accesses?

The first access reads the segment table entry to obtain the base address of the page table.
The second access reads the page table entry to obtain the physical block number, which yields the physical address of the instruction or data.
The third access fetches the instruction or data from that physical address.

60. What is virtual memory? How to measure the capacity of virtual memory?

Virtual memory is a memory system that logically expands memory capacity through demand loading and replacement. Its logical capacity is determined by the sum of the memory and external storage capacities, and is also limited by the machine's address structure.

61. Explain the theoretical basis for realizing virtual memory.

(1) The principle of locality: an application need not be fully loaded into memory before running. It suffices to load the portion of the program and data about to be used; the rest remains on external storage. When an instruction to be executed or a datum to be accessed is not in memory, the OS brings it in on demand; if memory is full, the replacement function swaps something out first.
(2) Virtual memory must also be based on discrete allocation; its implementations are demand paging, demand segmentation, and demand segment-paging.

62. What are the characteristics of virtual memory?

multiplicity (multiple loading), swappability, and virtuality

63. What is the function of each field in the page table requesting the paging storage management method?

(1) Page number and memory block number: used for address translation when the page is in memory.
(2) Status (presence) bit: indicates whether the page is in memory or on external storage.
(3) External storage address: records the disk block number when the page is on external storage.
(4) Access bit: records how recently or how often the page has been accessed; the replacement algorithm uses it to choose a victim.
(5) Modified bit: indicates whether the page has been modified while in memory.

64. What factors are related to the page fault rate?

1. Page size
2. The number of physical blocks allocated by the process
3. Page replacement algorithm
4. Inherent characteristics of the program

65. Why does the Belady anomaly exist in the FIFO replacement algorithm?

Because the page that has been resident in memory longest is not necessarily the least useful: it may still be accessed frequently, perhaps even just before eviction. Swapping out such a page means it may soon have to be swapped back in, lowering the hit rate; and because FIFO's eviction order ignores usage, adding frames can change which pages are resident in a way that, counter-intuitively, increases the number of page faults.
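The anomaly can be demonstrated with the classic reference string, where FIFO with 4 frames incurs more faults than with 3:

```python
def fifo_faults(refs, frames):
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)    # evict the page resident longest
            memory.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```

Stack algorithms such as LRU cannot exhibit this anomaly, because the pages resident with k frames are always a subset of those resident with k+1.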

66. What is the idea of ​​the LRU replacement algorithm, and how to realize the "timing" of the page?

The LRU algorithm selects for eviction the page that has gone unused for the longest time. The "timing" of pages can be implemented with shift registers: each page in memory is given an n-bit register, and whenever the page is accessed, the high-order bit R(n-1) of its register is set to 1; at regular intervals a timing signal shifts every register one bit to the right. Interpreting each n-bit register as an unsigned integer, the page whose register holds the smallest value is the least recently used page.
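For comparison, a sketch of LRU itself, using an `OrderedDict` as the recency queue rather than the shift registers described above (the reference string is the classic Belady example):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    memory = OrderedDict()           # key order = recency, oldest first
    faults = 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)    # freshly used -> most recent
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict least recently used
            memory[p] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 8
```

On this string, LRU with 4 frames incurs 8 faults, and adding frames never increases the fault count.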

67. Briefly describe the execution process of the improved Clock algorithm.

Each page has an access bit A and a modified bit M, and all pages are linked by pointers into a circular queue; whenever a page is accessed, its access bit is set to 1. When a page must be swapped out, the replacement pointer starts from its current position:
First scan: look for a page with A = 0 and M = 0 (neither recently used nor modified), without changing any bits; such a page is the best victim.
Second scan: look for a page with A = 0 and M = 1, clearing the access bit of each page passed over.
If both scans fail, all access bits are now 0, so repeating the scans is guaranteed to find a victim.
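The two scans of the improved algorithm can be sketched as follows (a simplified model: the pointer always resumes from index 0, and each page is reduced to an `[access_bit, modified_bit]` pair):

```python
def improved_clock_victim(pages):
    # pages: circular queue of [access_bit, modified_bit] pairs
    while True:
        for i, (a, m) in enumerate(pages):
            if a == 0 and m == 0:          # best victim: not used, not dirty
                return i
        for i, p in enumerate(pages):
            if p[0] == 0 and p[1] == 1:    # second choice: not used, but dirty
                return i
            p[0] = 0                       # clear access bits while scanning
        # all access bits are now 0; the next round must find a victim

pages = [[1, 0], [1, 1], [0, 1], [1, 0]]
print(improved_clock_victim(pages))  # 2: the first (A=0, M=1) page found
```

Preferring clean (M = 0) victims avoids the disk write needed to save a dirty page before eviction.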

68. What is the reason for the jitter?

When the degree of multiprogramming is too high, pages are shuttled so frequently between memory and external storage that the CPU can hardly do useful work; system efficiency drops sharply and the system may even collapse. This phenomenon is thrashing (jitter).

69. Why can the introduction of working set mechanism prevent jitter?

When memory blocks are first allocated to a process, give it at least as many blocks as its working set requires, so that it starts running with a relatively low page-fault rate; keeping each resident process's working set in memory prevents thrashing.

70. How to deal with the address out-of-bounds interrupt requesting segmented storage management mode?

When handling the interrupt, first examine the segment's extension bit: if the segment is allowed to grow, increase the segment length and resume the process; otherwise it is a genuine out-of-bounds error, and error handling is performed.

71. How to implement segment sharing?

A shared segment table is configured in the system. Each entry records the number of processes sharing the segment, the various attributes of the segment, and information about how each sharing process uses it.

72. What are the levels of I/O system software?

User-level I/O software
Device-independent software
Device drivers
Interrupt handlers

73. Briefly describe the composition and principle of the character device controller.

Composition: registers (data, command, and status), I/O logic, and interfaces to the CPU and to the device.
Principle: the CPU reads and writes the controller's registers through the system interface; the I/O logic decodes the addresses and commands and controls the device accordingly, reporting status and raising interrupts through the same registers.

74. What is the purpose of introducing the channel?

To make I/O operations independent of the CPU: not only should the data transfer itself proceed without the CPU, but the organization, management, and completion handling of I/O operations should also be as independent of the CPU as possible, so that the CPU has more time for data processing.

75. How does the operating system recognize the interrupt request and how to call the interrupt handler?

Each interrupt source uses a fixed flip-flop to register the interrupt signal, which is called an interrupt bit - a value of 1 means there is an interrupt signal, and a value of 0 means no. When a certain interrupt source requires the CPU to perform interrupt service for it, it outputs an interrupt request signal to set the interrupt request trigger and request an interrupt to the CPU. Each interrupt has an interrupt number associated with it, and an interrupt handler associated with it, and the interrupt handler for each interrupt is stored in an interrupt vector table in the order of the interrupt number. When responding to an interrupt, the system will search the interrupt vector table according to the interrupt number to obtain the entry address of the corresponding interrupt handler, so that it can be transferred to the interrupt handler for execution.

76. What is the difference between an interrupt and a trap?

1. A trap is caused by an instruction the processor is currently executing, while an interrupt is caused by an interrupt source unrelated to the current instruction.
2. The service performed by a trap handler is for the current process, while the service performed by an interrupt handler is generally not for the current process.
3. The CPU can respond to a trap during instruction execution, but it responds to an interrupt only after the current instruction has finished.
What are the main functions of a device driver?
1. Receive commands and parameters from the device-independent software and translate the abstract requests into sequences of low-level, device-specific operations.
2. Check the legality of the user's I/O request, determine the working status of the I/O device, pass the parameters related to the operation, and set the device's working mode.
3. Issue the I/O command: if the device is idle, start it immediately to perform the requested operation; if it is busy, hang the request block on the device queue to wait.
4. Respond promptly to interrupt requests from the device controller and invoke the corresponding interrupt handler according to the interrupt type.

77. What is the difference between the interrupt-driven I/O control method and the DMA I/O control method?

1. The interrupt method transfers data in units of characters, while the DMA method transfers data in units of blocks.
2. The interrupt method raises an interrupt signal each time the data register (DR) is full, while the DMA method raises an interrupt signal only after the entire data block has been transferred.
3. In interrupt mode the data transfer is performed by the interrupt handler under CPU control, while in DMA mode the transfer is performed by the DMA controller by stealing CPU cycles.

78. In the DMA control mode, how is the data that enters the controller's DR transferred to memory?

The DMA controller repeatedly steals CPU cycles to write the data in DR into the specified memory location, incrementing the memory address register and decrementing the data counter each time, until DC = 0
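The transfer loop can be sketched by simulating the controller's registers (DR: data register, MAR: memory address register, DC: data count); the register names follow the text, everything else is illustrative:

```python
# Sketch of the DMA transfer loop described above (registers simulated).
def dma_transfer(source_block, memory, start_addr):
    """Write each word arriving in DR to memory, decrementing DC until 0."""
    mar = start_addr                  # memory address register
    dc = len(source_block)            # data count register
    i = 0
    while dc > 0:                     # each iteration steals one CPU cycle
        dr = source_block[i]          # device fills DR with the next word
        memory[mar] = dr              # DMA controller writes DR to memory
        mar += 1
        dc -= 1
        i += 1
    # DC == 0: block finished; only now does the controller raise an interrupt

memory = [0] * 8
dma_transfer([10, 20, 30], memory, start_addr=2)
```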

79. Under what circumstances do you need to steal CPU cycles in the channel control mode?

1. When the channel fetches the next channel instruction from memory
2. When the channel (device controller) transfers data to memory

80. How does the operating system implement the mapping from logical device names to physical device names?

A logical device table (LUT) is configured in the system; each entry contains three items: the logical device name, the physical device name, and the entry address of the device driver. When a process requests allocation of an I/O device by logical device name, the system allocates an appropriate physical device according to the situation at that time, and creates an LUT entry recording the logical device name used by the application, the physical device name assigned by the system, and the entry address of the device driver. When the process later issues an I/O request using the logical device name, the system finds the corresponding physical device and its driver by searching the LUT.
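A minimal sketch of the mapping (the device names, driver, and helper functions are all hypothetical; a real LUT entry holds a driver entry address rather than a Python function):

```python
# Logical device table (LUT) sketch:
# logical name -> (physical device name, driver entry point).

def printer_driver(op):
    return f"printer driver handling {op}"

lut = {}

def allocate_device(logical_name, physical_name, driver):
    """Called at allocation time: record the binding in the LUT."""
    lut[logical_name] = (physical_name, driver)

def do_io(logical_name, op):
    """An I/O request by logical name is resolved through the LUT."""
    physical_name, driver = lut[logical_name]
    return physical_name, driver(op)

allocate_device("/dev/lp", "printer-1", printer_driver)
```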

81. What is the allocation process for an exclusive device?

1. Allocate the device
2. Allocate the device controller
3. Allocate the channel

82. What is the principle and purpose of the SPOOLing technology implemented at the user layer?

Principle: using a portion of disk space as input and output wells, together with input and output buffers in memory, SPOOLing simulates offline input/output: data for a slow exclusive device is staged on disk, and dedicated processes move it between the wells and the device.
Purpose: to realize the virtual device function, i.e. to turn an exclusive physical device into a shareable virtual device.

83. What is the purpose of introducing a buffer?

1. Alleviate the contradiction between the speed mismatch between CPU and I/O devices
2. Reduce the interrupt frequency of CPU and relax the restriction on CPU interrupt response time
3. Solve the problem of data granularity mismatch
4. Improve the parallelism between the CPU and I/O devices

84. Briefly describe the basic composition and working process of the buffer pool.

A buffer pool is composed of three buffer queues:
1. Empty buffer queue emq: a queue linked from empty (free) buffers
2. Input queue inq: a queue formed by buffers filled with input data
3. Output queue outq: a queue formed by buffers filled with output data

and four kinds of working buffers:
1. Buffer for collecting input data (hin)
2. Buffer for collecting output data (hout)
3. Buffer for extracting input data (sin)
4. Buffer for extracting output data (sout)
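The working process can be sketched with three queues and two of the four operations (hin and sin); the function names and buffer labels are illustrative only:

```python
from collections import deque

# Minimal buffer pool sketch: empty, input, and output queues.
emq = deque(f"buf{i}" for i in range(3))   # empty buffer queue
inq = deque()                              # buffers filled with input data
outq = deque()                             # buffers filled with output data

def collect_input(data):
    """hin: take an empty buffer, fill it with input, queue it on inq."""
    buf = emq.popleft()
    inq.append((buf, data))

def extract_input():
    """sin: take a full input buffer, consume it, return it to emq."""
    buf, data = inq.popleft()
    emq.append(buf)
    return data
```

The symmetric hout/sout pair would move buffers between emq and outq in the same way.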

86. Briefly describe the format of the disk and the structure of the physical sector number

(1) A hard disk has several platters; each platter has two surfaces, and each surface has a read/write head
(2) Each platter surface is divided into multiple fan-shaped areas, namely sectors
(3) The concentric circles of different radii on the same surface are tracks
(4) The tracks with the same radius on different platters form a cylinder

How do logical sector numbers and physical sector numbers convert to each other?
From the logical sector number, compute in turn the cylinder number -> head (track) number -> sector number
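The conversion is simple integer arithmetic once the geometry (number of heads per cylinder, sectors per track) is assumed:

```python
# Logical <-> physical sector conversion under an assumed geometry.
def logical_to_physical(lsn, heads, sectors_per_track):
    """Split a logical sector number into (cylinder, head, sector)."""
    per_cylinder = heads * sectors_per_track
    cylinder = lsn // per_cylinder
    head = (lsn % per_cylinder) // sectors_per_track
    sector = lsn % sectors_per_track
    return cylinder, head, sector

def physical_to_logical(cylinder, head, sector, heads, sectors_per_track):
    """Inverse mapping back to the logical sector number."""
    return (cylinder * heads + head) * sectors_per_track + sector
```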

87. How to calculate disk access time

Seek time + average rotational delay + transfer time
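A worked sketch of the sum (all input figures below are made-up example values): the average rotational delay is half a revolution, and the transfer time is the fraction of a track read times one revolution:

```python
# Disk access time = seek time + average rotational delay + transfer time.
def access_time_ms(seek_ms, rpm, bytes_to_read, bytes_per_track):
    rotation_ms = 60_000 / rpm              # time for one full revolution
    avg_rotational_delay = rotation_ms / 2  # on average, half a revolution
    transfer = (bytes_to_read / bytes_per_track) * rotation_ms
    return seek_ms + avg_rotational_delay + transfer

# Example: 8 ms seek, 6000 rpm (10 ms/rev), reading half a track.
t = access_time_ms(seek_ms=8, rpm=6000, bytes_to_read=16384,
                   bytes_per_track=32768)
```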

88. Compare the advantages and disadvantages of FCFS, SSTF, and SCAN disk scheduling algorithms.

First come first served algorithm (FCFS)
This is a relatively simple disk scheduling algorithm: it services requests in the order in which processes request access to the disk. Its advantage is that it is fair and simple; each process's request is handled in turn, and no process's request goes unsatisfied for a long time. However, because the algorithm does not optimize seek movement, under a heavy load of disk requests it reduces device throughput and lengthens the average seek time, although the variation in response time across processes is small.
The shortest seek time first algorithm (SSTF)
This algorithm always selects the request whose track is closest to the track where the head currently sits, so that each individual seek time is the shortest, and it achieves better throughput than FCFS. However, it does not guarantee the minimum average seek time. Its disadvantage is that requests are not served with equal probability, so response times vary widely; under a heavy load of requests, requests for the innermost and outermost tracks may be delayed indefinitely, making some response times unpredictable.
Scanning Algorithm (SCAN)
The scanning algorithm considers not only the distance between the requested track and the current track, but also the head's current direction of movement. It largely overcomes SSTF's drawbacks of concentrating service on the middle tracks and of widely varying response times, while retaining SSTF's advantages of high throughput and low average response time. However, because the head sweeps back and forth, the tracks at both edges are still visited less frequently than the middle tracks.
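The three algorithms can be compared by the total head movement they produce on a sample request queue. The sketch below assumes a hypothetical disk with cylinders 0-199; the request numbers are made up:

```python
def fcfs(start, requests):
    """Total head movement servicing requests in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    """Always service the pending request nearest the current position."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

def scan(start, requests, direction=1, max_cyl=199):
    """Sweep to one edge of the disk, then reverse (elevator motion)."""
    total, pos = 0, start
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    first, second = (up, down) if direction == 1 else (down, up)
    for r in first:
        total += abs(r - pos)
        pos = r
    if second:
        edge = max_cyl if direction == 1 else 0
        total += abs(edge - pos)   # continue to the edge before reversing
        pos = edge
        for r in second:
            total += abs(r - pos)
            pos = r
    return total
```

On the queue [98, 183, 37, 122, 14, 124, 65, 67] starting at cylinder 53, SSTF and SCAN both move the head far less than FCFS, illustrating the throughput difference described above.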

89. What are the objects managed by the file system?

Files, directories, and disk (tape) storage space

90. How to divide the logical structure types of files

1. Classify according to whether the file has a structure
2. Classify according to the organization of the file

91. How to access structured sequential files and index files respectively

For a sequential file with fixed-length records, reading: maintain a read pointer Rptr that points to the first address of the next record, and execute Rptr = Rptr + L each time a record is read, where L is the record length. Writing: maintain a write pointer Wptr that points to the first address of the record to be written, and execute Wptr = Wptr + L each time a record is written.
For files with variable-length records, read and write pointers are also maintained; after each record is read or written, the length of the record just processed is added to the corresponding pointer.
For index files, an index table is built on a keyword. Given the keyword supplied by the user (program), the index table is searched by binary search to find the matching entry, and the record pointer stored in that entry is used to access the required record. Whenever a new record is added to the index file, the index table must be updated.
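The index-table lookup can be sketched with a sorted list and binary search; the record data and helper names are illustrative, and the "record pointer" is modeled as a plain integer:

```python
import bisect

# Index table: kept sorted by keyword, searched by binary search.
index_table = []   # sorted list of (keyword, record_pointer)

def add_record(keyword, record_pointer):
    """Adding a record means the index table must also be updated."""
    bisect.insort(index_table, (keyword, record_pointer))

def find_record(keyword):
    """Binary-search the index table; return the record pointer or None."""
    keys = [k for k, _ in index_table]
    i = bisect.bisect_left(keys, keyword)
    if i < len(keys) and keys[i] == keyword:
        return index_table[i][1]
    return None

add_record("carol", 300)
add_record("alice", 100)
add_record("bob", 200)
```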

92. What are the basic functions of the file directory

(1) Realize access by name
(2) Improve the retrieval speed of directories
(3) File sharing
(4) Allow duplicate file names

93. What is the purpose of introducing index nodes?

To reduce the number of disk accesses (disk startups) when retrieving a file: with index nodes, a directory entry contains only the file name and the i-node number, so directory entries are smaller and fewer disk blocks must be read during a directory search

94. What are the two ways to share files?

1. Use index nodes to realize file sharing
2. Use symbolic links to realize file sharing

How is file protection based on the access matrix implemented?
1. Access control lists (the access matrix stored by column, one list per file)
2. Capability (access authority) lists (the access matrix stored by row, one list per user)
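The two storage schemes are just the column-wise and row-wise decompositions of the same matrix; a sketch with made-up users, files, and rights:

```python
# Access matrix sketch: (user, file) -> set of rights.
access_matrix = {
    ("alice", "report.txt"): {"read", "write"},
    ("bob", "report.txt"): {"read"},
    ("alice", "notes.txt"): {"read"},
}

def acl(filename):
    """Access control list: one matrix column, stored with the file."""
    return {u: r for (u, f), r in access_matrix.items() if f == filename}

def capability_list(user):
    """Capability list: one matrix row, stored with the user."""
    return {f: r for (u, f), r in access_matrix.items() if u == user}
```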

95. What is a transaction? What is the purpose of setting up a transaction?

(1) A transaction is a program unit that accesses, and possibly modifies, various data items
(2) A transaction can also be viewed as a sequence of related read and write operations
(3) The execution of a transaction is atomic: either all of its operations take effect, or none of them do


Origin blog.csdn.net/m0_58235748/article/details/131006232