Computer Operating System (3rd Edition) Answers to Homework Exercises (Full Version)

Chapter One

1. What are the main goals in designing a modern OS?

Answer: (1) Effectiveness (2) Convenience (3) Scalability (4) Openness

2. In which aspects can the role of OS be manifested?

Answer: (1) OS as the interface between the user and the computer hardware system

(2) OS as the manager of computer system resources

(3) OS implements the abstraction of computer resources

3. Why is it said that the OS implements the abstraction of computer resources?

Answer: The OS first overlays a layer of I/O device management software on the bare machine, providing the first level of abstraction of the hardware operations; on top of that layer, file management software is overlaid, providing the second level of abstraction of the hardware resources.

By installing layer upon layer of system software on the computer hardware, the OS not only enhances the system's functionality but also hides the details of hardware operation; together these layers realize the abstraction of computer resources.

4. What is the main driving force for the formation and development of the multi-channel batch processing system?

Answer: The main driving force comes from social needs and technological development in four aspects:

(1) Continuously improve the utilization rate of computer resources;

(2) User-friendly;

(3) Continuous updating of devices;

(4) Continuous development of computer architecture.

5. What is offline I/O and online I/O?

Answer: Offline I/O means that the paper tape or cards carrying the user's programs and data are first loaded into the tape reader or card reader, and then, under the control of a peripheral (satellite) machine, the data or programs on the tape or cards are transferred onto magnetic tape. Input and output in this mode are completed entirely under the control of the peripheral machine, without involving the host.

Online I/O means that the input and output of programs and data are carried out under the direct control of the host computer.

6. What is the main driving force for the formation and development of the time-sharing system?

Answer: The main driving force behind the formation and development of time-sharing systems is to better satisfy the needs of users. This shows itself mainly in: time-sharing the CPU shortens the average turnaround time of jobs; human-computer interaction enables users to control their jobs directly; and sharing the host enables multiple users to use the same computer at the same time while processing their own jobs independently.

7. What are the key issues in implementing a time-sharing system? How should it be solved?

Answer: The key issue is that when a user types a command at his own terminal, the system must be able to receive and process the command in time and return the result within a delay acceptable to the user.

Solution: For timely reception, multiple interface cards can be installed so that the host can receive the data typed at many terminals simultaneously, and a buffer is configured for each terminal to hold temporarily the commands or data the user types. For timely processing, all user jobs should reside directly in memory, and each job is given a time slice and allowed to run only within its own time slice; in this way, within a fairly short period every job gets a chance to run once.

8. Why is the real-time OS introduced?

Answer: A real-time operating system is one that can respond in time to requests from external events, complete the processing of each event within the specified time, and control all real-time tasks so that they run in a coordinated way. Real-time OSes are introduced to satisfy application requirements, in particular the needs of real-time control and real-time information processing.

9. What are hard real-time tasks and soft real-time tasks? Give examples.

Answer: A hard real-time task is one whose deadline the system must meet; otherwise unpredictable consequences may occur. Example: the control of a launch vehicle.

A soft real-time task is one whose deadline is not strict; occasionally missing the deadline has little effect on the system. Examples: updating web page content, a train ticketing system.

10. Which operating systems dominated on 8-bit and 16-bit microcomputers?

Answer: Single-user, single-task operating systems, the most representative being CP/M and MS-DOS.

11. List five major versions of the Windows OS and state how each improved on its predecessor.

Answer:

(1) Microsoft Windows 1.0 was Microsoft's first attempt at a graphical user interface on the personal computer.

(2) Windows 95 was a hybrid 16-bit/32-bit system, the first to support 32-bit applications. It brought a more powerful, more stable and more practical desktop graphical user interface and ended the competition among desktop operating systems.

(3) Windows 98 was Microsoft's hybrid 16-bit/32-bit Windows operating system; it improved support for hardware standards, overhauled memory management, and was a multi-process operating system.

(4) Windows XP was based on Windows 2000 and featured the new "Luna" graphical user interface; it simplified user security features and integrated a firewall.

(5) Windows Vista contained hundreds of new features, notably the new graphical user interface and the Windows Aero interface style, enhanced search (Windows Indexing Service), new media-creation tools, and redesigned networking, audio, printing and display subsystems.

12. Compare time-sharing systems with real-time systems in terms of interactivity, timeliness, and reliability.

Answer: (1) Timeliness: the timeliness requirement of a real-time information processing system is similar to that of a time-sharing system and is determined by what a human user can accept, whereas the timeliness of a real-time control system is determined by the start deadline or completion deadline required by the controlled object, generally on the order of seconds to milliseconds, and in some cases even below 100 microseconds.

(2) Interactivity: a real-time information processing system is interactive, but the interaction between people and the system is limited to accessing certain specific, dedicated service programs in the system; unlike a time-sharing system, it cannot provide end users with services such as data processing and resource sharing.

(3) Reliability: a time-sharing system also requires reliability, but a real-time system demands a much higher degree of reliability, because any error may cause huge economic losses or even catastrophic consequences. Therefore multi-level fault-tolerance measures are usually taken in real-time systems to guarantee the safety of the system and of its data.

13. What are the characteristics of OS? What are its most basic characteristics?

Answer: The four basic characteristics are concurrency, sharing, virtuality and asynchrony; the most basic characteristic is concurrency.

14. What are the main functions of processor management? What is their main task?

Answer: The main functions of processor management are: process management, process synchronization, process communication and processor scheduling;

Process management: create processes for jobs, terminate processes that have finished, and control the state transitions of processes during execution.

Process synchronization: coordinate the execution of multiple processes (including threads) so that they run in an orderly way.

Process communication: exchange information among cooperating processes.

Processor scheduling:

(1) Job scheduling: select a number of jobs from the backup queue according to a certain algorithm, allocate the resources they need to run (first of all memory), create processes for them and insert them into the ready queue.

(2) Process scheduling: select a process from the ready queue according to a certain algorithm, assign the processor to it, set up its execution context, and put it into execution.

15. What are the main functions of memory management? What are their main tasks?

Answer: The main functions of memory management are: memory allocation, memory protection, address mapping and memory expansion.

Memory allocation: allocate memory for each program.

Memory protection: ensure that each user program runs only in its own memory space without interfering with each other.

Address mapping: convert the logical addresses in a program's address space into the corresponding physical addresses in memory.

Memory expansion: implement the demand-loading and replacement functions so that the memory appears logically larger than it is.

16. What are the main functions of device management? What is its main task?

Answer: The main functions are buffer management, device allocation, device processing (device drivers), and virtual devices.

Main tasks: complete the I/O requests made by users and allocate I/O devices to them; improve the utilization of the CPU and the I/O devices; increase I/O speed; and make I/O devices convenient for users to use.

17. What are the main functions of file management? What are its main tasks?

Answer: The main functions of file management are: management of file storage space, directory management, file read/write management, and file protection.

The main tasks of file management are: to manage user files and system files, make them convenient to use, and guarantee the security of files.

18. What causes the operating system to have the characteristic of asynchrony?

Answer: The asynchrony of an operating system shows itself in three aspects: first, the asynchrony of processes, which advance at speeds that cannot be predicted; second, the non-reproducibility of programs, i.e. the results of program execution are sometimes indeterminate; third, the unpredictability of program execution time, i.e. when each program executes, in what order, and when it finishes are all uncertain.

19. What problems exist with the module-interface (modular) approach? How can they be solved?

Answer: (1) Problems with the module-interface approach: ① during OS design, the interfaces specified between modules can hardly match the actual interface requirements discovered after the modules are completed; ② in the design stage the designer must make a series of decisions, each of which should build on the previous one, but in modular design all modules are designed side by side, so no reliable order can be found; the decisions become disordered, making it hard for programmers to base every design decision on a reliable foundation. For this reason the module-interface approach is called the "disordered module method".

(2) Solution: turn the disordered decision sequence of the module-interface approach into an ordered one, i.e. introduce the ordered layered (hierarchical) approach.

20. In a microkernel OS, why is the client/server model adopted?

Answer: The C/S model has unique advantages: (1) distributed processing and storage of data; (2) ease of centralized management; (3) flexibility and extensibility; (4) ease of adapting application software.

21. Describe what a microkernel OS is.

Answer: (1) A sufficiently small kernel; (2) based on the client/server model; (3) application of the principle of separating mechanism from policy; (4) adoption of object-oriented technology.

22. What new technology is applied in an OS based on the microkernel structure?

Answer: In an OS based on the microkernel structure, object-oriented programming technology is adopted.

23. What is microkernel technology? What functions are usually provided in the microkernel?

Answer: Moving as many OS components and functions as possible up to higher levels (i.e. user mode) to run there, while leaving behind a kernel that is as small as possible to perform only the most basic core functions of the operating system, is called microkernel technology. The microkernel usually provides functions such as process (thread) management, low-level memory management, and interrupt and trap handling.

24. What are the advantages of a microkernel operating system? Why does it have these advantages?

Answer: 1) Improve the scalability of the system

2) Enhanced system reliability

3) Portability

4) Provides support for distributed systems

5) Incorporating object-oriented technology

Chapter Two

1. What is a precedence graph? Why is the precedence graph introduced?

Answer: A precedence graph is a directed acyclic graph, denoted DAG (Directed Acyclic Graph), used to describe the execution-order (precedence) relations among processes or statements; it is introduced to show which operations must precede others and which can proceed concurrently.

2. Draw the precedence graph of the following four statements:

S1: a := x + y; S2: b := z + 1; S3: c := a - b; S4: w := c + 1;

Answer: In the precedence graph, S1 → S3 and S2 → S3 (S3 uses a and b), and S3 → S4 (S4 uses c); S1 and S2 have no precedence relation and may execute concurrently.

3. Why does the concurrent execution of programs produce the characteristic of intermittent execution?

Answer: When programs execute concurrently, they share system resources and must cooperate with one another to complete the same task; as a result, mutual constraints form among the concurrently executing programs, which makes their execution proceed in a stop-and-go, intermittent manner.

4. Why do programs lose closure and reproducibility when they execute concurrently?

Answer: When programs execute concurrently, multiple programs share the various resources in the system, so the state of these resources is changed by several programs rather than by one program alone; this destroys the closure of each program and, in turn, its reproducibility.

5. Why is the process concept introduced in the operating system? What kind of impact will it have?

Answer: In order to enable programs to execute concurrently in a multiprogramming environment, and to describe and control the concurrently executing programs, the concept of process is introduced into the operating system.

Impact: Enables concurrent execution of programs.

6. Try to compare processes and programs in terms of dynamism, concurrency and independence?

Answer: (1) Dynamism is the most basic characteristic of a process: a process comes into being through creation, runs when scheduled, pauses when it cannot continue, and dies when it is terminated; it has a life cycle. A program, by contrast, is merely an ordered set of instructions, a static entity.

(2) Concurrency is an important characteristic of a process and also an important characteristic of the OS. The very purpose of introducing processes is to let a program execute concurrently with the programs of other processes; a program by itself cannot execute concurrently.

(3) Independence means that a process entity is a basic unit that can run independently, obtain resources independently, and be scheduled independently. A program for which no process has been created cannot take part in execution as an independent unit.

7. Try to explain the role of the PCB, why is the PCB the only sign of the existence of the process?

Answer: The PCB is part of the process entity and the most important record-type data structure in the operating system. Its role is to turn a program that cannot run independently in a multiprogramming environment into a basic unit that can run independently and execute concurrently with other processes. The OS controls and manages the concurrently executing processes entirely according to their PCBs, so the PCB is the only sign of a process's existence.

8. State the typical reasons why a process transitions between the three basic states.

Answer: (1) Ready state → Execution state: the process is scheduled and allocated the CPU

(2) Execution state → ready state: the time slice is exhausted

(3) Execution state → blocking state: I/O request

(4) Blocking state → ready state: I/O completed

9. Why is suspend state introduced? What are the properties of this state?

Answer: The suspend state is introduced to satisfy five different needs: the needs of the end user, the needs of the parent process, the needs of the operating system, the needs of swapping, and the needs of load regulation. A process in the suspended state cannot be scheduled onto the processor (it must first be activated).

10. What kind of processor state information should be saved during process switching?

Answer: When performing process switching, the processor state information to be saved includes:

(1) the process's current temporary data (the contents of the processor's registers);

(2) the address of the next instruction (program counter);

(3) the process's status information (program status word);

(4) system-call parameters and call/return address information.

11. Describe the main events that lead to process creation.

Answer: The main events that cause process creation are: user login, job scheduling, service provision, and application request.

12. Describe the main event that causes a process to be terminated.

Answer: The main events that cause the process to be canceled are: normal end, abnormal end (out-of-bounds error, protection error, illegal instruction,

privileged instruction error, run timeout, wait timeout, arithmetic operation error, I/O failure), external intervention (operator or operating system

system intervention, parent process request, parent process termination).

13. What is the main work to be done when creating a process?

answer:

(1) After the OS finds a request to create a new process event, it calls the process creation primitive Creat();

(2) Apply for a blank PCB;

(3) Allocate resources for the new process;

(4) Initialize the process control block;

(5) Insert the new process into the ready queue.

14. What is the main work to be done when killing a process?

answer:

(1) According to the terminated process identifier, retrieve the process PCB from the PCB set, and read the process status.

(2) If the terminated process is in the execution state, immediately stop its execution and set the rescheduling flag to true, indicating that the processor should be rescheduled after the process is terminated.

(3) If the process has child processes, all descendant processes should be terminated to prevent them from becoming uncontrollable processes.

(4) Return all resources owned by the terminated process to the parent process or to the system.

(5) Remove the terminated process PCB from its queue or list, and wait for other programs to collect information.

15. What is the main event that causes the process to block or be woken up?

Answer: a. Requesting a system service; b. Initiating some kind of operation; c. New data has not yet arrived; d. No new work to do.

16. What two forms of constraints exist for a process at runtime? And illustrate it with examples.

answer:

(1) Indirect mutual constraint (mutual exclusion). Example: there are two processes A and B; if A issues a print request while the system has already allocated the only printer to process B, process A can only block; once B releases the printer, A changes from blocked to ready.

(2) Direct mutual constraint (synchronization). Example: process A supplies data to process B through a single buffer. When the buffer is empty, process B blocks because it cannot obtain the data it needs; when process A puts data into the buffer, it wakes up process B. Conversely, when the buffer is full, process A blocks because there is no buffer space to put data into; after process B takes the data out of the buffer, it wakes up A.

17. Why must a process execute the "entry section" code before entering a critical section, and the "exit section" code before leaving it?

Answer: To achieve mutually exclusive access by multiple processes to a critical resource, a section of code must be placed before the critical section to check whether the critical resource is currently being accessed. If it is not, the process may enter the critical section and access the resource, and the "being accessed" flag is set; if it is being accessed, the process must not enter the critical section. The code implementing this check is the "entry section" code. After leaving the critical section, the "exit section" code must be executed to restore the "not being accessed" flag, so that other processes can access this critical resource again.

18. What basic criteria should a synchronization mechanism follow? Why?

Answer: The basic criteria a synchronization mechanism should follow are: free entry when idle, waiting when busy, bounded waiting, and yielding the processor while waiting.

Reason: to guarantee that processes enter their own critical sections mutually exclusively, without busy-waiting or starvation.

19. Explain the wait and signal operations on a record-type semaphore from the physical point of view.

Answer: wait(S): when S.value > 0, there are still resources of this type available in the system. Performing a wait operation means the process requests one unit of this resource, so the number of units available for allocation decreases by one, which is described as S.value := S.value - 1. When S.value < 0, all resources of this type have been allocated, so the process should call the block primitive to block itself, give up the processor, and insert itself into the semaphore's waiting list S.L.

signal(S): performing a signal operation means releasing one unit of the resource, so the number of units available for allocation increases by one, i.e. S.value := S.value + 1. If after the increment S.value ≤ 0, processes waiting for this resource are still blocked on the semaphore's waiting list, so the wakeup primitive should be called to wake up the first waiting process in S.L.
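
The bookkeeping described above can be sketched in C on top of POSIX threads. This is only an illustrative sketch, not the textbook's implementation: the blocked-process list S.L is represented by a condition variable, and the extra wakeups counter is an assumption introduced so that a woken thread consumes exactly one signal.

#include <pthread.h>

typedef struct {
    int value;               /* > 0: units still available; < 0: |value| waiters */
    int wakeups;             /* pending wakeups: stands in for removal from S.L  */
    pthread_mutex_t lock;    /* protects value and wakeups                       */
    pthread_cond_t  queue;   /* plays the role of the blocked list S.L           */
} rsemaphore;

void r_init(rsemaphore *s, int v) {
    s->value = v; s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void r_wait(rsemaphore *s) {                       /* wait(S)   */
    pthread_mutex_lock(&s->lock);
    s->value--;                                    /* S.value := S.value - 1 */
    if (s->value < 0) {                            /* no unit left: block    */
        while (s->wakeups == 0)
            pthread_cond_wait(&s->queue, &s->lock);
        s->wakeups--;                              /* consume one wakeup     */
    }
    pthread_mutex_unlock(&s->lock);
}

void r_signal(rsemaphore *s) {                     /* signal(S) */
    pthread_mutex_lock(&s->lock);
    s->value++;                                    /* S.value := S.value + 1 */
    if (s->value <= 0) {                           /* someone is waiting     */
        s->wakeups++;
        pthread_cond_signal(&s->queue);            /* wake one waiter        */
    }
    pthread_mutex_unlock(&s->lock);
}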

20. Do you think the integer semaphore mechanism fully follows the four criteria of the synchronization mechanism?

Answer: The integer semaphore mechanism does not fully follow the four criteria; it does not satisfy the "yield the processor while waiting" criterion, because a process busy-waits in the while test instead of blocking.

21. How can the semaphore mechanism be used to achieve mutually exclusive access by multiple processes to a critical resource? Give an example.

Answer: To let multiple processes access a critical resource mutually exclusively, it suffices to set a mutual-exclusion semaphore mutex for that resource with an initial value of 1, and to place each process's critical section CS that accesses the resource between wait(mutex) and signal(mutex). In this way, every process that wants to access the critical resource must first perform a wait operation on mutex before entering its critical section; if the resource is not being accessed at that moment, the wait operation succeeds and the process may enter its critical section. If another process then also wants to enter its own critical section, its wait operation on mutex will fail and it blocks, which guarantees mutually exclusive access to the critical resource. When the process that accessed the critical resource leaves its critical section, it performs a signal operation on mutex to release the resource. Using a semaphore to achieve mutual exclusion between processes can be described as follows:

Var mutex: semaphore:=1;

begin

parbegin

process 1: begin

repeat

wait(mutex);

critical section

signal(mutex);

remainder section

until false;

end

process 2: begin

repeat

wait(mutex);

critical section

signal(mutex);

remainder section

until false;

end

parend

22. Write the corresponding programs to describe the precedence graphs shown in Figure 2-17.

Answer: (a) Var a, b, c, d, e, f, g, h: semaphore := 0, 0, 0, 0, 0, 0, 0, 0;

begin

parbegin

begin S1; signal(a); signal(b); end;

begin wait(a); S2; signal(c); signal(d); end;

begin wait(b); S3; signal(e); end;

begin wait(c); S4; signal(f); end;

begin wait(d); S5; signal(g); end;

begin wait(e); S6; signal(h); end;

begin wait(f); wait(g); wait(h); S7; end;

parend

end

(b)Var a, b, c, d, e, f, g, h,i,j; semaphore:= 0,0, 0, 0, 0, 0, 0,0,0, 0;

begin

parbegin

begin S1; signal(a); signal(b); end;

begin wait(a); S2; signal(c); signal(d); end;

begin wait(b); S3; signal(e); signal(f); end;

begin wait(c); S4; signal(g); end;

begin wait(d); S5; signal(h); end;

begin wait(e); S6; signal(i); end;

begin wait(f); S7; signal(j); end;

begin wait(g);wait(h); wait(i); wait(j); S8;end;

parend

end

23. In the producer-consumer problem, if signal(full) or signal(empty) is missing, what is the effect on the execution result?

Answer:

If signal(full) is missing, the value of the semaphore full is never changed after the producer processes start; even when the buffer pool is full of products, full is still 0. A consumer process executing wait(full) therefore believes the buffer pool is empty, cannot obtain any product, and remains waiting forever.

If signal(empty) is missing, then after the producer processes have filled all n buffers with products we have empty = 0 and full = n. From then on, every time a consumer process takes a product the value of empty is not changed; even when the buffer pool has become completely empty, empty is still 0. Thus, although there are n empty buffers, a producer process that wants to put a product into the buffer pool will block because it cannot obtain an empty buffer.

24. In the producer-consumer problem, what happens if the two wait operations, wait(full) and wait(mutex), are swapped, or if signal(mutex) and signal(full) are swapped?

Answer: After the two wait operations are swapped, deadlock may occur. Consider the case in which all buffers are empty. If a consumer process first executes wait(mutex) and succeeds, then when it executes wait(full) it blocks because the operation fails, and it expects a producer to execute signal(full) to wake it up. Before that happens it cannot execute signal(mutex), so every producer and every other consumer process that tries to enter its critical section by executing wait(mutex) also blocks; the system is therefore very likely to deadlock. (The same argument applies to the producer if wait(empty) and wait(mutex) are swapped when the buffer pool is full.)

If signal(mutex) and signal(full) are swapped, only the order in which the process releases the critical resource and the product is affected; this does not deadlock the system, so the two positions may be interchanged.

25. Suppose we set a lock W for a certain critical resource: W = 1 means the lock is closed, and W = 0 means the lock is open. Write primitives for closing and opening the lock, and use them to implement mutual exclusion.

Answer: With an integer (busy-waiting) lock:

lock(W): while W = 1 do no-op;

W := 1;

unlock(W): W := 0;

With a record-type (blocking) lock:

lock(W): W := W + 1;

if (W > 1) then block(W, L);

unlock(W): W := W - 1;

if (W > 0) then wakeup(W, L);

example:

Var W:semaphore:=0;

begin

repeat

lock(W);

critical section

unlock(W);

remainder section

until false;

end
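
As a rough C analogue of the busy-waiting version above (a sketch only, assuming C11 atomics; the names spin_lock and spin_unlock are ours, not a standard API):

#include <stdatomic.h>

typedef atomic_int lock_t;          /* 1 = closed, 0 = open, as W above */

void spin_lock(lock_t *W) {
    /* keep testing-and-setting until we find the lock open (previous value 0) */
    while (atomic_exchange(W, 1) == 1)
        ;                           /* busy wait: the "while W = 1 do no-op" */
}

void spin_unlock(lock_t *W) {
    atomic_store(W, 0);             /* open the lock: W := 0 */
}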

26. Fix the errors in the following solution to the producer-consumer problem:

Answer (corrected solution):

producer:

begin

repeat

produce an item in nextp;

wait(empty);

wait(mutex);

buffer(in):=nextp;

in:=(in+1) mod n;

signal(mutex);

signal(full);

until false;

end

consumer:

begin

repeat

wait(full);

wait(mutex);

nextc:=buffer(out);

out:=(out+1) mod n;

signal(mutex);

signal(empty);

consume the item in nextc;

until false;

end

27.Try to write an algorithm for the dining philosophers problem without deadlock using record semaphores.

Answer: Var chopstick: array[0,…,4] of semaphore := (1, 1, 1, 1, 1);

room: semaphore := 4; {at most four philosophers may try to pick up chopsticks at the same time, which breaks the circular wait and prevents deadlock}

The activity of the i-th philosopher can be described as:

repeat

wait(room);

wait(chopstick[i]);

wait(chopstick[(i+1) mod 5]);

eat;

signal(chopstick[i]);

signal(chopstick[(i+1) mod 5]);

signal(room);

think;

until false;
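
A compilable C sketch of the same idea, assuming POSIX semaphores and threads; the "room" counter of N-1 is what prevents the circular wait:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

static sem_t chopstick[N];
static sem_t room;                         /* at most N-1 philosophers compete */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (;;) {
        sem_wait(&room);                   /* enter the "room"  */
        sem_wait(&chopstick[i]);           /* left chopstick    */
        sem_wait(&chopstick[(i + 1) % N]); /* right chopstick   */
        printf("philosopher %d eats\n", i);
        sem_post(&chopstick[i]);
        sem_post(&chopstick[(i + 1) % N]);
        sem_post(&room);
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    sem_init(&room, 0, N - 1);
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}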

28. In a measurement and control system, the data-acquisition task puts the data it collects into a single buffer, and the computation task takes the data out of that buffer for processing. Using the semaphore mechanism, write a synchronization algorithm that lets the two tasks share the single buffer.

answer:

a. Var mutex, empty, full: semaphore:=1, 1, 0;

gather:

begin

repeat

……

gather data in nextp;

wait(empty);

wait(mutex);

buffer:=nextp;

signal(mutex);

signal(full);

until false;

end

compute:

begin

repeat

……

wait(full);

wait(mutex);

nextc:=buffer;

signal(mutex);

signal(empty);

compute data in nextc;

until false;

end

b. Var empty, full: semaphore:=1, 0;

gather:

begin

repeat

……

gather data in nextp;

wait(empty);

buffer:=nextp;

signal(full);

until false;

end

compute:

begin

repeat

……

wait(full);

nextc:=buffer;

signal(empty);

compute data in nextc;

until false;

end

29. Draw a diagram to show which parts a monitor consists of, and explain why condition variables are introduced.

Answer: A monitor consists of four parts: ① the name of the monitor; ② the declarations of the shared data structures local to the monitor; ③ a set of procedures that operate on those data structures; ④ statements that set initial values for the shared data local to the monitor.

When a process that has called the monitor is blocked or suspended inside it, it stays there until the cause of the blocking or suspension is removed; during this time, if the process does not release the monitor, no other process can enter the monitor and all of them are forced to wait for a long time. The condition variable (condition) is introduced to solve this problem.

30. How can monitors be used to solve the producer-consumer problem?

Answer: First establish a monitor, named ProducerConsumer (PC for short), containing two procedures:

(1) put(item): the producer uses this procedure to put its product into the buffer pool; the integer variable count records the number of products in the buffer pool. When count ≥ n, the buffer pool is full and the producer must wait.

(2) get(item): the consumer uses this procedure to take one product out of the buffer pool. When count ≤ 0, there is no product left in the buffer pool and the consumer must wait.

The PC monitor can be described as follows:

type producer-consumer =monitor

Var in,out,count:integer;

buffer:array[0,…,n-1]of item;

notfull,notempty:condition;

procedure entry put(item)

begin

if count>=n then notfull.wait;

buffer(in):=nextp;

in:=(in+1)mod n;

count:=count+1;

if notempty.queue then notempty.signal;

end

procedure entry get(item)

begin

if count<=0 then notempty.wait;

nextc:=buffer(out);

out:=(out+1)mod n;

count:=count-1;

if notfull.queue then notfull.signal;

end

begin in:=out:=0;

count:=0

end

When using monitors to solve the producer-consumer problem, the producers and consumers can be described as:

producer: begin

repeat

produce an item in nextp;

PC.put(item);

until false;

end

consumer: begin

repeat

PC.get(item);

consume the item in nextc;

until false;

end
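
For comparison, a minimal C sketch of the same monitor idea, using a pthread mutex as the monitor's mutual-exclusion lock and two condition variables as notfull/notempty (an illustration under those assumptions, not the textbook's Pascal-style monitor):

#include <pthread.h>

#define N 8

typedef struct {
    int buf[N];
    int in, out, count;
    pthread_mutex_t lock;               /* the monitor's mutual-exclusion lock */
    pthread_cond_t notfull, notempty;   /* condition variables                 */
} pc_monitor;

void pc_init(pc_monitor *m) {
    m->in = m->out = m->count = 0;
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->notfull, NULL);
    pthread_cond_init(&m->notempty, NULL);
}

void pc_put(pc_monitor *m, int item) {             /* put(item) */
    pthread_mutex_lock(&m->lock);
    while (m->count >= N)                          /* buffer full: wait on notfull */
        pthread_cond_wait(&m->notfull, &m->lock);
    m->buf[m->in] = item;
    m->in = (m->in + 1) % N;
    m->count++;
    pthread_cond_signal(&m->notempty);
    pthread_mutex_unlock(&m->lock);
}

int pc_get(pc_monitor *m) {                        /* get(item) */
    int item;
    pthread_mutex_lock(&m->lock);
    while (m->count <= 0)                          /* buffer empty: wait on notempty */
        pthread_cond_wait(&m->notempty, &m->lock);
    item = m->buf[m->out];
    m->out = (m->out + 1) % N;
    m->count--;
    pthread_cond_signal(&m->notfull);
    pthread_mutex_unlock(&m->lock);
    return item;
}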

31. What is an AND semaphore? Write a solution to the producer-consumer problem using AND semaphores.

Answer: To avoid the deadlock that may arise when a process needs several shared resources at the same time, the AND condition is added to the wait operation: all the critical resources a process needs during its entire run are allocated to it at once, in an all-or-nothing fashion, and are released together after use.

Solving the producer-consumer problem can be described as follows:

var mutex,empty,full: semaphore:=1,n,0;

buffer: array[0,…,n-1] of item;

in,out: integer:=0,0;

begin

parbegin

producer: begin

repeat

produce an item in nextp;

wait(empty);

wait(s1,s2,s3,...,sn); //s1,s2,...,sn are the conditions for executing the producer process except empty

wait(mutex);

buffer(in):=nextp;

in:=(in+1) mod n;

signal(mutex);

signal(full);

signal(s1,s2,s3,...,sn);

until false;

end

consumer: begin

repeat

wait(full);

wait(k1,k2,k3,…,kn); //k1,k2,…,kn are the conditions for executing the consumer process except full

wait(mutex);

nextc:=buffer(out);

out:=(out+1) mod n;

signal(mutex);

signal(empty);

signal(k1,k2,k3,…,kn);

consume the item in nextc;

until false;

end

parend

end

32. What is a semaphore set? Write a solution to the reader-writer problem using semaphore sets.

Answer: A semaphore set extends the AND semaphore: in a single atomic wait (Swait) or signal (Ssignal) operation, a process can request or release several units of each of several resources at once, and a resource is allocated only when its value is at or above a specified lower bound. A reader-writer solution using semaphore sets:

Var RN: integer;

L, mx: semaphore := RN, 1;

begin

parbegin

reader:begin

repeat

Swait(L,1,1);

Swait(mx,1,0);

perform read operation;

Ssignal(L,1);

until false

end

writer:begin

repeat

Swait(mx,1,1;L,RN,0);

perform write operation;

Ssignal(mx,1);

until false

end

parend

end

33.Try comparing low-level and high-level communication tools between processes.

Answer: With low-level communication tools it is inconvenient for users to implement process communication, efficiency is low, the communication is not transparent to the user, and all operations must be implemented by the programmer. High-level communication tools make up for these shortcomings: the user simply issues a set of communication commands provided by the operating system and can transfer large amounts of data efficiently, while the implementation details are hidden by the OS.

34.What advanced communication mechanisms are currently available?

Answer: Shared memory systems, message passing systems, and pipe communication systems.

35.What are the functions of the message queue communication mechanism?

Answer: (1) Form a message (2) Send a message (3) Receive a message (4) Mutual exclusion and synchronization.

36.Why are threads introduced in the OS?

Answer: Threads are introduced into the operating system to reduce the time and space overhead that programs incur during concurrent execution, so that the OS has better concurrency and higher CPU utilization. The process remains the basic unit of resource allocation, while the thread becomes the basic unit of system scheduling and dispatching.

37.Try to explain what attributes a thread has?

Answer: (1) Lightweight entity (2) Basic unit of independent scheduling and dispatch (3) Concurrent execution (4) Shared process resources.

38. Try to compare processes and threads in terms of scheduling, concurrency, resource ownership and system overhead.

answer:

(1) Scheduling. Threads are the basic unit of scheduling and assignment in the OS, and processes are only the basic unit of resource ownership.

(2) Concurrency. Processes can execute concurrently, and multiple threads of a process can also execute concurrently.

(3) Resource ownership. A process is always the basic unit of resource ownership, whereas a thread owns only the few resources essential at run time (such as its thread control block, program counter, registers and stack); it owns essentially no system resources of its own, but can access the resources belonging to its process.

(4) System overhead. The overhead the operating system incurs when creating, terminating and switching processes is significantly greater than the corresponding overhead for threads.

39. In a multithreaded OS, what kinds of synchronization mechanisms are usually provided to realize synchronization and communication between processes (threads)?

A: Synchronization allows multiple threads to execute concurrently by controlling program flow and accessing shared data. There are four synchronization models:

Mutexes, read-write locks, condition variables, and semaphores.

40.What is the difference between private and public semaphores for thread synchronization?

answer:

(1) Private semaphore. When a thread needs a semaphore to synchronize with other threads in the same process, it can call the create-semaphore command to create a private semaphore, whose data structure is stored in the application's own address space.

(2) Public semaphore. A public semaphore is set up to achieve synchronization between different processes, or between threads belonging to different processes. Its data structure is stored in a protected system storage area and is allocated and managed by the OS.

41.What are user-level threads and kernel-backed threads?

answer:

(1) User-level threads: threads that exist only in user space and do not need kernel support. Their creation, termination, and inter-thread synchronization and communication do not need to be implemented with system calls; switching between user-level threads usually takes place among the many threads of one application process and likewise needs no kernel support.

(2) Kernel-supported threads: threads that run with the support of the kernel. Whether they are threads of a user process or of a system process, their creation, termination and switching are all implemented by the kernel in kernel space. The kernel maintains a thread control block for each kernel-supported thread; it is through these control blocks that the kernel perceives the existence of a thread and controls it.

42.Try to explain the implementation of user-level threads.

Answer: User-level threads are implemented in user space, running on top of either a "runtime system" or "kernel-controlled threads". The runtime system is a collection of functions used to manage and control threads. Kernel-controlled threads, also called lightweight processes (LWPs), can obtain kernel services through system calls, so the LWP serves as an intermediary between user-level threads and the kernel.

43.Try to explain how the kernel supports threads.

Answer: When the system creates a new process, it allocates a per-task data area PTDA, which contains space for several thread control blocks (TCBs). Creating a thread allocates one TCB, fills in the relevant information, and allocates the necessary resources for it. When the TCBs in the PTDA have been used up and the process creates yet more threads, as long as the number of threads created does not exceed the system's limit, the system allocates new TCBs for them; when a thread is terminated, all of its resources and its TCB should be reclaimed.

Chapter Three: Processor Scheduling and Deadlock

1. What are the main tasks of high-level scheduling and low-level scheduling? Why introduce intermediate scheduling?

Answer: The main task of high-level (job) scheduling is to move jobs from the backup queue on external storage into memory according to a certain algorithm. The main task of low-level (process) scheduling is to save the processor context, select a process from the ready queue according to a certain algorithm, and then assign the processor to that process.

Intermediate scheduling is introduced mainly to improve memory utilization and system throughput: processes that temporarily cannot run are moved out to external storage so that they no longer occupy memory, and their state is changed to the "ready, swapped out" or suspended state; they are brought back in when they can run again and memory is available.

2. What are jobs, job steps, and job streams?

Answer: A job contains the usual program and data, and in addition is equipped with a job control statement; the system controls the execution of the program according to this statement. In batch systems, the job is the basic unit moved from external storage into memory.

A job step is one of several relatively independent but inter-related processing steps that a job must go through in sequence while it runs.

A job stream is formed when several jobs enter the system and are stored on external storage one after another, forming an input job stream; under the control of the operating system the jobs are then processed one by one, forming a processing job stream.

3. Under what circumstances do you need to use the job control block JCB? What's in it?

Answer: Whenever a job enters the system, the system creates a job control block (JCB) for it and, according to the job type, inserts it into the corresponding backup queue.

A JCB usually contains: 1) job identifier; 2) user name; 3) user account; 4) job type (CPU-bound, I/O-bound, batch, terminal); 5) job status; 6) scheduling information (priority, etc.); 7) resource requirements; 8) time of entering the system; 9) time processing started; 10) job completion time; 11) job exit time; 12) resource usage, etc.

4. How to determine how many jobs to accept and which jobs to accept in job scheduling?

Answer: The number of jobs that job scheduling admits into memory each time depends on the degree of multiprogramming. Which jobs are moved from external storage into memory depends on the scheduling algorithm used: the simplest is first-come, first-served; the more commonly used ones are shortest-job-first and job-priority-based scheduling.

5. Try to explain the main functions of low-level scheduling.

Answer: (1) Save the on-site information of the processor (2) Select the process according to a certain algorithm (3) Assign the processor to the process.

6. In the preemptive scheduling method, what is the principle of preemption?

Answer: The principles of preemption include: time slice principle, priority principle, short job priority principle, etc.

7. What are the guidelines to follow when choosing a scheduling method and scheduling algorithm?

answer:

(1) User-oriented criteria: short turnaround time, fast response time, guaranteed deadlines, priority criteria.

(2) System-oriented criteria: high system throughput, good processor utilization, and balanced utilization of various resources.

8. Which process (job) scheduling algorithms are used in batch processing system, time-sharing system and real-time system?

Answer: The scheduling algorithm of the batch processing system: short job priority, priority, high response ratio priority, multi-level feedback queue scheduling algorithm.

Scheduling algorithm of time-sharing system: time slice round-robin method.

Scheduling algorithms of real-time systems: Earliest Deadline First (EDF) and Least Laxity First (LLF).

9. What are static and dynamic priorities? On what basis is a static priority determined?

Answer: A static priority is a priority that is fixed when the process is created and remains unchanged throughout the process's entire run.

A dynamic priority is a priority assigned at process creation that may change as the process advances or as its waiting time increases; it can yield better scheduling performance.

The bases for determining a process's priority are: the process type, the process's resource requirements, and user requirements.

10. Compare the FCFS and SPF process scheduling algorithms.

Answer: Similarity: both algorithms can be used for job scheduling as well as process scheduling.

Difference: the FCFS algorithm each time selects from the backup queue one or more jobs that entered the queue earliest, loads them into memory, allocates resources, creates processes and inserts them into the ready queue; it favors long jobs/processes and is unfavorable to short ones. The SPF algorithm each time selects from the backup queue one or more jobs with the shortest estimated run time and loads them into memory to run; it favors short jobs/processes and is unfavorable to long ones.

11. In the round-robin (time-slice) algorithm, how should the size of the time slice be determined?

Answer: The time slice should be slightly larger than the time needed for one typical interaction. In general three factors should be considered: the system's requirement on response time, the number of processes in the ready queue, and the processing capability of the system.

12. Use an example to show why an ordinary priority scheduling algorithm is not suitable for real-time systems.

Answer: Real-time systems use many scheduling algorithms, mainly task-priority algorithms based on the tasks' start deadlines or on how urgent (how little laxity) the tasks have. An ordinary priority scheduling algorithm takes no account of deadlines, so it cannot satisfy the timeliness requirement of real-time scheduling; for example, a task whose deadline is imminent but whose statically assigned priority is low may miss its deadline while a higher-priority but less urgent task is running.

13. Why can the multilevel feedback queue scheduling algorithm satisfy the needs of all kinds of users fairly well?

Answer: (1) Terminal users mostly submit small interactive jobs; as long as the system lets these jobs finish within the time slice of the first queue, terminal users feel satisfied.

(2) Users of short batch jobs behave at first like terminal users: if a job finishes within one time slice in the first queue, it gets the same response time as a terminal job; a slightly longer job usually finishes after one time slice in each of the second and third queues, so its turnaround time is still short.

(3) Long batch jobs run in turn in queues 1, 2, ..., n and finally run in round-robin fashion, so their users need not worry that their jobs will never be processed. Therefore the multilevel feedback queue scheduling algorithm can satisfy the needs of many kinds of users.

14. Why in a real-time system, the system (especially the CPU) is required to have strong processing power?

Answer: There are usually multiple real-time tasks in a real-time system. If the processing capability of the processor is not strong enough, some real-time tasks may not be processed in time because the processor is too busy, resulting in unpredictable consequences.

15. According to the scheduling method, what kinds of real-time scheduling algorithms can be divided into?

Answer: It can be divided into non-preemptive and preemptive algorithms. The non-preemptive algorithm is further divided into non-preemptive round-robin and priority scheduling algorithms; the preemptive scheduling algorithm is further divided into preemptive priority based on clock interruption and immediate preemptive priority scheduling algorithm.

16. What is the earliest-deadline-first (EDF) scheduling algorithm? Give an example.

Answer: It is a task-priority scheduling algorithm in which priority is determined by each task's start deadline: the earlier the deadline, the higher the priority. The algorithm requires the system to maintain a ready queue of real-time tasks sorted by their deadlines.

Example (non-preemptive scheduling of aperiodic real-time tasks, Figure 3-9): four aperiodic tasks arrive one after another. The system first schedules task 1; while task 1 is executing, tasks 2 and 3 arrive. Because task 3's start deadline is earlier than task 2's, the system schedules task 3 after task 1. Meanwhile task 4 arrives, and its start deadline is also earlier than task 2's, so after task 3 finishes the system schedules task 4, and finally task 2.

Figure 3-9: The EDF algorithm used for non-preemptive scheduling

17. What is the least-laxity-first (LLF) scheduling algorithm? Illustrate it with an example.

Answer: The algorithm determines task priority according to how urgent a task is, i.e. how small its laxity is: the more urgent the task (the smaller its laxity), the higher the priority assigned to it, so that it is executed first. For example, if a task must be completed within 200 ms and its own run time is 100 ms, the scheduler must start it within 100 ms, so its laxity is 100 ms. As another example, if a task must be completed within 400 ms and needs 150 ms of run time, its laxity is 250 ms.
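
The laxity used by LLF can be written as a one-line computation. The sketch below (names are ours, for illustration) applies laxity = deadline - remaining run time - current time to the two tasks in the example:

#include <stdio.h>

/* laxity = completion deadline - remaining execution time - current time */
static long laxity(long deadline_ms, long remaining_ms, long now_ms) {
    return deadline_ms - remaining_ms - now_ms;
}

int main(void) {
    /* the two tasks from the example, measured from now = 0 */
    printf("task A laxity = %ld ms\n", laxity(200, 100, 0));  /* 100 ms */
    printf("task B laxity = %ld ms\n", laxity(400, 150, 0));  /* 250 ms */
    return 0;   /* LLF runs task A first, since its laxity is smaller */
}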

18. What is a deadlock? What are the causes and necessary conditions for deadlock?

Answer: Deadlock is a stalemate caused by multiple processes competing for resources during execution; once the processes are in this stalemate, without outside intervention none of them can make any further progress.

Deadlock is caused by competition for resources and by an improper order of process advancement. Its necessary conditions are: the mutual-exclusion condition, the hold-and-wait (request-and-hold) condition, the no-preemption condition, and the circular-wait condition.

19. Among the several methods to solve the deadlock problem, which method is the easiest to implement? Which approach makes the most resource utilization?

Answer: Among the four methods to solve deadlocks, namely prevention, avoidance, detection and release of deadlocks, deadlock prevention is the easiest to achieve;

Avoiding deadlocks maximizes resource utilization.

20. Please elaborate on the means by which deadlocks can be prevented.

Answer: (1) Break the "hold and wait" condition: if the system has enough resources, all the resources a process will ever need are allocated to it at once, so it never holds some resources while requesting others.

(2) Break the "no preemption" condition: when a process that already holds resources makes a new request that cannot be satisfied immediately, it must release all the resources it currently holds and apply for them again later when it needs them.

(3) Break the "circular wait" condition: number all resource types in a fixed order; every process must request resources strictly in increasing order of these numbers.

21. In the example of the banker's algorithm, if P0's request vector is changed from Request0(0,2,0) to Request0(0,1,0), can the system allocate the resources to it?

Answer: Yes. In the example, the total amounts of the three resource types are 10, 5 and 7, and the resource allocation at time T0 is as shown in the figure. The check proceeds according to the banker's algorithm:

① Request0(0,1,0) ≤ Need0(7,4,3);

② Request0(0,1,0) ≤ Available(2,3,0);

③ The system provisionally assumes that the resources can be allocated to P0 and modifies the Available, Allocation0 and Need0 vectors accordingly, giving the resource state shown in the figure;

④ Running the safety algorithm on the resulting state finds a safe sequence, so the state is still safe.

Therefore the system can allocate the requested resources to P0.

22. The following resource allocation occurs under the banker's algorithm. (1) Is this state safe? (2) If process P2 issues Request(1,2,2,2), can the system allocate the resources to it?

Answer: (1) The state is safe, because the safe sequence {P0, P3, P4, P1, P2} exists.

(2) The system cannot allocate the resources. The analysis is as follows:

① Request(1,2,2,2) ≤ Need2(2,3,5,6);

② Request(1,2,2,2) ≤ Available(1,6,2,2);

③ The system provisionally assumes the allocation can be made and modifies the Available, Allocation2 and Need2 vectors, which gives Available = (0,4,0,0);

④ Running the safety algorithm on this new state, Available(0,4,0,0) cannot satisfy the Need of any process, so no safe sequence can be found and the state is unsafe.

Therefore the request must be rejected and P2 must wait.
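
A compact C sketch of the safety check used in both exercises. The table in main() is filled with the values commonly used for question 22 after P2's request; treat those rows as an assumption for illustration, since the full table is not reproduced above:

#include <stdbool.h>
#include <stdio.h>

#define P 5   /* number of processes      */
#define R 4   /* number of resource types */

/* returns true and fills seq[] if a safe sequence exists */
static bool is_safe(int avail[R], int need[P][R], int alloc[P][R], int seq[P]) {
    int work[R];
    bool finish[P] = { false };
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int k = 0; k < P; k++) {                 /* place the k-th process in the sequence */
        int found = -1;
        for (int p = 0; p < P && found < 0; p++) {
            if (finish[p]) continue;
            bool ok = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { ok = false; break; }
            if (ok) found = p;
        }
        if (found < 0) return false;              /* nobody can finish: unsafe */
        for (int r = 0; r < R; r++) work[r] += alloc[found][r];
        finish[found] = true;
        seq[k] = found;
    }
    return true;
}

int main(void) {
    /* provisional state after granting P2's Request(1,2,2,2) (assumed table) */
    int avail[R]    = { 0, 4, 0, 0 };
    int alloc[P][R] = { {0,0,3,2}, {1,0,0,0}, {2,5,7,6}, {0,3,3,2}, {0,0,1,4} };
    int need [P][R] = { {0,0,1,2}, {1,7,5,0}, {1,1,3,4}, {0,6,5,2}, {0,6,5,6} };
    int seq[P];
    printf(is_safe(avail, need, alloc, seq) ? "safe\n" : "unsafe\n");  /* prints "unsafe" */
    return 0;
}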

Chapter Four

1. Why configure hierarchical storage?

Answer: Configuring several levels of memory lets the devices at the two ends of the memory hierarchy work in parallel. A multi-level storage system, in particular cache technology, is the best structural way to reduce the impact of memory bandwidth on system performance: the cache acts as a buffer memory that relieves the pressure of accesses on main memory, and increasing the number of registers in the CPU likewise greatly relieves the pressure on memory.

2. In what ways can a program be loaded into memory? What occasions are they suitable for?

Answer: (1) Absolute loading, suitable only for a single-program environment.

(2) Relocatable (static relocation) loading, suitable for a multiprogramming environment.

(3) Dynamic run-time loading, also used in a multiprogramming environment; it is needed when the program may be moved to another location in memory while it is running.

3. What is static linking? What is load-time dynamic linking and runtime dynamic linking? P120

Answer: Static linking means linking the object modules and the library functions they need into one complete load module before the program is run; this module is not taken apart afterwards.

Load-time dynamic linking means that the group of object modules obtained by compiling the user's source program is linked while it is being loaded into memory, i.e. linking is performed during loading.

Run-time dynamic linking means that the linking of certain object modules is postponed until those modules are actually needed during program execution; only then is the required module loaded and linked.

4. What work should be done during program linking?

Answer: The linker links the set of object modules produced by compilation, together with the library functions they need, into one complete load module. The main work is to modify the relative addresses inside the programs and to modify the external call symbols in the object programs so that they refer to the correct addresses within the load module.

5. In the dynamic partition allocation scheme, how should the free partitions be linked into a free-partition chain?

Answer: In the starting part of each partition, set some control information used for partition allocation together with a forward pointer for linking the partitions; at the tail of each partition set a backward pointer. Through these forward and backward pointers, all free partitions are linked into a doubly linked chain. When a partition has been allocated, its status bit is changed from "0" to "1".
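
A minimal C sketch of the node layout described above (the field names are ours, for illustration only):

#include <stddef.h>

/* header placed at the start of every partition */
struct partition_head {
    size_t size;                    /* partition size, including head and tail      */
    int    status;                  /* 0 = free, 1 = allocated                      */
    struct partition_head *prev;    /* forward link to the previous free partition  */
    struct partition_head *next;    /* link to the next free partition              */
};

/* trailer placed at the end of every partition */
struct partition_tail {
    size_t size;                    /* repeated size, so a neighbour can find the head */
    struct partition_head *head;    /* backward pointer to this partition's header     */
};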

6. Why is dynamic relocation introduced? How is it implemented?

Answer: During program execution, every time an instruction or a piece of data is accessed, the logical address of the program or data to be accessed must be converted into a physical address; dynamic relocation is introduced for this purpose.

The concrete implementation is to add a relocation register to the system, which holds the starting address of the program in memory. When the program runs, the memory address actually accessed is the sum of the relative (logical) address and the address held in the relocation register, thus realizing dynamic relocation.

7. When memory is reclaimed under the first-fit algorithm, which situations may arise? How should each be handled?

Answer: Four situations may arise when reclaiming memory under the first-fit algorithm:

(1) The reclaimed area adjoins a free area in front of it: merge the two, and change the size of the preceding free area to the sum of the two.

(2) The reclaimed area adjoins a free area behind it: merge the two, change the starting address of the following free area to the starting address of the reclaimed area, and set its size to the sum of the two.

(3) The reclaimed area adjoins free areas both in front and behind: merge the three partitions, and change the size of the preceding free area to the sum of the three.

(4) The reclaimed area adjoins no free area: create a new free-area table entry for it, fill in its starting address and size, and insert it into the free-area queue.

8. Let buddy_k(x) denote the buddy address of a block of size 2^k whose starting address is x. Write a general expression for buddy_k(x).

Answer: buddy_k(x) = x + 2^k, when x mod 2^(k+1) = 0; buddy_k(x) = x - 2^k, when x mod 2^(k+1) = 2^k.
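
The same expression in C (a small sketch; k is the block's size exponent):

#include <stdio.h>

/* buddy address of the block of size 2^k starting at address x */
unsigned long buddy(unsigned long x, unsigned k) {
    if (x % (1UL << (k + 1)) == 0)     /* x mod 2^(k+1) == 0    */
        return x + (1UL << k);         /* buddy lies just above */
    else
        return x - (1UL << k);         /* buddy lies just below */
}

int main(void) {
    printf("%lu\n", buddy(0, 3));      /* block of size 8 at 0 -> buddy at 8 */
    printf("%lu\n", buddy(8, 3));      /* block of size 8 at 8 -> buddy at 0 */
    return 0;
}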

9. Which allocation strategies are commonly used in partitioned storage management? Compare their advantages and disadvantages.

Answer: The common allocation strategies in partitioned storage management are: first fit, next fit (circular first fit), best fit, and worst fit.

First fit: it preserves the large free areas at high addresses, which benefits later large jobs; but the low-address part is repeatedly split up, leaving many small free areas that are hard to use, and every search starts from the low end, which increases system overhead.

Next fit (circular first fit): free partitions are distributed more evenly in memory and the search overhead is reduced; but large free partitions become scarce, so large jobs may not be loadable.

Best fit: each allocation uses the partition that best matches the requested size, but it leaves behind many small free areas in memory that are difficult to use.

Worst fit: the leftover free area is never too small, so the chance of producing unusable fragments is small, which benefits the allocation of small and medium jobs; but memory soon lacks large free areas, which is bad for allocating partitions to large jobs.

10. What are the benefits of introducing swapping into the system?

Answer: Swapping moves jobs that are temporarily not needed out to external storage, freeing memory space to load other jobs; jobs swapped out to external storage can later be swapped back in. Its purpose is to overcome memory shortage, and its benefit is to further improve memory utilization and system throughput.

11. To implement swapping, what functions should the system have?

Answer: The system should have three functions: swap space management, process swap out, and process swap in.

12. When swapping in units of processes, is the entire process swapped out each time? Why?

Answer: When swapping in units of processes, the entire process is not swapped out every time. This is because:

(1) Structurally speaking, a process is composed of a program segment, a data segment and a process control block, wherein some or all of the process control block always resides in memory and cannot be swapped out.

(2) The program segment and data segment may be shared by several processes, and they cannot be swapped out at this time.

13. What hardware support is required to implement paging memory management?

Answer: A page-table mechanism (including the page-table register) and an address-translation mechanism, usually augmented with a fast table (TLB), are required.

14. Explain in more detail that segmented storage management is introduced to meet the needs of users.

Answer:

(1) Convenient programming. Users usually divide their jobs into several segments according to their logical relations; each segment is addressed from 0 and has its own name and length, so the logical address to be accessed is given by a segment name and an offset within the segment.

(2) Information sharing. Programs and data are shared in terms of logical units of information. A page in a paging system is only a physical unit for storing information and has no complete meaning of its own, so it is not convenient to share; a segment, however, is a logical unit of information. To make sharing easy, storage management should match the way users organize their programs into segments.

(3) Information protection. Protection is likewise applied to logical units of information, so segmentation can implement the information-protection function more effectively and conveniently.

(4) Dynamic growth. In practice some segments, especially data segments, keep growing during use, and there is no way to know in advance exactly how much they will grow; segmented storage management handles this problem well.

(5) Dynamic linking. At run time, the object program corresponding to the main program is loaded into memory first and started; when a certain segment is called during execution, that segment is then loaded into memory and linked. Dynamic linking therefore also requires the segment as the unit of management.

15. How to implement address translation in the segmented page storage management mode with fast table?

Answer: After the CPU produces an effective (logical) address, the address-translation mechanism automatically sends the page number P to the high-speed associative registers and compares it with all the page numbers held in the fast table (TLB). If a matching page number is found, the page-table entry to be accessed is in the fast table, and the physical block number of the page can be read directly from the fast table and sent to the physical address register. If there is no matching entry in the fast table, the page table in memory must be accessed; after the entry is found, the physical block number read from it is sent to the address register, and at the same time the fast table is updated by storing this page-table entry in it. If the fast table is already full, the OS must first select a suitable old entry to replace.
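
A small C sketch of this lookup path; the TLB here is a simple linear-scan table, and the sizes and field names are illustrative assumptions:

#include <stdint.h>

#define PAGE_SHIFT 12                     /* 4 KB pages (assumption)      */
#define TLB_SIZE   8

typedef struct { int valid; uint32_t vpn, pfn; } tlb_entry;

static tlb_entry tlb[TLB_SIZE];
static uint32_t  page_table[1024];        /* vpn -> pfn, filled by the OS */
static int       next_victim = 0;         /* trivial replacement policy   */

uint32_t translate(uint32_t logical) {
    uint32_t vpn = logical >> PAGE_SHIFT;
    uint32_t off = logical & ((1u << PAGE_SHIFT) - 1);

    for (int i = 0; i < TLB_SIZE; i++)            /* 1. search the fast table      */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << PAGE_SHIFT) | off;

    uint32_t pfn = page_table[vpn];               /* 2. miss: consult the page table */

    tlb[next_victim] = (tlb_entry){ 1, vpn, pfn };/* 3. refill the fast table        */
    next_victim = (next_victim + 1) % TLB_SIZE;

    return (pfn << PAGE_SHIFT) | off;
}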

16. Why is it said that information is easier to share and protect in a segmented system than in a paging system?

Answer: In a paging system the pages of a program are stored discretely; to share and protect information, the corresponding pages must be matched one by one, which requires setting up a large number of page-table entries. In a segmented system each segment is addressed from 0 and occupies a contiguous logical address space; to share or protect a program, only one segment-table entry needs to be set up for the shared or protected segment, whose base address points to the same physical storage, so the one-to-one correspondence is established much more easily.

17. What is the difference between segmented and paged storage management?

answer:

(1) A page is a physical unit of information; paging is used to achieve discrete allocation in order to reduce the external fragmentation of memory and improve memory utilization. A segment is a logical unit of information, containing a group of information whose meaning is relatively complete.

(2) The size of a page is fixed and decided by the system, which divides the logical address into a page number and an in-page offset by hardware, so there can be only one page size in a system. The length of a segment, however, is not fixed; it depends on the program written by the user, the division usually being made by the compiler according to the nature of the information when it compiles the source program.

(3) The job address space in paging is one-dimensional, while in segmentation it is two-dimensional.

18. Try to make a comprehensive comparison between continuous allocation and discrete allocation.

answer:

(1) Continuous allocation assigns a contiguous address space to a user program; it includes the single contiguous allocation and the partitioned allocation methods. The single contiguous method divides memory into a system area and a user area; it is the simplest and is used only in single-user, single-task operating systems. Partitioned allocation comes in fixed and dynamic forms.

(2) Discrete allocation methods are divided into paging, segmentation and segment-with-paging storage management. Paged storage management is designed to improve memory utilization, segmented storage management is designed to meet the needs of users (programmers), and segment-with-paging management combines the two: it has the advantages of the segmentation system (ease of implementation, sharing, protection and dynamic linking) and, like a paging system, it removes external fragmentation and allocates memory to each segment discretely, making it a more effective storage-management method.

19. What are the characteristics of virtual memory? What is the most essential feature of it?

Answer: Virtual storage has three characteristics: multiplicity, swappability, and virtuality. The most essential characteristic is virtuality.

20. What hardware support is required to implement virtual memory?

Answer: (1) a request page (segment) table mechanism; (2) a page-fault (segment-fault) interrupt mechanism; (3) an address-translation mechanism.

21. Which key technologies are needed to realize virtual memory?

Answer:

(1) In a demand paging system, the demand-paging function and the page-replacement function are added on top of ordinary paging, forming a paged virtual storage system; a program (and its data) can start running with only a few of its pages loaded.

(2) In a demand segmentation system, the demand segment-loading function and the segment-replacement function are added on top of ordinary segmentation, forming a segmented virtual storage system; user programs and data can start running with only a few segments (rather than all segments) loaded.

22. In a demand paging system, what items should a page-table entry contain? What is the role of each?

Answer: A page-table entry should include: the page number, the physical block number, a status bit P, an access field A, a modified bit M, and the external-storage address.

The status bit P indicates whether the page has been loaded into memory, for the program to consult when it accesses the page. The access field A records how many times the page has been accessed within a period, or how long it has been since the page was last accessed, and is supplied to the replacement algorithm when it selects a page to evict. The modified bit M indicates whether the page has been modified after being brought into memory. The external-storage address gives the address of the page on external storage, usually a physical block number, and is used when the page is loaded.

23. In a demand paging system, from where should the required pages be brought into memory?

Answer: There are three cases for loading a faulted page in a demand paging system:

(1) When the system has enough swap space, all required pages are brought in from the swap area, which speeds up paging; before the process runs, the files related to it are copied from the file area to the swap area.

(2) When the system lacks enough swap space, files that will not be modified are brought in directly from the file area (and need not be written back when evicted); pages that may be modified are written to the swap area when they are evicted and later brought back in from the swap area when needed.

(3) The UNIX way: pages that have never run are loaded from the file area; pages that have run but were swapped out are brought in from the swap area next time. UNIX also allows page sharing, so a page requested by one process may already have been brought into memory by another process, in which case it need not be loaded again.

24. Which page replacement algorithms are commonly used in demand paging systems?

Answer: The page replacement algorithms in use include the optimal (OPT) replacement algorithm, the first-in-first-out (FIFO) replacement algorithm, the least recently used (LRU) replacement algorithm, the Clock replacement algorithm, the least frequently used (LFU) replacement algorithm, the page buffering algorithm, and so on.

25.In a demand paging system, which page allocation method is usually used? Why?

Answer: The fixed allocation method assigns each process a fixed number of physical blocks according to the process type (e.g., interactive) or the suggestions of programmers and system administrators, and this number does not change for the whole run. The variable allocation method has two forms, variable allocation with global replacement and variable allocation with local replacement; the former is easier to implement, while the latter is more efficient.

26. In a demand paging system using the LRU page replacement algorithm, the page reference string of a job is 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5. When the number of physical blocks M allocated to the job is 3 and 4 respectively, calculate the number of page faults and the page fault rate during the access, and compare the results. (The reference answer supplied with the book is wrong.)

Answer: When the number of physical blocks M allocated to the job is 3, there are 7 page faults, and the page fault rate is 7/12 = 0.583; when M is 4, there are 4 page faults, and the page fault rate is 4/12 = 0.333.

------- The above reference answer is wrong. The correct LRU solution is as follows:

Answer: When the number of physical blocks M allocated to the job is 3, there are 10 page faults, and the page fault rate is 10/12 ≈ 0.833; when M is 4, there are 8 page faults, and the page fault rate is 8/12 ≈ 0.667. Giving the job one more physical block therefore lowers its page fault rate; LRU is a stack algorithm and does not exhibit Belady's anomaly. (The figures 9/12 and 10/12 sometimes quoted for this exercise are what FIFO replacement produces on this reference string, not LRU.)
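
The counts above can be checked with a small simulation; this sketch is illustrative and not part of the textbook answer:

def lru_faults(refs, frames):
    # Simulate LRU replacement and return the number of page faults.
    memory = []                        # pages in memory; the most recently used page is kept last
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)        # hit: refresh the page's recency
        else:
            faults += 1                # miss: page fault
            if len(memory) == frames:
                memory.pop(0)          # evict the least recently used page
        memory.append(page)
    return faults

refs = [4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5]
print(lru_faults(refs, 3), lru_faults(refs, 4))   # prints: 10 8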

27.What is the hardware support required to implement the LRU algorithm?

Answer: Hardware support such as registers or a stack is required. Registers record how recently each page of the process in memory has been used, and the stack holds the page numbers of the pages currently in use.

28.Try to explain the basic principle of the improved Clock replacement algorithm.

Answer: Because the cost of swapping out a modified page is greater than that of an unmodified page, the improved Clock algorithm considers not only whether a page has been used recently but also the cost of replacing it: when selecting a victim page, it prefers a page that is both unused (not recently accessed) and unmodified.

29. Describe the page fault interrupt handling process in a demand paging system.

Answer: The page fault interrupt handling process in a demand paging system is as follows:

(1) Using the logical address in the currently executing instruction, look up the page table to determine whether the page is in main memory.

(2) If the page's presence flag is "0", a page fault interrupt is raised, and the interrupt mechanism hands the processor to the operating system's page fault handler by exchanging PSWs.

(3) The operating system handles the fault by checking the main memory allocation table to find a free main memory block, checking the page table to find the page's location on disk, and starting the disk to read in the page.

(4) Load the information read from the disk into the main memory block that was found.

(5) After the page has been loaded into main memory, modify the corresponding page table entry: fill in the main memory block occupied by the page and set the flag to "1", indicating that the page is now in main memory.

(6) Since the instruction that caused the page fault has not completed, it is re-executed after the page has been loaded.

The page fault interrupt handling process in a demand paging system is shown in the following figure:

30.How to achieve segment sharing?

Answer: In the segment table of each sharing process, an entry points to the starting address of the shared segment in memory. A corresponding data structure, the shared segment table, is configured; its entry for the segment contains a sharing-process count Count. Each time a process requests the shared segment, Count is increased by 1; each time a process releases it, Count is decreased by 1, and when it reaches 0 the system reclaims the physical memory of the shared segment and removes its entry from the shared segment table. The shared segment can be given different access permissions for different processes, and different processes may use different segment numbers to refer to it.
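
A rough sketch of the Count bookkeeping described above (illustrative only; the names are invented):

class SharedSegmentTable:
    def __init__(self):
        # shared segment name -> {"base": start address in memory, "count": number of sharing processes}
        self.segments = {}

    def attach(self, name, base):
        entry = self.segments.setdefault(name, {"base": base, "count": 0})
        entry["count"] += 1              # one more process uses the shared segment
        return entry["base"]             # the process records this base in its own segment table entry

    def detach(self, name):
        entry = self.segments[name]
        entry["count"] -= 1              # a process releases the shared segment
        if entry["count"] == 0:          # no sharer left:
            del self.segments[name]      # reclaim the segment's memory and cancel its table entry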

Chapter Five

1. Explain the composition of the device controller.

Answer: It consists of the interface between the device controller and the processor, the interface between the device controller and the device, and I/O logic.

2. In order to realize the communication between the CPU and the device controller, what functions should the device controller have?

A: Receiving and identifying commands; data exchange; identifying and reporting device status; address identification; data buffering; error control.

3. What is a byte-multiplexed channel? What are array select channels and array multiplex channels?

Answer: (1) Byte-multiplexed channel: a channel that works in byte-interleaved fashion. It usually contains many non-dedicated subchannels, from tens to hundreds; each subchannel is connected to one I/O device and controls its I/O operations. The subchannels share the main channel in a time-slice round-robin manner.

(2) Block (array) selection channel: data is transmitted in blocks at a very high rate, but the channel serves only one device's data transfer at a time.

(3) Block (array) multiplexed channel: it combines the high transfer rate of the block selection channel with the time-shared parallel operation of the byte-multiplexed channel's subchannels. It contains multiple non-dedicated subchannels and achieves both a high data transfer rate and high channel utilization.

4. How to solve the bottleneck problem caused by insufficient channels?

Answer: An effective way to solve the problem without adding channels is to increase the number of paths between devices and the host: connect each device to multiple controllers, and each controller to multiple channels. This multi-path arrangement removes the "bottleneck" and also improves system reliability, since the failure of an individual channel or controller no longer cuts off every path between a device and memory.

5. Try to compare VESA and PCI buses.

Answer: The design idea of the VESA bus was to capture the market at a low price. Its bus width is 32 bits and its maximum transfer rate is 132 MB/s, and it was widely used in 486 microcomputers. Its disadvantages are that it can connect only 2 to 4 devices and that the controller has no buffer, so it cannot keep up with improvements in processor speed and does not support Pentium machines.

The PCI bus inserts a management layer between the CPU and the peripherals, which coordinates data transfers and provides a consistent interface. The management layer is equipped with data buffers, amplifies the drive capability of the lines, supports up to 10 peripherals, and works with CPUs running at high clock frequencies; its maximum transfer rate is 132 MB/s. It can be bridged to traditional buses such as ISA and EISA, supports the Pentium's 64-bit system, and is a bus developed for the new generation of microprocessors such as the Pentium.

6. What are the main factors driving the development of I/O control?

Answer: The main driving force behind the development of I/O control has been to minimize the host's involvement in I/O control, freeing the host from tedious I/O chores so that it can devote more time and effort to its data processing tasks. At the same time, the introduction of the interrupt mechanism, the appearance of the DMA controller, and the successful development of channels provided the technical support that made this development possible.

7. What kinds of I/O control methods are there? What kind of occasions are they suitable for?

Answer: There are four I/O control methods.

(1) Programmed I/O (busy-waiting) mode: early computers had no interrupt mechanism, so the processor controlled I/O devices by programmed I/O, i.e., busy-waiting.

(2) Interrupt-driven I/O control mode: suitable for computer systems with interrupt mechanisms.

(3) Direct memory access (DMA) I/O control mode: suitable for computer systems with DMA controllers.

(4) I/O channel control mode: in computer systems with channel programs.

8. Try to explain the workflow of DMA.

Answer: Take reading data from disk as an example. When the CPU wants to read a block from disk, it first issues a read command to the disk controller; the command is placed in the command register CR. At the same time it sends the starting target address in memory for the data into the memory address register MAR, the number of bytes to be read into the data counter DC, and the source address on the disk directly into the I/O control logic of the DMA controller. It then starts the DMA controller to transfer the data and turns to other tasks; the entire data transfer is controlled by the DMA controller. The figure below shows the DMA workflow.

9. What is the main reason for introducing buffering?

Answer: The main reasons for introducing buffering are:

(1) Alleviate the contradiction between the speed mismatch between the CPU and the I/O device

(2) Reduce the interrupt frequency of the CPU and relax the restriction on the interrupt response time

(3) Improve the parallelism between CPU and I/O devices

10. In the case of single buffering, why does the system take max(C,T)+M to process a piece of data?

Answer: When a block device performs input, a block of data is first read from the disk into the buffer, taking time T; the operating system then copies the buffer's contents to the user area, taking time M; the CPU then processes the block, taking time C. With a single buffer, the disk's transfer of the next block into the buffer and the CPU's processing of the current block can proceed in parallel, so the system's processing time per block is max(C, T) + M.

11. Why, in the case of double buffering, is the system's processing time for a block of data approximately max(T, C)?

Answer: The device spends time T filling one buffer and can then go on to fill the other; the operating system spends time M moving the data from a full buffer to the user area, after which the CPU processes the data in the user area, taking time C. Moving data from the buffer to the user area must be serialized with the CPU's processing of that data, but both can proceed in parallel with the device filling the other buffer, so the time per block is about max(C + M, T). Since M, the time to move a block within memory, is very short and can be ignored, the system's processing time per block is approximately max(C, T).
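
A small numeric illustration of the two formulas, with made-up timings (the values of T, M and C are not from the exercise):

T, M, C = 100, 10, 50            # transfer, move-to-user-area, and CPU processing times (same unit)

single = max(C, T) + M           # single buffering: transfer and processing overlap, then the copy
double = max(C + M, T)           # double buffering: copy + processing overlap the next transfer
print(single, double)            # prints: 110 100, i.e. roughly max(C, T) once M is negligible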

12. Try to draw a picture to illustrate the situation when using multibuffering for output.

Answer: The schematic diagram of multi-buffering for output is as follows:

13. Try to explain the working conditions of receiving input working buffer and extracting output working buffer.

answer:

① Collecting input: when the input process needs to input data, it calls the GetBuf(EmptyQueue) procedure to take an empty buffer from the head of the EmptyQueue and uses it as the collecting-input work buffer Hin. Data is then read into it; when it is full, the PutBuf(InputQueue, Hin) procedure is called to hang the buffer on the tail of the input queue InputQueue.

② Extracting output: when data is to be output, the GetBuf(OutputQueue) procedure is called to take a buffer full of output data from the head of the output queue as the extracting-output work buffer Sout. After the data has been extracted, the PutBuf(EmptyQueue, Sout) procedure is called to hang the buffer on the tail of the empty-buffer queue EmptyQueue.
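
The two procedures can be pictured with ordinary queues; a minimal sketch assuming GetBuf/PutBuf simply take from the head and append to the tail:

from collections import deque

EmptyQueue, InputQueue, OutputQueue = deque(["buf0", "buf1"]), deque(), deque()

def GetBuf(queue):
    return queue.popleft()           # remove a buffer from the head of the queue

def PutBuf(queue, buf):
    queue.append(buf)                # hang the buffer on the tail of the queue

Hin = GetBuf(EmptyQueue)             # collecting input: take an empty buffer as Hin,
PutBuf(InputQueue, Hin)              # fill it, then hang it on the tail of InputQueue
# Extracting output is symmetric: Sout = GetBuf(OutputQueue); ...; PutBuf(EmptyQueue, Sout)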

14. What are the safe allocation method and the unsafe allocation method?

Answer:

① Safe allocation: whenever a process issues an I/O request it enters the blocked state, and it is woken only when the I/O operation completes. With this policy, once a process has obtained a device it blocks, so it cannot request any further resource, and while it is running it holds no resource. This abandons the "request and hold" condition for deadlock, so the allocation is safe. The disadvantage is that the process advances slowly, because the CPU and the I/O devices work serially.

② Unsafe allocation: the process keeps running after issuing an I/O request and may issue a second and a third I/O request as needed; it blocks only when a requested device is already held by another process. The advantage is that a process can operate several devices at once and advances quickly. The disadvantage is that the allocation is unsafe, since the "request and hold" condition may arise and deadlock may occur. Therefore the device allocation routine must include a safety check that determines whether the current allocation could lead to deadlock, and the allocation is made only when the check shows it is safe.

15. Why is device independence introduced? How is it achieved?

Answer: To improve their adaptability and scalability, modern operating systems implement device independence (device transparency): an application is independent of the specific physical device it uses and requests a class of device by a logical device name. Device independence brings two benefits: (1) flexibility in device allocation; (2) ease of implementing I/O redirection.

To realize device independence, the concepts of logical device and physical device are introduced. Applications request devices by logical device name, while the system uses physical device names at run time. Since drivers are software closely tied to the hardware, a layer of device-independent software must be placed above the drivers to perform the operations common to all devices and to convert logical device names into physical device names (a logical unit table is set up for this purpose), and to provide a uniform interface to the user-level (or file-level) software, thereby achieving device independence.

16. How should exclusive devices be allocated when considering device independence?

Answer: When considering the independence of the equipment, the exclusive equipment should be allocated according to the following steps:

(1) The process makes an I/O request with a logical device name.

(2) Using the logical unit table, obtain the pointer to the system device table entry for the physical device corresponding to the requested logical device.

(3) Search the system device table for a device control table whose device is of the requested type, free, and safe to allocate, and allocate that device to the requesting process; if none is found, the process waits until it is woken up and a device can be allocated.

(4) From the device control table, find the controller control table of the controller to which the device is connected and judge from its status field whether the controller is busy. If busy, wait; otherwise allocate the controller to the process.

(5) From the controller control table, find the channel control table of the channel to which the controller is connected and judge whether the channel is busy. If busy, wait; otherwise allocate the channel to the process.

(6) Only when the device, the controller and the channel have all been allocated successfully is the device allocation considered successful; the device can then be started for data transfer.

17. What is device virtualization? What are the key technologies that are relied upon to realize device virtualization?

Answer: Device virtualization means transforming an exclusive (dedicated) device into a shareable virtual device through some technical means.

A virtual device is a physical device that, with virtualization technology (such as SPOOLing), appears as multiple logical devices. A virtual device is a shareable device that can be allocated to several processes at the same time, while the order in which these processes actually access the physical device is controlled by the system.

18. Try to explain the composition of the SPOOLing system.

Answer: The SPOOLing system consists of three parts: input well and output well, input buffer and output buffer, input process SPi and output process SPo.

19. When implementing background printing, what services should the SPOOLing system provide to processes requesting I/O?

Answer: When implementing background printing, the SPOOLing system should provide the following services for processes requesting I/O:

(1) The output process applies for a free disk area in the output well, and sends the data to be printed into it;

(2) The output process applies for a blank user print form for the user process, fills in the printing requirements, and hangs the form to the request print queue.

(3) Once the printer is free, the output process will take out a request print form from the head of the request print queue, and transfer the data to be printed from the output well to the memory buffer according to the requirements in the form, and then print it by the printer.

20. Describe the characteristics of device drivers.

Answer: The device driver has the following characteristics:

(1) It is a communication program between the request I/O process and the device controller;

(2) The driver is closely related to the characteristics of the I/O device;

(3) The driver is closely related to the I/O control method;

(4) The driver program is closely related to the hardware, some programs are written in assembly language, and the basic part is often solidified in ROM.

21. Try to explain what functions a device driver should have.

Answer: The main functions of a device driver include:

(1) Transform the received abstract requirements into specific requirements;

(2) Check the legitimacy of the user's I/O request, understand the status of the I/O device, pass relevant parameters, and set the working mode of the device;

(3) Issue the I/O command, start the assigned I/O device, and complete the specified I/O operation;

(4) Timely respond to the interrupt request sent by the controller or channel, and call the corresponding interrupt handler according to the interrupt type;

(5) For computers with channels, the driver program should also automatically form channel programs according to user I/O requests.

22. What tasks do device interrupt handlers usually need to complete?

Answer: The device interrupt handler usually needs to complete the following tasks:

(1) Wake up the blocked driver process;

(2) Protect the CPU environment of the interrupted process;

(3) Analyze the cause of the interruption and transfer to the corresponding equipment interruption processing program;

(4) Interrupt processing;

(5) Resume the interrupted process.

23. What are the components of disk access time? How should each part time be calculated?

Answer: Disk access time consists of three parts: seek time Ts, rotational delay time Tr, and transfer time Tt.

(1) Ts is the sum of the arm's start-up time s and the time needed to move the head across n tracks, i.e. Ts = m × n + s, where m is the time to cross one track.

(2) Tr is the time for the specified sector to rotate under the head, on average half a revolution. For a 15000 r/min hard disk Tr is 2 ms; for a floppy disk at 300 or 600 r/min Tr is 50-100 ms.

(3) Tt is the time taken to read data from or write data to the disk. It depends on the number of bytes b read/written each time, the rotation speed r and the number of bytes per track N: Tt = b / (rN).
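
A worked example of the three components, using hypothetical drive parameters (not taken from the exercise):

def disk_access_time(s, m, n, r, N, b):
    # s: arm start-up time, m: time to cross one track, n: tracks crossed,
    # r: revolutions per second, N: bytes per track, b: bytes transferred (times in seconds)
    Ts = m * n + s                # seek time
    Tr = 1 / (2 * r)              # average rotational delay: half a revolution
    Tt = b / (r * N)              # transfer time
    return Ts + Tr + Tt

# 15000 r/min disk, 2 ms start-up, 0.1 ms per track, 40 tracks crossed,
# 512 KB per track, one 4 KB block read:
print(disk_access_time(s=0.002, m=0.0001, n=40, r=15000 / 60, N=512 * 1024, b=4 * 1024))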

24. What are the commonly used disk scheduling algorithms? What does each algorithm give priority to?

Answer: The commonly used disk scheduling algorithms are first-come-first-served, shortest-seek-time-first and the SCAN algorithms.

(1) The first-come-first-served algorithm serves requests simply in the order in which processes issued them;

(2) The shortest-seek-time-first algorithm gives priority to the request whose track is closest to the track the head is currently on;

(3) The SCAN algorithm considers both the distance between the requested track and the current track and the head's current direction of movement, serving first the nearest request that lies in that direction.
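
A sketch comparing the head movement of the first two policies on an invented request sequence (the track numbers and the starting position are hypothetical):

def fcfs(start, requests):
    # Serve requests strictly in arrival order and total up the head movement.
    pos, moved = start, 0
    for track in requests:
        moved += abs(track - pos)
        pos = track
    return moved

def sstf(start, requests):
    # Always serve the pending request closest to the current head position.
    pending, pos, moved = list(requests), start, 0
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))
        pending.remove(track)
        moved += abs(track - pos)
        pos = track
    return moved

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(fcfs(100, reqs), sstf(100, reqs))    # prints: 498 248 - SSTF moves the head far less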

25. Why introduce disk cache? What is disk caching?

Answer: At present, the I/O speed of the disk is much lower than the access speed of the memory, usually 4-6 orders of magnitude lower. As a result, disk I/O has become the bottleneck of computer systems. In order to improve the speed of disk I/O, disk cache is introduced.

Disk cache refers to using the storage space in the memory to temporarily store information in a series of disk blocks read from the disk.

26. When designing disk cache, how to achieve data delivery?

Answer: Data delivery means passing data from the disk cache to the requesting process. When a process asks to access the data of a disk block, the kernel first checks the disk cache for a copy of that block. If a copy is there, the data is delivered directly from the cache, avoiding a disk access and making this access 4-6 orders of magnitude faster; otherwise the data is first read from the disk, delivered to the requesting process, and also placed in the cache so that the next access can read it directly.

27.What are read ahead, write delayed and virtual disks?

Answer: Read-ahead means that while the current disk block is being read, the next block that is likely to be accessed is also read into the buffer, so that when it is needed it can be read directly from the buffer without starting the disk again.

Delayed write means that when a block is to be written back, the data in its buffer is not written to disk immediately, in case it is accessed again soon; instead the buffer is marked "delayed write" and hung at the tail of the free buffer queue. Its data is written out only when the buffer moves to the head of the free queue and is about to be reallocated. As long as a delayed-write block is still in the free buffer queue, any request for it can read from or write to the buffer directly without going to the disk.

A virtual disk, also called a RAM disk, uses memory space to emulate a disk. Its device driver accepts all standard disk operations, but they are performed in memory rather than on a disk, so it is much faster.

28. How does a redundant array of inexpensive disks improve disk access speed and reliability?

Answer: A redundant array of inexpensive disks (RAID) uses one disk array controller to manage and control a group of disk drives (from several to dozens), forming a highly reliable, fast, large-capacity disk system.

The operating system treats the group of physical disk drives in a RAID as a single logical disk drive. User data and system data can be distributed across all the disks of the array and transferred in parallel, which greatly reduces data transfer time and improves reliability.

Chapter Six

1. What are data items, records and files?

Answer: ① Data items are divided into basic data items and composite data items. A basic data item is the character set describing one attribute of an object and has three properties: a data name, a data type and a data value. A composite data item is made up of several basic data items.

② A record is a collection of related data items used to describe the attributes of an object in some respect.

③ A file is a named collection of related information.

2. The file system model can be divided into three layers; explain the basic content of each layer.

Answer: The first layer: the objects and their attributes (files, directories, and the storage space of disks or tapes);

The second layer: the collection of software that manipulates and manages those objects (the I/O control layer, i.e., device drivers; the basic file system, i.e., the physical I/O layer; the basic I/O supervisor or file-organization module layer; and the logical file system layer);

The third layer: the file system interface (the command interface / graphical user interface and the program interface).

3. What are the main operations a user can perform on files?

Answer: Users operate on files through the system calls provided by the file system.

(1) Basic file operations: create, delete, read, write, truncate, set the read/write position, etc.;

(2) File open and close operations: the first step searches the file directory to find the specified file's attributes and its location on external storage; the second step performs the corresponding read/write operations on the file;

(3) Other file operations: operations on file attributes, directory operations, and the system calls implementing file sharing and file system operations, etc.

4. What is a logical file? What is a physical file?

Answer: A logical file is a view of the data stored in a physical file; it contains no actual data, only the way the data in the physical file are referred to. A physical file, also called the file storage structure, is the form in which the file is organized and stored on external storage.

5.如何提高对变长记录顺序文件的检索速度?

Answer: The basic method is to build an index table for the variable-length record sequential file, taking the length of each record in the main file and a pointer to the record (i.e., its first address in the logical address space) as the contents of the corresponding index entry. Since the index table is itself a sequential file with fixed-length records, once it is sorted by record key it provides convenient and fast direct access to the main file. If the file is large, search efficiency can be improved further by building grouped, multi-level indexes.

6. Explain how to retrieve index files and index sequential files.

Answer: ① To retrieve an indexed file, first use the key supplied by the user (program) and binary search to find the corresponding entry in the index table; then use the record pointer in that entry to access the record.

② To retrieve an indexed sequential file, first use the supplied key to search the index table and find the index entry of the record group that contains the record, obtaining the position of the first record of that group in the main file; then search the main file sequentially from there to find the required record.

7. Try to compare indexed files and indexed sequential files in terms of retrieval speed and storage cost.

Answer: For an indexed file, an index entry is set up for every record of the main file, so the storage overhead is proportional to N, and retrieving the record with a given key requires searching N/2 entries on average. For an indexed sequential file, one index entry is set up for each record group; if the N records are divided into √N groups of √N records each, the storage overhead is about √N index entries, and retrieving a record with a given key requires about √N probes on average (half of the index table plus half of one group).

8. Explain the structure and advantages of sequential files.

Answer: The first form is the string structure, in which the order of the records has nothing to do with their keys. The second is the sequential structure, in which all the records in the file are arranged by key, for example by key length or in alphabetical order.

Sequential files perform best when records are accessed in batches, where their access efficiency is the highest of all logical file forms; moreover, only sequential files can be stored on tape and still work effectively.

9. Which linking method is commonly used in linked files? Why?

Answer: There are two linking methods: implicit linking and explicit linking. With implicit linking, each directory entry contains pointers to the first and last disk blocks of the file, and each block holds a pointer to the next block. With explicit linking, the pointers linking the file's physical blocks are stored explicitly in a link table (such as the FAT) kept in memory. Explicit linking is the commonly used method, because the chain is followed in the memory-resident table rather than by reading the disk blocks one by one, which greatly speeds up lookup and also improves reliability.

10. There are two files A and B in MS-DOS, A occupies 11, 12, 16 and 14 four disk blocks; B occupies 13, 18 and 20 three disk blocks. Try to draw the links between the disk blocks in file A and B and the FAT situation.

Answer: As shown in the figure below.

11. What physical structure does the NTFS file system use for files?

Answer: In the NTFS file system, clusters are used as the basic unit of disk space allocation and recovery. A file occupies several clusters, and a cluster belongs to only one file.

12. Assume a file system organized like MS-DOS, with 64K pointers in the FAT and a disk block size of 512 B. Can this file system manage a 512 MB disk?

Solution: 512 MB / 512 B = 1M disk blocks, and each disk block needs one FAT pointer, so 1M pointers would be required. With only 64K pointers, this file system cannot manage a 512 MB disk.

13. For quick access and easy update, which file organization method should be used when the data is in the following form.

⑴Infrequently updated, often random access; ⑵Frequently updated, often accessed in a certain order; ⑶Often updated, often random access;

Answer: The three cases should adopt, respectively, (1) the sequential structure, (2) the indexed sequential structure, and (3) the indexed structure.

14. In UNIX, if the size of a disk block is 1KB, each disk block number occupies 4 bytes, that is, each block can hold 256 addresses. Please convert the byte offsets of the following files to physical addresses.

⑴9999; ⑵18000;⑶420000

Answer: First convert the byte offset within the logical file into a logical block number and an offset within the block: the quotient of [byte offset] / [block size] is the logical block number and the remainder is the offset within the block. In the index node (FCB), address items 0-9 are direct addresses, address item 10 is the single indirect address, address item 11 is the double indirect address, and address item 12 is the triple indirect address.

Then convert the logical block number into a physical block number: using the multi-level index structure, follow the direct or indirect index in the index node, according to the logical block number, to find the corresponding physical block number.

(1) 9999 / 1024 = 9 remainder 783, so the logical block number is 9. The physical block number is obtained directly from address item 9, and the offset within the block is 783.

(2) 18000 / 1024 = 17 remainder 592, so the logical block number is 17. Since 10 ≤ 17 < 10 + 256, the physical block number is obtained through one level of indirection via address item 10, and the offset within the block is 592.

(3) 420000 / 1024 = 410 remainder 160, so the logical block number is 410. Since 410 ≥ 10 + 256 = 266, address item 11 is used: it points to the first-level indirect block, from which the second-level indirect block is obtained, and from that the physical block number is found; the offset within the block is 160.
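
The three conversions can be reproduced with a small helper; the thresholds follow the address-item layout described above:

BLOCK = 1024      # bytes per disk block
PTRS = 256        # 4-byte block numbers that fit in one block

def locate(offset):
    # Map a byte offset to (logical block number, offset in block, index level used).
    block, within = divmod(offset, BLOCK)
    if block < 10:
        level = "direct (address items 0-9)"
    elif block < 10 + PTRS:
        level = "single indirect (address item 10)"
    elif block < 10 + PTRS + PTRS * PTRS:
        level = "double indirect (address item 11)"
    else:
        level = "triple indirect (address item 12)"
    return block, within, level

for off in (9999, 18000, 420000):
    print(off, locate(off))
# 9999 -> (9, 783, direct); 18000 -> (17, 592, single indirect); 420000 -> (410, 160, double indirect)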

15. What is an index file? Why introduce multi-level indexes?

Answer: An indexed file is a file for which, because its records are of variable length, an index table is built containing one entry per record. Indexed non-sequential files are usually simply called indexed files. The purpose of the index is to make user access faster; a multi-level index structure manages a large index effectively by splitting it into several levels according to how it is accessed.

16. Explain the hybrid index allocation method used in UNIX systems.

Answer: The hybrid index allocation method combines several index allocation methods; typically direct addressing is used together with single indirect, double indirect and even triple indirect index allocation. In the index nodes of UNIX System V and BSD UNIX, 13 address items are set, iaddr(0)-iaddr(12), divided into direct addresses and indirect addresses.

17. What are the main requirements for directory management?

Answer: Realize access by name, improve the speed of searching directories, file sharing, and allow files with duplicate names.

18. Can the main requirements for directory management be met by adopting a single-level directory? Why?

Answer: No. A single-level directory creates only one directory table in the entire file system, and each file occupies a directory entry, which includes file name, file extension, file length, file type, file physical address, status bits and other file attributes.

A single-level directory realizes only the basic function of directory management, access by name; it cannot meet the requirements of search speed, duplicate file names and file sharing.

19. What are the currently widely used directory structures? What are its advantages?

Answer: Modern operating systems use a multi-level directory structure. The basic features are fast query speed, clear hierarchical structure, and easy implementation of file management and protection.

20. What are the advantages of the Hash search method? What are the limitations?

Answer: In the Hash search method, the system converts the file name supplied by the user into an index into the file directory and uses that value to locate the directory entry, which effectively increases directory search speed. Its limitation is that it applies only to file names that contain no wildcard characters.

21. In the Hash retrieval method, how is the "conflict" problem solved?

Answer: When the directory is searched by the Hash method, if the directory entry selected by the hash value is empty, the specified file does not exist in the system. If the file name in that entry matches the specified name, the target file has been found, together with its physical address. If the file name in the entry does not match, a conflict has occurred: another Hash transformation is applied to form a new index value, and the search returns to the first step and is repeated.
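
A sketch of the lookup with conflict handling; here the "new index value" is simply the next slot (linear re-probing), which is one possible choice rather than necessarily the book's:

def hash_lookup(table, name):
    # table: list of directory entries ({"name": ..., "address": ...}) or None for empty slots.
    size = len(table)
    h = hash(name) % size                    # initial index into the directory table
    for _ in range(size):
        entry = table[h]
        if entry is None:
            return None                      # empty slot: the specified file does not exist
        if entry["name"] == name:
            return entry["address"]          # names match: physical address of the file found
        h = (h + 1) % size                   # conflict: form a new index value and search again
    return None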

22. Explain the retrieval process of the linear search method in a tree-structured directory, and give the corresponding flow chart.

Answer: In a tree-structured directory, the path name supplied by the user is broken into several component names. Starting from the root directory (or the current directory), the first component name is compared in turn with the entries of that directory table; when a matching entry is found, the directory (or index node) it points to is fetched, and the next component name is compared against that directory in the same way. This continues until the last component is matched, which yields the FCB or index node of the target file and the search succeeds; if at any step no matching entry is found, the search stops and reports that the specified file does not exist.

23. A computer system uses the bitmap shown in Figure 6-33 to manage free disk blocks, with a disk block size of 1 KB. Disk blocks are now to be allocated to a file; explain the specific allocation process.

Answer: The process of allocating a number of disk blocks is as follows:

⑴ Scan the bitmap sequentially and find the first bit whose value is 0; its row number is i = 3 and its column number is j = 3.

⑵ Convert the found bit into the corresponding disk block number: b = (i - 1) × 16 + j = (3 - 1) × 16 + 3 = 35.

⑶ Modify the bitmap by setting map[3, 3] = 1, and allocate the disk block.

Similarly, the second 0 bit is found at row i = 4, column j = 7, corresponding to block number (4 - 1) × 16 + 7 = 55; set map[i, j] = 1 and allocate that block as well.

24. The disk file space of an operating system has 500 blocks in total. If a bitmap with a word length of 32 bits is used to manage the disk space, answer: (1) How many words does the bitmap need?

(2) What is the block number corresponding to bit j of word i?

(3) Give the workflow for allocating and returning a block.

Answer: (1) Number of words required by the bitmap: ⌈500/32⌉ = 16 words.

(2) Block number b = (i - 1) × 32 + j.

(3) Allocation: scan the bitmap sequentially, find a free block (a 0 bit), allocate it, and set map[i, j] = 1.

Return: from the number of the block being returned, work out its row and column in the bitmap and set map[i, j] = 0.
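
The allocation and return workflows of (3), written out for 32-bit words; a sketch using the same 1-based (i, j) numbering as (2):

WORD = 32                                       # bits per bitmap word

def allocate(bitmap):
    # Scan the bitmap for the first 0 bit, set it, and return the block number b = (i-1)*32 + j.
    for i, word in enumerate(bitmap, start=1):
        for j in range(1, WORD + 1):
            if not (word >> (WORD - j)) & 1:        # bit j of word i is 0: the block is free
                bitmap[i - 1] |= 1 << (WORD - j)    # map[i, j] = 1
                return (i - 1) * WORD + j
    return None                                 # no free block left

def release(bitmap, b):
    # Return block b: compute its word and bit position and clear the bit (map[i, j] = 0).
    i, j = divmod(b - 1, WORD)
    bitmap[i] &= ~(1 << (WORD - 1 - j))

bitmap = [0] * 16                               # 16 words cover the 500 blocks
print(allocate(bitmap))                         # prints: 1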

25.Which allocation methods are commonly used to manage free disk space? What distribution method is used in the UNIX system?

Answer: The free block table method, the free linked list method, the bitmap method and the grouped linking method. The UNIX system uses the grouped linking method.

26.What are the advantages of inode-based file sharing?

Answer: The advantage is that establishing a new shared link does not change the file's ownership; only the shared-link counter in the index node is increased by 1, so the system knows how many directory entries point to the file. The disadvantage is that the owner cannot simply delete a file that is still shared, otherwise the other sharers' directory entries would dangle and errors would occur.

27.What are the advantages of file sharing based on symbolic links?

A: It is possible to link files in computers anywhere in the world through the network.

28.What fault-tolerant measures are included in the first-level system fault-tolerant technology? What is read-after-write verification?

Answer: The first-level system fault-tolerant technology includes fault-tolerant measures such as double directories, double file allocation tables, and read-after-write verification.

Read-after-write verification means that every time a data block is written from the memory buffer to the disk, the block is immediately read back from the disk into another buffer and compared with the data still held in the memory buffer. If the two agree, the write is considered successful and the next block is written; otherwise the block is rewritten. If they still disagree after rewriting, the disk block is considered defective, and the data that should have been written to it is written to a block in the hot-fix redirection area instead.

29.What fault-tolerant measures are included in the second-level system fault-tolerant technology? Draw a picture to illustrate it.

Answer: The second-level fault-tolerant technology includes two fault-tolerant measures: disk mirroring and disk duplexing. The diagram is as follows:

30. What is a transaction? How is the atomicity of a transaction guaranteed?

Answer: A transaction is a program unit used to access and modify various data items.

To guarantee atomicity, when a transaction performs modifications on a batch of data it must either complete all of them, replacing the original data with the modified data, or change none of them at all, so that the consistency of the original data is preserved.

31.What is the purpose of introducing checkpoints? How to recover after the introduction of checkpoints?

Answer: The purpose of introducing checkpoints is to allow the transaction records in the transaction record table (log) to be cleaned up periodically.

Recovery is carried out by a recovery routine. It first searches the transaction record table to determine the last transaction Ti that started executing before the most recent checkpoint. After finding Ti, it searches back through the table for the first checkpoint record and, starting from that checkpoint, scans the subsequent transaction records, applying redo or undo processing to each of them as appropriate.

32.Why introduce shared locks? How to use mutex or shared lock to achieve transaction order?

Answer: Shared locks are introduced to improve efficiency. With both exclusive (mutex) locks and shared locks defined on an object Q, if transaction Ti wants to read Q it only needs to obtain a shared lock on Q: if Q is already held under an exclusive lock Ti must wait, otherwise it obtains the shared lock and reads Q. If Ti wants to write Q, it must obtain Q's exclusive lock: if it fails it waits; if it succeeds it holds the exclusive lock and writes Q.

33.When there are duplicate files in the system, how to ensure their consistency?

Answer: Two methods can be used: one is to make the same modification to all duplicate files, and the other is to replace all duplicate files with newly modified files.

34. How is the consistency of disk block numbers checked? What situations may be found during the check?

Answer: To check the consistency of disk block numbers, first initialize every entry of the counter tables to 0. The first group, consisting of N free-block-number counters, counts the block numbers read from the free block table; the second group, consisting of N data-block-number counters, counts the block numbers read from the file allocation tables, i.e., the blocks assigned to files. If the corresponding counts in the two groups are complementary (each block is counted exactly once, in one group or the other), the data are consistent; otherwise an error has occurred.

What can happen during inspection:

(1) If the counts for block K are 0 in both groups, block K is missing; its block number should be added to the free block table.

(2) If the count for block K in the free-block-number counters is 2, block K appears twice in the free block table; one of the duplicate free entries should be deleted.

(3) If the count for block K in the free-block-number counters is 0 while its count in the data-block-number counters is greater than 1, the error is serious: the block is assigned to more than one file and data may be lost. This must be reported to the system immediately for handling.
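
The two counter groups can be sketched as follows; the block numbers and table formats are invented for illustration:

from collections import Counter

def check_blocks(n_blocks, free_table, allocation_tables):
    # Group 1 counts appearances in the free block table; group 2 counts blocks assigned to files.
    free_count = Counter(free_table)
    used_count = Counter(allocation_tables)
    for k in range(n_blocks):
        f, u = free_count[k], used_count[k]
        if f == 0 and u == 0:
            print(f"block {k}: missing - add it to the free block table")
        elif f > 1:
            print(f"block {k}: listed free {f} times - delete the duplicate entry")
        elif f == 0 and u > 1:
            print(f"block {k}: assigned to {u} files - serious error, report to the system")
        # f + u == 1 is the consistent case: the block is counted exactly once, in one group only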

Chapter Seven

1. What types of user interfaces does an operating system include? Which situation do they apply to?

Answer: The operating system includes four types of user interfaces: command interface (divided into online and offline command interfaces), program interface, graphical user interface, and network user interface.

The command interface and the graphical user interface let users operate the computer system directly from a terminal; the program interface is provided for users to call when programming; and the network user interface is oriented towards network applications.

2. What are the components of the online command interface?

Answer: The online command interface consists of a set of online commands, a terminal handler, and a command interpreter.

3. What types of online commands usually include? What are the main commands of each type?

Answer: Online commands usually include the following types:

(1) System access class, mainly registration commands login and password;

(2) Disk operations, including disk format format, floppy disk copy diskcopy, floppy disk comparison diskcomp and backup commands;

(3) File operation category, including file display type, file copy copy, file comparison comp, file rename rename, file delete erase and other commands;

(4) Directory operations, including subdirectory creation mkdir, directory display dir, subdirectory deletion rmdir, directory structure display tree, current directory change chdir and other commands;

(5) Other commands, including input/output redirection (>, <), pipe connection (|), filter commands, batch commands (.bat), etc.

4. What are input and output redirection? Give an example.

Answer: By default a command takes its input from the standard input device (the keyboard) and sends its output to the standard output device (the display). If the output redirection symbol ">" followed by a file or device name is given in the command, the command's output is sent to that file or device; if the input redirection symbol "<" is used, the command takes its input not from the keyboard but from the file or device named to the right of the symbol. This is input and output redirection. For example, in MS-DOS, dir > list.txt writes the directory listing into the file list.txt instead of the screen, and sort < data.txt makes sort read its input from data.txt.

5. What is a pipe connection? Give an example.

Answer: A pipe connection takes the output of the first command as the input of the second command, the output of the second command as the input of the third command, and so on; more than two commands can be joined into a pipeline. In MS-DOS and UNIX, "|" is used as the pipe symbol, and the general format is: command1 | command2 | ... | commandn. For example, dir | sort | more lists a directory, sorts the listing, and displays it one screen at a time.

6. What is the main purpose of a terminal device handler? What functions does it have?

Answer: It is mainly used to realize human-computer interaction, and it has the following functions:

(1) Receive characters typed by the user at the terminal; (2) character buffering, to hold the received characters temporarily; (3) echoing; (4) screen editing; (5) special character handling.

7. What is the main function of the command interpreter?

Answer: Its main functions are: display a prompt on the screen asking the user to enter a command; read and recognize the command; find the entry address of the corresponding command-processing program and transfer control to it for execution; and finally send the result of the processing, or an error message, to the screen.

8. Try to explain the workflow of MS-DOS command processing program COMMAND.COM.

Answer: The workflow of COMMAND.COM is as follows:

(1) After the system is powered on or reset, the initialization part completes initialization for the whole system, automatically executes the AUTOEXEC.BAT file, then passes control to the transient part, which displays a prompt and waits for the user to enter a command;

(2) The transient part reads the command from the keyboard buffer and checks whether the file name, extension and drive name are correct; if there is an error it reports it, and if correct it searches for and identifies the command;

(3) If it is an internal command, the transient part obtains the entry address of its processing routine from the corresponding table entry and executes it; if it is an external command, a command line is built and the system call exec is executed to load its processing program, obtain its base address and execute it; if the command entered is illegal, an error is reported;

(4) When the command has completed, control returns to the transient part, a prompt is displayed and the system waits for the next user command, returning to (2).

9. What UNIX command should be used to rename an existing file?

Answer: The command to rename an existing file is mv, with the format: mv oldname newname.

10. What command should be used to move the working directory to a specified point in the directory tree?

Answer: Use the cd command: "cd .." moves the working directory up one level, and "cd subdirectory" moves it down into a subdirectory; repeating these (or giving the full path directly, for example "cd /usr/include") moves the working directory to the specified point in the directory tree.

11. If you want to append the content of file1 to the end of the original file file2, what command should you use?

Answer: $ cat file1 >> file2

12. Try to compare the functions of the mail and write commands. What is the difference?

Answer: The mail command is a tool for non-interactive communication among UNIX users, while the write command is a tool for direct, on-line communication between the user and another user currently logged into the system.

13. Compare general procedure calls and system calls.

Answer: A system call is essentially a special form of a procedure call, which is different from a general procedure call:

(1) The running state is different. In a general procedure call, the calling procedure and the called procedure are both user programs or both system programs, running in the same state (user state or system state); in a system call, the caller is a user program running in the user state while the callee is a system program running in the system state.

(2) They are entered through a soft interrupt mechanism. A general procedure call can transfer directly from the calling procedure to the called procedure; a system call cannot: it first traps into the operating system kernel through a soft interrupt (trap) mechanism, and control passes to the corresponding system call handling routine only after the kernel has analysed the call.

(3) Return and rescheduling. A general procedure call simply returns to the point of call and continues; when a system call completes, the processes requesting to run are rescheduled, and control returns to the calling process only if it still has the highest priority.

(4) Nested calls. Both general procedure calls and system calls allow nesting; what is nested inside a system call, however, is a system procedure rather than a user procedure.

14. What are system calls? What types does it have?

Answer: System calls are a set of subroutines or procedures provided in the operating system kernel that implement various system functions and are made available for user programs to call. The main types include:

(1) Process control class. Used for process creation, termination, waiting, replacement, process data segment size change, process identifier or specified process attribute acquisition, etc.;

(2) File manipulation class. Used for file creation, opening, closing, reading/writing and file reading and writing pointer movement and attribute modification, directory creation and index node establishment, etc.;

(3) Process communication class, which is used to implement communication mechanisms such as message passing, shared storage area and information collection mechanism, etc.;

(4) Information maintenance class, which is used to realize the setting and obtaining of date, time and system-related information.

15. How to set the parameters required by the system call?

Answer: There are two ways to set system call parameters:

(1) Send the parameters directly to the corresponding registers. The problem is that the registers are limited, limiting the number of parameters that can be set.

(2) Parameter table method. Put the parameters required by the system call into the parameter table, and then put the table pointer in the register.

16. Explain the processing steps of the system call.

Answer: (1) Set the system call number and parameters.

(2) General processing of the system call command. For example, the CPU context is protected: the PSW, PC, system call number, user stack pointer and general registers are pushed onto the stack, and user-defined parameters are saved. In UNIX this happens when the CHMK instruction is executed, which also transfers the parameters in the parameter table into the u.u_arg[] array of the user structure; in MS-DOS it happens when the INT 21H soft interrupt is executed.

(3) Go to the corresponding command processing program for specific processing according to the system call entry table and specific system call commands.

17. Why is it necessary to open the file with the open system call before accessing the file?

Answer: So that the system can establish a direct path between the user and the file: after the file has been opened, the system returns a handle or descriptor for it, which is used for subsequent accesses to the file without searching the directory again.

18. Is there a system call specially used to delete files in the UNIX system? Why?

Answer: No. When a user no longer needs a file, the unlink system call is used to break the link, which decrements the link count i.link by 1. When the decrement leaves i.link at 0, the file is no longer needed by any user and is then deleted from the file system.

19. What kinds of communication mechanisms are included in the IPC software package? What system calls are set in each communication mechanism?

Answer: Three communication mechanisms are provided in IPC:

(1) Message mechanism. There are msgget, msgsend, msgrcv system calls.

(2) Shared memory mechanism. There are shmget, shmat and shmdt system calls.

(3) Semaphore mechanism. There are semget and semop system calls.

20. What is the trap.S program? What main functions does it perform?

Answer: The trap.S file in the UNIX System V kernel is the master control program for interrupts and traps. It performs the general handling of interrupts and traps and is written in assembly language. trap.S contains the entry addresses of most of the interrupt and trap vectors; whenever the system is interrupted or trapped, it usually enters the trap.S program directly.

21. In the UNIX system, what data items are contained in the protected CPU environment?

Answer: When a user program runs in the user state, before it executes CHMK (CHange Mode to Kernel) it must provide the parameter table required by the system call in user space and put the address of that table into register R0. After the CHMK instruction is executed, the processor switches to the kernel state, and the hardware automatically pushes the processor status longword (PSL), the PC and the code operand onto the user kernel stack, takes the entry address of trap.S from the interrupt and trap vector table, and transfers to trap.S for execution.

After the trap.S program is executed, the trap type type and the user stack pointer usp are pushed into the user core stack, and part or all of a series of registers in the CPU environment of the interrupted process, such as R0~R11, are pushed onto the stack. Which register contents are pushed on the stack depends on the mask code of the specific register, and each bit of the mask code corresponds to the registers in R0-R11. When a certain bit is set to 1, it means that the contents of the corresponding register are pushed onto the stack.

22. What program is trap.C? What processing does it do?

Answer: trap.C is a C-language program that handles the various trap situations; it deals with some twelve common kinds of traps. Its work includes determining the system call number, passing the parameters, and transferring to the corresponding system call processing subroutine; after the system call subroutine returns to trap.C, it recomputes the priority of the process, handles any signals received, and so on.

23. In order to facilitate transfer to the system call processing program, what kind of data structure is configured in the UNIX system?

Answer: The system call definition table sysent[] is configured in the UNIX system. Each entry of the table contains three elements: the number of parameters required by the corresponding system call, the number of parameters passed via registers, and the entry address of the corresponding system call processing subroutine. With this table, the entry for system call number i can be found in the system call definition table, and control can be transferred through the entry address to the corresponding processing subroutine to carry out the specific function of the call. When it finishes, control returns to the trap.C part of the interrupt and trap master control program, and then to the common processing that resumes execution at the point of interruption.
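
A toy model of the idea behind sysent[]; the numbers, names and layout here are purely illustrative and are not the real UNIX table:

def sys_getpid():
    return 1234                                  # stand-in handler

def sys_read(fd, buf, count):
    return 0                                     # stand-in handler

sysent = {
    # system call number -> (number of parameters, handler entry point)
    20: (0, sys_getpid),
    3: (3, sys_read),
}

def dispatch(callno, *args):
    narg, handler = sysent[callno]               # find the entry from the system call number
    return handler(*args[:narg])                 # transfer to the processing subroutine

print(dispatch(20))                              # prints: 1234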

Chapter Eight Network Operating System

1. According to the network topology, what types of computer networks can be divided into? Try to draw their network topology diagram.

Answer: Computer networks can be divided into star, ring, bus, tree and mesh networks. Their network topology diagram is as follows:

2. Try to explain the composition of the packet switching network.

Answer: It consists of packet switches, network management centers, remote concentrators, subassembly and disassembly equipment, and transmission equipment.

3. What are frame switching and cell switching?

Answer: Frame switching developed from traditional packet switching; its basic unit of transmission is a variable-length frame. It uses the store-and-forward method: each time a frame switch receives a new frame, it places the frame in a frame buffer queue and then forwards it to the next frame switch on the appropriate path according to the destination address carried in the frame.

Cell switching is an improvement on frame relay switching. When the source switch receives a frame from the user equipment, it cuts the frame into multiple fixed-length cells; these cells are the basic unit transmitted and switched across the whole frame relay network, and after they reach the destination switch they are reassembled into frames.

4. Local area networks can be divided into two categories: basic type and fast type. What kinds of LANs are included in each type?

Answer: Basic LANs are: (1) Ethernet (2) Token Ring

Fast local area network includes: (1) FDDI optical fiber ring network (2) Fast Ethernet 100 BASE-T.

5. What kind of network interconnection equipment should be used to realize homogeneous LAN network interconnection? What functions should it have?

Answer: Homogeneous LAN network interconnection equipment and functions:

(1) Bridge. Function: frame sending and receiving, buffer processing, protocol conversion.

(2) Routers. Functions: unpacking and packing, routing, protocol translation, segmentation and reassembly

6. In order to realize heterogeneous network interconnection, what kind of network interconnection equipment should be used? What functions should it have?

Answer: Use a gateway. Realize heterogeneous LAN interconnection, LAN and WAN interconnection, WAN interconnection, LAN and host interconnection.

7. What two types of data transmission services does the network layer provide to the transport layer? Try to briefly describe them.

Answer: (1) Datagram service. The sending side's network layer receives a message from the transport layer, attaches the complete destination address to it, and sends it as an independent information unit. At every intermediate node the packet passes through, an optimal forwarding path is chosen by a routing algorithm according to conditions at that moment. With the datagram service, neither side needs to establish a connection.

(2) Virtual circuit service. Before communication, the source host sends a call packet containing the full network addresses of the source and destination hosts; if the destination host agrees to communicate, the network layer establishes a virtual circuit between the two parties. In subsequent communication only the logical channel number of the virtual circuit needs to be carried, and the virtual circuit is torn down when the communication ends.

8. What are the specific aspects of the bridge role played by the transport layer?

Answer: (1) Transmission error rate and connection establishment failure rate. (2) Data transmission rate, throughput and transmission delay.

(3) Segmentation and grouping functions.

9. Which layers are included in the TCP/IP model? Briefly explain the main functions of each level.

Answer: There are 4 levels in the TCP/IP model.

(1) Application layer. It corresponds to the upper layers of OSI and provides users with the services they need, such as FTP, Telnet and DNS.

(2) Transport layer. It corresponds to the OSI transport layer and provides end-to-end communication for application layer entities. Two main protocols are defined for it: the connection-oriented TCP and the connectionless User Datagram Protocol UDP.

(3) Internet (network interconnection) layer. It corresponds to the OSI network layer and solves the problem of host-to-host communication. Its four main protocols are the Internet Protocol IP, the Address Resolution Protocol ARP, the Internet Group Management Protocol IGMP and the Internet Control Message Protocol ICMP.

(4) Network access layer. It corresponds to the OSI physical layer and data link layer.

10. What is the main role of the IP protocol in the network interconnection layer? Why configure the TCP protocol after the IP protocol?

Answer: (1) The IP protocol is mainly used for interconnection and routing between heterogeneous networks. IP provides an unreliable, connectionless datagram delivery service.

(2) The TCP protocol provides a connection-oriented, reliable end-to-end communication mechanism. Compared with IP, TCP can better ensure the reliability of data transmission; even if the network layer fails, TCP can still correctly control connection establishment, data transmission and connection release.

11. Explain what standards IEEE 802.2, IEEE 802.3, IEEE 802.3u, IEEE 802.3z, IEEE 802.5 and IEEE 802.6 are in the medium access control (MAC) sublayer.

Answer: IEEE 802.2 is the standard for logical link control. IEEE 802.3 is the standard for Ethernet.

IEEE 802.3u is the standard for Fast Ethernet. IEEE 802.3z is the standard for Gigabit Ethernet.

IEEE 802.5 is the standard for Token Ring. IEEE 802.6 is the standard for metropolitan area networks.

12. What is network architecture? What parts does OSI/RM consist of?

Answer: Network architecture refers to the overall design of a communication system; it provides standards for network hardware, software, protocols, access control and topology. OSI/RM is divided into seven layers from low to high: physical layer, data link layer, network layer, transport layer, session layer, presentation layer and application layer.

13. What is a network protocol? Briefly describe its three elements.

Answer: A network protocol is a set of rules, standards or conventions established for data exchange in a computer network.

The computer network protocol is mainly composed of three parts: semantics, syntax and exchange rules, that is, the three elements of the protocol.

Semantics: specifies what the communicating parties "say" to each other and determines the types of protocol elements, for example what control information the parties send, what actions are performed, and what responses are returned.

Syntax: specifies how the communicating parties "speak" to each other and determines the format of protocol elements, such as the format of data and control information.

Exchange Rules: Specifies the order in which information is exchanged.

14. How many layers does ISO divide OSI/RM into? What is the main purpose of each layer?

Answer: OSI/RM is divided into 7 layers. The main purposes of each layer are:

Physical layer: specifies the physical connection standard between network devices, and transparently transmits bit streams between network devices.

Data link layer: Provide reliable data transmission function between adjacent nodes.

Network layer: Routing and communication control in the communication subnet.

Transport layer: Provides reliable communication between two end systems.

Session layer: establish and control the session process between two application entities.

Presentation layer: Provides a unified representation of network data.

Application layer: provides a distributed application environment (for ordinary users) and an application development environment (for network programmers) among network users.

15. What is the main factor for the widespread popularity of the client/server model?

Answer: (1) Modularity and distributed-application characteristics. (2) Full use of resources and improved network efficiency.

(3) Convenient system maintenance and strong scalability. (4) Concurrency characteristics.

16. Describe the interaction between the client and the server.

Answer: The two-tier C/S structure is: the first tier combines presentation and business logic on the client system; the second tier combines the database server through the network. The C/S mode is mainly composed of three parts: the client application, the server management program and the middleware.

17. What are the limitations of the two-tier C/S model? How to solve?

Answer: (1) It cannot adapt to the ever-increasing number of applications.

(2) Specific network software must be installed on both the client computer and the server to realize interoperation between C and S.

(3) The client interacts with the server directly.

Solution: make the client as independent as possible of the server that provides data and other services, by adding an intermediate entity (middleware) between C and S.

18. Why should a three-tier client/server model be adopted in a large-scale information system and Internet environment?

Answer: Because the Internet is developing extremely rapidly, the three-tier client/server model is better suited to this development. The client acts as a Web browser, thus forming the three-tier C/S mode of Web browser, Web server and database server.

19. Try to compare the two-tier and three-tier C/S modes.

Answer: The advantages of the three-tier model compared with the two-tier model: (1) It increases the flexibility and scalability of the system.

(2) It simplifies the client and reduces system cost. (3) It makes client installation, configuration and maintenance more convenient.

Disadvantages of the three-tier mode: (1) Software development is more difficult and the development cycle is longer. (2) Access efficiency is lower.

20. What are the main functions of a modern computer network?

Answer: The main functions of a computer network are data communication and resource sharing, system fault tolerance, network management, and application interoperability.

21. Explain the flow of information during data communication in a hierarchically structured network.

Answer: A request message flows from the client to the application server and then to the data server; the data server then sends information to the application server as requested, and the application server returns it to the client.

22. To realize data communication, what specific functions should a computer network have?

Answer: Connection establishment and teardown, message decomposition and assembly, transmission control, flow control, and error detection and correction.

23. Explain the two main current ways of sharing files and data.

Answer: Data sharing implemented by means of virtual floppy disks and by means of file services.

24. What are the main goals of network management?

Answer: A. Enhance network availability. B. Improve the quality of network operation. C. Improve network resource utilization.

D. Ensure network security. E. Improve network and socio-economic benefits.

25. What specific functions does network management include?

Answer: Configuration management, fault management, performance management, security management, and accounting management.

26. What are information "interconnectivity" and information "interoperability"?

Answer: Information interconnectivity means realizing communication between nodes of different networks; at present this is mainly achieved with TCP/IP.

Information interoperability means realizing mutual use of information between sites in different networks, i.e. a user in one network can access files or data in the file system or database system of another network.

27. What is e-mail? Into what types can it be divided?

Answer: Electronic mail (E-mail), marked by the @ sign and also called electronic mailbox or electronic post, is a communication method that provides information exchange by electronic means. E-mail servers are divided into two types, based on the MIME protocol and on the SMTP protocol. Modern e-mail can contain many different types of files, such as text, images, audio and video.

28. In what aspects is the complexity of file transfer manifested? How is it solved?

Answer: File transfer across heterogeneous networks requires the unified File Transfer Protocol FTP established on the Internet.

(1) Internal-user FTP: only users who have an account on the file server are allowed to use the FTP service.

(2) Anonymous FTP: an important means of resource sharing on the Internet, allowing unregistered users to copy files.

29. Compare email services and file transfer services.

Answer: The e-mail service communicates with network users in all countries and regions of the world by means of e-mail facilities.

The file transfer service establishes the unified File Transfer Protocol FTP on the Internet, providing users with the ability to copy files between different hosts.

30. What are the characteristics of the directory service in the network environment?

Answer: A small LAN does not need to provide a directory service. For a large enterprise network, a directory service must be provided for network administrators and users so that the network can play its due role. The directory service should also be able to manage the network services provided by each physical device; the services provided by a server may be file/print services, database services, and the like.

31. What are the key features of directory services?

Answer: (1) User management. Ensure that authorized users can easily access various network services, and prohibit illegal users from accessing.

(2) Partitioning and replication. The huge directory is divided into several partitions that are replicated to multiple servers, so that the replicas of each partition are placed as close as possible to the users who use those objects most often; some directory services also allow replicas of different partitions to be placed on the same server.

(3) Creation, extension and inheritance functions. Creation means creating a new object in the directory and setting its attributes; extension means expanding the functions of the original directory service; inheritance refers to the ability of directory objects to inherit the attributes and rights of other objects.

(4) Multi-platform support. Because the objects managed by directory services differ, cross-platform capability is required.

32. What are the characteristics of the Internet?

Answer: (1) wide area (2) extensive (3) high speed (4) comprehensive

33. What is WWW? How is it different from general information retrieval tools?

Answer: WWW (World Wide Web), also called the Web, is currently the most popular type of information service.

It differs from general information retrieval tools in that a general retrieval tool can only find the required files on one host at a time, and the file data type is single; a Web search can find the required data on multiple hosts at one time, allows different data types, and can combine these data into one file.

34. What is BBS? Why it will be welcomed by the majority of Internet users?

Answer: BBS (Bulletin Board System) is an electronic bulletin board. BBS users have expanded to all walks of life, and various files can be exchanged through BBS. Through a BBS system you can obtain the latest international software and information at any time, discuss with others various interesting topics such as computer software, hardware, the Internet, multimedia, programming and medicine, and publish notices on the BBS such as looking for friends, second-hand sales and company products. As long as you have a computer and Internet access, you can immediately enter the BBS world and enjoy its incomparable power. Therefore BBS is welcomed by the majority of network users.

35. What is domain name service? How many segments does the Internet domain name consist of?

Answer: A domain name is the name of a server or a network system on the Internet. A domain name is composed of several English letters and digits, separated into several parts by ".", e.g. cctv.com is a domain name.

A complete domain name consists of two or more parts separated by ".". The part to the right of the last "." is called the top-level domain (TLD) or first-level domain; the part to the left of the last "." is called the second-level domain (SLD); the part to the left of the second-level domain is the third-level domain, and so on. Each level of domain controls the allocation of the domain names below it.

36. What is domain name resolution? How is the most basic domain name resolution method implemented?

Answer: Domain name resolution is the process of converting a domain name into the corresponding IP address. A domain name corresponds to only one IP address, while multiple domain names can be resolved to the same IP address at the same time. Domain name resolution is completed by dedicated domain name servers (DNS).

The process of domain name resolution: when an application process needs to map a host domain name to an IP address, it calls the domain name resolution function, puts the domain name to be converted into a DNS request, and sends it to the local domain name server as a UDP message. After the local server finds the domain name, it returns the corresponding IP address in a response message. If the local domain name server cannot answer the request, it sends a request to a root domain name server to resolve the name, which in turn locates the second-level name servers below it, and so on, until the requested domain name is resolved to an IP address and returned.
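As a small illustration (not part of the textbook answer), the sketch below resolves a host name to an IP address with the standard getaddrinfo() resolver interface, which sends the DNS query to the configured name server on the program's behalf; the host name cctv.com is simply the example domain used above, and IPv4-only resolution is assumed for brevity.

    /* Minimal sketch: map a domain name to an IPv4 address via the resolver. */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct addrinfo hints, *res, *p;
        char ip[INET_ADDRSTRLEN];

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;          /* IPv4 only, for simplicity */
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo("cctv.com", NULL, &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            struct sockaddr_in *a = (struct sockaddr_in *)p->ai_addr;
            inet_ntop(AF_INET, &a->sin_addr, ip, sizeof(ip));
            printf("resolved address: %s\n", ip);   /* one line per returned address */
        }
        freeaddrinfo(res);
        return 0;
    }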

37. In order to support the services provided by the Internet, what software should be configured in the operating system?

Answer: A Web browser should be configured, such as IE, Firefox, Chrome, etc. The software corresponding to special services can be installed on demand.

38. What is browser/server mode? What are the basic functions of browsers and servers?

Answer: The browser/server model is the B/S (Browser/Server) structure. Only the server needs to be installed and maintained, while the client uses browser software. It is a new way of building software systems that uses mature WWW technology combined with various scripting languages (VBScript, JavaScript, ...) and ActiveX technology.

In a B/S system, the browser sends requests to the many servers distributed on the network; the server processes the requests and returns the information the user needs to the browser, while data requests, processing, result return, dynamic web page generation, database access and application execution are all done by the Web server. As Windows embeds browser technology in the operating system, this structure has become the preferred architecture of today's application software.

The main features of the B/S structure are wide distribution, convenient maintenance, simple development, strong sharing and low overall cost; however, data security is poor, the requirements on the server are high, data transmission is slow, software personalization is significantly reduced, and it is difficult to meet the special functional requirements that can be achieved in the traditional mode.

A browser is interactive software that can display the content of HTML files on a Web server or in a file system and lets users interact with these files. A server is a highly available computer on the network that provides various services to client computers.

Chapter 9 System Security


1. What are the aspects of the complexity of system security?

Answer: (1) Multifaceted: There are multiple risk points in large-scale systems, and each point includes three aspects of physical, logical, and management security.

(2) Dynamics: With the continuous development of information technology and the emergence of attack methods, system security issues are dynamic.

(3) Hierarchy: System security involves many aspects and is quite complicated, which needs to be solved by system engineering methods.

(4) Appropriateness: appropriate security goals are set according to actual needs and then achieved.

2. What types of threats to system security are there?

Answer: Impersonation (false identity), data interception, denial of service, modification of information, falsification of information, repudiation of operations, interruption of transmission, and traffic analysis.

3. How can attackers compromise software and data?

Answer: Data interception, modification of information, falsification of information, interruption of transmission

4. What are the levels of computer system security classified by the Trustworthy Computer System Evaluation Criteria?

Answer: This standard divides computer system security into 8 levels: D1 (lowest security level), C1 (discretionary security protection level), C2 (controlled access protection level), B1, B2, B3, A1 and A2.

5. What are symmetric encryption algorithms and asymmetric encryption algorithms?

Answer: Symmetric encryption, also called private-key encryption, refers to encryption algorithms that use the same key for encryption and decryption. The encryption key can be deduced from the decryption key, and the decryption key can also be deduced from the encryption key. In most symmetric algorithms the encryption key and the decryption key are identical, so it is also called the secret-key or single-key algorithm.

An asymmetric encryption algorithm requires two keys: a public key (publickey) and a private key (privatekey). The public key and the private key form a pair; if data is encrypted with the public key, it can only be decrypted with the corresponding private key.

6. What are the transposition and substitution algorithms? Give an example to illustrate the substitution algorithm.

Answer: The transposition method rearranges the order of the bits or characters of the plaintext according to certain rules to form the ciphertext, while the characters themselves remain unchanged. The substitution method replaces one character with another character according to certain rules to form the ciphertext.

For example: replacing each letter of "How are you?" with the following letter of the alphabet gives "Ipx bsf zpv?".
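As a small illustration (not part of the textbook answer), the sketch below implements the substitution rule used in the example: every letter is replaced by the next letter of the alphabet (a shift of 1), while spaces and punctuation are left unchanged.

    /* Minimal sketch of the shift-by-one substitution cipher from the example. */
    #include <stdio.h>

    static void substitute(const char *plain, char *cipher)
    {
        for (; *plain; plain++, cipher++) {
            if (*plain >= 'a' && *plain <= 'z')
                *cipher = 'a' + (*plain - 'a' + 1) % 26;   /* wrap z -> a */
            else if (*plain >= 'A' && *plain <= 'Z')
                *cipher = 'A' + (*plain - 'A' + 1) % 26;   /* wrap Z -> A */
            else
                *cipher = *plain;                          /* leave other characters unchanged */
        }
        *cipher = '\0';
    }

    int main(void)
    {
        char out[32];
        substitute("How are you?", out);
        printf("%s\n", out);        /* prints: Ipx bsf zpv? */
        return 0;
    }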

7. Try to explain the process of DES encryption.

A: There are four stages:

In the first stage, the plaintext is divided into 64-bit plaintext segments and the initial permutation is performed to obtain X0; its left 32 bits are denoted L0 and its right 32 bits are denoted R0.

In the second stage, 16 iterations are performed on X0, each iteration using a 56-bit encryption key Ki.

In the third stage, the left 32 bits and the right 32 bits of the result of the 16 iterations are exchanged.

In the fourth stage, the inverse of the initial permutation is performed.

8. Try to explain the main characteristics of asymmetric encryption.

Answer: An asymmetric encryption algorithm is complex, its security depends on the algorithm and the keys, and encryption and decryption are slow. A symmetric cryptosystem has only one key, which must be kept secret, so its security is entirely the security of that key; asymmetric encryption has a public key and a private key, so it is more secure.

9. Explain how to encrypt and decrypt confidential data signatures.

Answer: (1) The sender A can encrypt the plaintext P with his own private key Kda to obtain the ciphertext DKda(P).

(2) A encrypts DKda(P) with B's public key Keb, obtains EKeb(DKda(P)) and sends it to B.

(3) After receiving it, B first decrypts it with his private key Kdb, obtaining DKdb(EKeb(DKda(P))) = DKda(P).

(4) B then decrypts DKda(P) with A's public key Kea, obtaining EKea(DKda(P)) = P.

10. What is the role of digital certificates? An example is used to illustrate the process of applying, issuing and using digital certificates.

Answer: Digital certificates, also known as public key certificates, are used to prove the identity of the communication requester.

The application, issuance and use process of digital certificates are as follows:

(1) User A first applies to the CA for a digital certificate, providing proof of identity and the public key A it wants to use.

(2) After the CA receives A's application, if it accepts the application, it issues A a digital certificate, which contains public key A, the CA's signature and other information; all of this information is encrypted with the CA's private key (i.e. the CA digitally signs it).

(3) When user A sends a message to B, A encrypts the message with his private key (digital signature) and sends it to B together with the certificate.

(4) In order to decrypt the received digital certificate, user B must apply to the CA for the CA's public key B. After receiving user B's application, the CA may decide to send public key B to user B.

(5) User B uses CA public key B to decrypt the digital certificate, confirms that the certificate is genuine, obtains public key A from the certificate, and confirms that public key A is user A's key.

(6) User B uses public key A to decrypt the encrypted message sent by user A and obtains the real plaintext of the message sent by user A.

11. What is link encryption? What are its main features?

Answer: Link encryption encrypts the data transmitted on the communication lines between adjacent nodes in the network. Its features are:

(1) The messages transmitted over the physical channel between adjacent nodes are ciphertext, while the messages inside all intermediate nodes are plaintext.

(2) Use different encryption keys for different links.

12. What is end-end encryption? What are its main features?

Answer: End-to-end encryption encrypts the transmitted data in the high layers (from the transport layer up to the application layer) at the source host or at the front-end processor FEP.

Features: (1) The body of the message during the entire network transmission process is cipher text, and the information is translated into plain text after reaching the target host.

(2) The control information in the header cannot be encrypted, otherwise the intermediate node cannot know the target address and control information.

13. What methods can be used to determine the authenticity of the user's identity?

Answer: (1) passwords; (2) physical tokens; (3) biometric signs; (4) public keys.

14. In the identity authentication technology based on the password mechanism, what requirements should generally be met?

Answer: Moderate password length, automatic disconnection, hidden (non-echoed) display, and logging and reporting.

15. What types of authentication technologies based on physical signs can be subdivided?

Answer: Mainly authentication technologies based on magnetic cards and those based on IC cards.

16. What types of smart cards can be divided into? Can any of these be used in user possession-based authentication techniques?

Answer: Smart cards are divided into types such as memory cards, microprocessor cards, and password cards.

Memory cards have no security facilities and cannot be used for authentication based on what the user possesses; microprocessor cards and cryptographic (password) cards adopt encryption measures and can be used for authentication based on what the user possesses.

17. What conditions should the selected physiological signs satisfy? List several commonly used physiological signs.

Answer: The selected physiological signs should satisfy three basic conditions: sufficient variability, good stability, and being difficult to forge.

Commonly used physiological signs are fingerprints, retinal patterns, voice, finger length, and so on.

18. What are the requirements for a biometric identification system? What parts does a biometric system usually consist of?

Answer: The requirements for a biometric identification system are: performance that meets requirements (resistance to deception, forgery and attack), acceptability to users, and appropriate system cost.

A biometric system usually consists of two parts, registration and recognition. The registration part is equipped with a user registration table; the recognition part performs identity authentication and biometric recognition of users.

19. Explain in detail the security services provided by SSL.

Answer: SSL, the Secure Sockets Layer protocol, is used to provide information confidentiality and identity authentication services on the Internet; SSL has already become the industry standard for identity authentication using public keys.

The security services provided by SSL are: applying for digital certificates (the server applies for a digital certificate, the client applies for a digital certificate) and the SSL handshake protocol (identity authentication, negotiating the encryption algorithm, and negotiating the encryption key).

20. What is a protection domain? What dynamic relationship exists between processes and protection domains?

Answer: A protection domain is the set of access rights a process has to a group of objects; it specifies which objects the process can access and which operations it can perform on them.

The dynamic relationship between processes and protection domains means that the set of resources usable by a process changes during its lifetime; at different stages of its execution a process can switch from one protection domain to another as needed.

21. Give an example of an access control matrix with domain-switching rights.

Answer: Several objects are added to the access matrix, serving as the domains of the matrix. A process is allowed to switch from domain i to domain j if and only if "switch" is contained in access(i, j). For example, if the entry for domains D1 and D2 contains S, a process in domain D1 is allowed to switch to domain D2; if there is also an S in the entry for D2 and D3, a process running in domain D2 can switch to domain D3, but the process is not allowed to return from domain D3 to domain D1.

22. How can the copy right be used to propagate an access right?

Answer: If domain i holds the copy right for some access right access(i, j) on object j, then a process running in domain i can extend its access right access(i, j) to other domains in the same column of the access matrix, i.e. grant processes running in other domains the same access right access(i, j) to the same object.

23. How to use ownership to add or delete certain access rights?

Answer: If domain i has ownership of object j, then a process running in domain i can add or delete any access right in any entry of column j; that is, the process can add or remove any access right to object j for processes running in any other domain.

24. What is the main purpose of the control right? Give an example to illustrate the application of control rights.

Answer: The control right is used to change the access rights of processes running in a domain with respect to different objects. If access(i, j), where j corresponds to domain Dj, contains the control right C, then a process running in domain Di can remove any access right to any object from processes running in domain Dj.

25. What is an access control list? What is an access capability table?

Answer: The access control list is formed by dividing the access matrix by columns; for each column an access control list ACL, made up of ordered pairs (domain, right set), is established. It is a means of ensuring system security.

The access capability table is formed by dividing the access matrix by rows; each row constitutes an access capability table.

26. How does the system use the access control list and the access capability table to protect files?

Answer: When a process tries to access an object for the first time, the access control list must first be checked to see whether the process has the right to access that object. If it does not, access is denied and an exception event is raised; otherwise access is allowed, and an access capability is established for it so that subsequent accesses can be verified as legitimate quickly. When the process no longer needs to access the object, the access capability is revoked.

27. What is a virus? What harm does it do?

Answer: A virus is a set of computer instructions or program code, inserted into computer programs, that destroys computer functions or data, affects the use of the computer system, and is capable of replicating itself.

Hazards of computer viruses: Occupy system space, occupy processor time, destroy files, and make the machine run abnormally.

28. What are the characteristics of a computer virus? How is it different from normal programs?

Answer: The characteristics of computer viruses are parasitic, contagious, concealed and destructive.

The difference from ordinary programs is that a virus program is usually not an independent program; it has the ability to replicate itself and spread rapidly, is highly infectious, tries to hide itself, and its basic purpose of existence is destruction.

29. What is a file virus? Explain how a file virus infects a file.

Answer: A file virus attaches itself to a normal program in a parasitic way; when the virus breaks out the original program can still run normally, so the user does not notice it and the virus can lie dormant for a long time.

File viruses infect files by active attack and by infection during execution.

30. What kinds of concealment methods have virus designers adopted to allow viruses to evade detection?

Answer: (1) Hidden in the directory and registry space. (2) Hidden in the unused internal fragment within a program's pages.

(3) Change the data structure used for disk allocation. (4) Change the list of bad sectors.

31. What methods can users take to prevent viruses?

Answer: (1) Regularly back up important software and data in external storage (2) Use a highly secure operating system

(3) Use genuine software (4) Use high-performance anti-virus software

(5) Do not easily open emails from unknown sources (6) Regularly check external storage and remove viruses

32. Try to explain the virus detection method based on the virus database.

Answer: (1) Build a virus database (2) Scan executable files on the hard disk

Chapter Ten

1. What are the characteristics of UNIX system?

Answer: Openness, multi-user and multi-tasking environment, powerful and efficient functions, rich network functions, and support for multi-processors.

2. Try to explain the kernel structure of the UNIX system.

Answer: The UNIX kernel structure is divided into four layers: the lowest layer is the hardware; the second layer is the OS kernel; the third layer is the interface between the OS and the user, such as the shell and compilers; the highest layer is the application programs.

3. What parts does the PCB in the UNIX system contain? Use diagrams to illustrate the relationship between the parts.

Answer: The PCB in the UNIX system includes the process table entry, the U area, the system region table, and the per-process region table.

4. What parts does the process image contain? What is the role of the upper and lower dynamic parts at the system level?

Answer: The process image includes the user-level context, the register context, and the system-level context.

The role of the dynamic part of the system-level context is: when the process enters kernel mode because of an interrupt or a system call, the kernel pushes a register context onto the kernel stack; when the system call exits, the kernel pops that register context. During a context switch, the kernel pushes the context of the old process and pops the context of the new process.

5. What are the main system calls used for process control in UNIX systems? What are their respective main functions?

Answer: The main system calls used for process control are:

(1) fork system call: creates a new process.

(2) exit system call: implements process self-termination.

(3) exec system call: replaces the process's original code with a new program.

(4) wait system call: suspends the calling process to wait for a child process to terminate.
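As a small illustration (not from the textbook), the sketch below uses the four calls together: fork() creates a child process, exec (here execlp running the ls program, an arbitrary choice) replaces the child's code, the child terminates itself with exit(), and the parent suspends itself with wait() until the child has terminated.

    /* Minimal sketch of fork / exec / wait / exit working together. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                          /* create a new process        */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                              /* child branch                */
            execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's code    */
            perror("execlp");                        /* reached only if exec failed */
            exit(127);                               /* child terminates itself     */
        }
        int status;
        wait(&status);                               /* parent waits for the child  */
        printf("child finished, exit status %d\n", WEXITSTATUS(status));
        return 0;
    }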

6. What needs to be done to create a new process?

Answer: Allocate a process table entry and a process identifier for the new process; check the number of processes running simultaneously; copy the data of the parent's process table entry; let the child process inherit all the files of the parent process; create a process context for the child process; set the child process ready for execution.

7. Why use process self-termination? How to implement exit?

Answer: In order to reclaim the resources occupied by a process in a timely manner, a process should be cancelled as soon as its task is completed. The UNIX kernel uses exit to implement process self-termination. When a parent process creates a child process, it should arrange an exit call at the end of the child process so that the child can terminate itself.

The specific operations of exit are: closing soft interrupts, reclaiming resources, writing accounting information, and setting the process to the dead state.

8. What scheduling algorithm is used in the UNIX system? How to determine the priority number of a process?

Answer: The UNIX system uses a dynamic-priority round-robin process scheduling algorithm. The priority number is determined by the formula:

priority number = (recently used CPU time / 2) + base user priority number
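As an illustrative calculation (not from the textbook): if a process has recently used 40 clock ticks of CPU time and its base user priority number is 60, its priority number becomes 40/2 + 60 = 80. Since a larger priority number means a lower scheduling priority in UNIX, a process that has just consumed a lot of CPU time is placed behind processes that have consumed little, which is what makes the round-robin scheduling dynamic.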

9. After entering the sleep process, what should the kernel do?

Answer: When putting a process to sleep, the kernel first saves the processor run level at the time sleep is entered, then raises the processor run level to mask all interrupts, sets the process to the sleep state, records the sleep address in the process table entry, and puts the process into the sleep queue. If the process's sleep is uninterruptible, the process sleeps peacefully after the process context switch. When the process is woken up and scheduled for execution, the kernel restores the processor run level to the value it had when the process went to sleep, and the processor may be interrupted again at that point.

10. Explain the similarities and differences between the two mechanisms of signals and interrupts.

Answer: Differences: interrupts have priorities while signals do not, all signals being equal; signal handlers run in user mode while interrupt handlers run in kernel mode; interrupts are responded to promptly, whereas the response to a signal is usually delayed.

Similarities: both use asynchronous communication; when a signal or interrupt request is detected, the program currently being executed is suspended and control is transferred to the corresponding handler; after handling, execution returns to the original break point; both signals and interrupts can be masked.

11. Briefly explain the signal sending and signal processing functions in the signal mechanism.

Answer: Signal sending means that the sending process sets a certain bit of the signal field in the proc structure of the target process.

Signal handling: first use the system call signal(sig, func) to preset the handling method for the signal. When func = 1, this type of signal is masked; when func = 0, the process terminates itself after receiving the signal; when func is neither 0 nor 1, the value of func is used as a pointer to the signal handler. The system switches from kernel mode to user mode, executes the corresponding handler, and then returns to the break point of the user program.
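As a small illustration (not from the textbook), the sketch below presets signal handling as described above; note that in the standard C library the special values are the constants SIG_IGN and SIG_DFL rather than the literal 1 and 0 mentioned in the answer, and any other value of func is taken as a pointer to the handler function.

    /* Minimal sketch of presetting signal handling with signal(sig, func). */
    #include <signal.h>
    #include <unistd.h>

    static void on_sigint(int sig)
    {
        (void)sig;
        write(1, "caught SIGINT\n", 14);   /* handler runs in user mode */
    }

    int main(void)
    {
        signal(SIGINT, on_sigint);   /* func is a handler pointer: run it, then continue    */
        signal(SIGQUIT, SIG_IGN);    /* corresponds to the "mask/ignore" case in the answer */
        /* SIG_DFL corresponds to the "terminate by default" case */
        pause();                     /* wait until a signal arrives (e.g. press Ctrl-C)     */
        return 0;
    }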

12. What is a pipeline? What is the main difference between unnamed pipes and named pipes?

Answer: A pipe is a shared file, also called a pipe file, that connects a writing process and a reading process and allows them to communicate in producer-consumer fashion.

An unnamed pipe is a temporary file without a path name, created with the system call pipe(); only the process that calls pipe() and its descendants can recognize the file descriptor and use this file (pipe) for communication. A named pipe is created with the mknod system call; it has a path name and can exist in the file system for a long time, so other processes can learn of its existence and access it through that path name.
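As a small illustration (not from the textbook), the sketch below creates an unnamed pipe with pipe(); the parent acts as the producer writing into the pipe, and the child created by fork(), which inherits the pipe's file descriptors, acts as the consumer reading from it.

    /* Minimal sketch of producer-consumer communication over an unnamed pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        char buf[64];

        if (pipe(fd) < 0) {                    /* fd[0] = read end, fd[1] = write end */
            perror("pipe");
            return 1;
        }
        if (fork() == 0) {                     /* child: consumer                     */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child read: %s\n", buf);
            close(fd[0]);
            return 0;
        }
        close(fd[0]);                          /* parent: producer                    */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                            /* wait for the consumer to finish     */
        return 0;
    }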

13. What rules should be followed when reading and writing pipes?

Answer: (1) Limitation on the size of the pipe file

(2) Process mutual exclusion

(3) When writing to the pipe, a process must observe the producer operation rules on the pipe space.

(4) When reading from the pipe, a process must observe the consumer operation rules on the pipe space.

14. What system calls are there in the message mechanism? Describe their use.

Answer: The system calls in the message mechanism are msgctl( ), msgsnd( ), and msgrcv( ).

The msgctl( ) system call performs control operations on the specified message queue.

The msgsnd( ) system call sends a message to the specified message queue.

The msgrcv( ) system call reads a message from the specified message queue.
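As a small illustration (not from the textbook), the sketch below exercises the three calls listed above, plus msgget() to obtain the queue in the first place; the key value 0x1234 is an arbitrary illustrative choice.

    /* Minimal sketch of the System V message mechanism: send, receive, remove. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct demo_msg {
        long mtype;                  /* message type, must be > 0 */
        char mtext[64];
    };

    int main(void)
    {
        int qid = msgget((key_t)0x1234, IPC_CREAT | 0666);   /* create/obtain the queue */
        if (qid < 0) { perror("msgget"); return 1; }

        struct demo_msg m = { .mtype = 1 };
        strcpy(m.mtext, "hello via message queue");
        msgsnd(qid, &m, strlen(m.mtext) + 1, 0);             /* send one message        */

        struct demo_msg r;
        msgrcv(qid, &r, sizeof(r.mtext), 1, 0);              /* read a type-1 message   */
        printf("received: %s\n", r.mtext);

        msgctl(qid, IPC_RMID, NULL);                         /* remove the queue        */
        return 0;
    }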

15. What system calls are there in the shared storage mechanism? Briefly describe their purpose

Answer: The system calls in the shared storage mechanism include shmget(), shmctl(), and shmat().

shmget( ) is used to create a shared memory region; its parameters include the name (key) of the region and the length (size) of the shared region.

The shmctl( ) system call is used to query the status information of the shared memory region.

The shmat( ) system call attaches the shared memory region to the virtual address shmaddr given by the user process, and specifies whether the access attribute of the region is read-only or read-write.
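As a small illustration (not from the textbook), the sketch below creates a shared region with shmget(), attaches it with shmat(), queries it with shmctl(IPC_STAT), then detaches and removes it; the key 0x4321 and the 4096-byte size are arbitrary illustrative values.

    /* Minimal sketch of the System V shared memory calls. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        int shmid = shmget((key_t)0x4321, 4096, IPC_CREAT | 0666);
        if (shmid < 0) { perror("shmget"); return 1; }

        char *addr = shmat(shmid, NULL, 0);        /* attach read-write at a kernel-chosen address */
        if (addr == (char *)-1) { perror("shmat"); return 1; }
        strcpy(addr, "data placed in the shared region");

        struct shmid_ds ds;
        shmctl(shmid, IPC_STAT, &ds);              /* query status information of the region       */
        printf("segment size %zu, contents: %s\n", (size_t)ds.shm_segsz, addr);

        shmdt(addr);                               /* detach the region                             */
        shmctl(shmid, IPC_RMID, NULL);             /* remove the region                             */
        return 0;
    }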

16. What work does the core need to complete when executing the shmget system call?

Answer: (1) Check the shared memory region table; if an entry with the given key is found, the region has already been established, and the shared region descriptor shmid is returned.

(2) If no entry with the specified key is found, and the flag contains IPC_CREAT and the size parameter is within the system limits, allocate a free system region as the page table area of the shared region, allocate the corresponding memory blocks, and fill these block numbers into the page table.

(3) The kernel allocates an empty entry for the newly created shared region in the shared memory region table and the system region table, fills in the key and size of the region, the starting address of the shared region page table, the pointer to the system region table entry, and so on, and finally returns the shared region descriptor shmid.

17. What system calls are there in the semaphore mechanism? Describe their uses.

Answer: The system calls in the semaphore mechanism are semget( ) and semop( ). semget( ) is used by a user to create a semaphore set; semop( ) is used to operate on a semaphore set.

18. How does the kernel operate on semaphores?

Answer: The kernel changes the value of a semaphore according to sem_op, in three cases:

If sem_op is positive, its value is added to the semaphore value, which is equivalent to a V operation. If sem_op is negative, it is equivalent to a P operation: if the semaphore value is greater than the absolute value of the operation value, the kernel adds this negative integer to the semaphore value; otherwise the kernel restores the semaphores already operated on to their values at the start of the system call, and then, if (sem_flg & IPC_NOWAIT) is true, it returns immediately, otherwise it puts the process to sleep to wait.
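As a small illustration (not from the textbook answer, which only lists semget and semop), the sketch below creates a one-element semaphore set, initialises it to 1 with semctl(), and then performs a P operation (sem_op = -1) and a V operation (sem_op = +1) around a critical section; the key 0x5678 is an arbitrary illustrative value.

    /* Minimal sketch of semget()/semop(): a semaphore used as a mutex. */
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    union semun { int val; struct semid_ds *buf; unsigned short *array; };

    int main(void)
    {
        int semid = semget((key_t)0x5678, 1, IPC_CREAT | 0666);
        if (semid < 0) { perror("semget"); return 1; }

        union semun arg = { .val = 1 };
        semctl(semid, 0, SETVAL, arg);                 /* initialise the semaphore to 1 (free) */

        struct sembuf p = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };  /* P operation */
        struct sembuf v = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };  /* V operation */

        semop(semid, &p, 1);                           /* enter the critical section */
        printf("inside the critical section\n");
        semop(semid, &v, 1);                           /* leave the critical section */

        semctl(semid, 0, IPC_RMID);                    /* remove the semaphore set   */
        return 0;
    }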

19. What data structures are configured in the UNIX system to implement demand paging?

Answer: UNIX System V divides each region of a process into a number of virtual pages, which can be allocated to non-adjacent page frames; for this purpose a page table is set up, each entry of which records the correspondence between a virtual page and a page frame.

20. When the missing page being accessed is in an executable file or on the swap device, how should it be brought into memory?

Answer: (1) The missing page is in the executable file. If the type of the disk block descriptor entry corresponding to the virtual page to be accessed is "file", it means that the page has not yet been run and its copy is in the executable file, so the kernel should bring the page into memory. The loading process is: according to the i-node pointer in the corresponding system region table entry, find the i-node of the file; use the logical block number of the page as an offset to look up the disk block number table in the i-node; find the disk block number and bring the page into memory.

(2) The missing page is on the swap device. The kernel first allocates a memory page for the missing page, modifies the page table entry so that it points to this memory page, puts the page frame data table entry into the corresponding hash queue, and brings the page from the swap device into memory. When the I/O operation completes, the kernel wakes up the process that requested the page.

21. When a page is swapped out, what cases can be distinguished? How should they be handled?

Answer: There are three cases:

(1) If there is a copy of the swapped-out page on the swap device and its contents have not been modified, the kernel only clears the valid bit in the page table entry of the page, decrements the reference count by 1, and puts the page frame data table entry into the free list.

(2) If there is no copy of the page on the swap device, the page must be written to the swap device. All pages to be swapped out are first linked into a to-be-swapped-out page list; when the number of pages on the list reaches a specified value, these pages are written to the swap area.

(3) There is a copy of the swapped-out page on the swap device, but the contents of the page have been modified. The kernel should release the space originally occupied by the page on the swap device and then copy the page to the swap device again, so that the copy is up to date.

22. How are character buffers allocated and reclaimed?

Answer: When a character device is performing I/O, the kernel uses the getcf procedure to obtain a free buffer from the free character buffer queue. If the queue is empty, there is no buffer to allocate and the procedure returns; otherwise a free buffer is taken from the head of the queue and the buffer pointer bp is returned to the caller. Mutual-exclusion measures are taken: the processor priority is raised to 6 at the beginning of the procedure and restored after a free buffer has been obtained.

When the buffer is no longer needed, the putcf procedure is called to free it. The input parameter is the pointer bp to the buffer that is no longer needed; the buffer is returned to the head of the free buffer queue pointed to by the head pointer cfreelist. If a process is blocked at this time because it applied for a free buffer, it is woken up. Access to the free buffer queue must likewise be mutually exclusive.

23. Explain the composition of a block buffer and of the block buffer pool.

Answer: Each block buffer in the UNIX system consists of two parts: one is the data buffer, used to store data; the other is the buffer control block (buffer header), used to store the management information of the corresponding buffer.

The disk block buffer pool is structured as: (1) a free list (2) hash queues.

24. What is the main difference between the getblk() and getblk(dev, blkno) procedures?

Answer: getblk() is used to obtain any free buffer from the free buffer queue. getblk(dev, blkno) is used to apply for a buffer for the disk block numbered blkno on the specified device dev. Only when data is to be written and the contents of that particular disk block are not already in a buffer is the getblk procedure called to allocate an empty buffer.

25. Explain the main functions of the gdopen, gdstart, gdstrategy, and gdintr procedures.

Answer: gdopen is used to open the disk drive; its input parameter is the device number and it has no output parameter.

gdstart is used to assemble the various registers of the disk controller and then start the disk controller.

gdstrategy puts the specified buffer header at the end of the disk controller's I/O queue and starts the disk controller.

gdintr is the disk interrupt handling procedure invoked when a disk I/O transfer completes and an interrupt request is issued.

26.What read and write processes are set up in a UNIX system? What is the main difference between the two?

Answer: The read procedures include the ordinary read procedure bread and the read-ahead procedure breada.

The write procedures include the ordinary write procedure bwrite, the asynchronous write procedure bawrite, and the delayed write procedure bdwrite.

27.Try to explain the characteristics of the UNIX file system?

Answer: A. The file system is organized as a hierarchical tree structure. B. The physical structure of files is the mixed (hybrid) indexed file structure.

C. Free disk blocks are managed with the grouped linking method. D. The index node (i-node) file retrieval technique is introduced.

28. What form does the physical structure of a file take in the UNIX system? Give an example.

Answer: The physical structure of UNIX files adopts a hybrid index file structure.

When searching for a file, as long as the index node of the file is found, the disk block of the file is obtained by direct or indirect addressing.

29.How to convert the logical block number of a file into a physical disk block number in the UNIX system?

Answer: The conversion method depends on the addressing mode used.

(1) Direct addressing is used only when the logical block number of the file is not greater than 10. For example, suppose the access object is the data at byte offset 9999. Then 9999/1024 = 9 remainder 783, so the logical block number of the file is 9; the direct index address item i-addr(9) gives the physical block number, and the byte at offset 783 within that block is byte 9999 of the file.

(2) Single indirect addressing is used only when the logical block number of the file is greater than 10 but not greater than 10 + 256. For example, suppose the access object is the data at byte offset 18000. Then 18000/1024 = 17 remainder 592, and the logical block number 17 satisfies 10 < 17 < 10 + 256, which requires single indirect indexing. First obtain the block number of the single indirect address block from i-addr(10); then subtract 10 from the logical block number to get the index of the address item within that indirect block, from which the final physical disk block number is obtained. Here the logical block number is 17; suppose the block number obtained from i-addr(10) is 428; then 17 - 10 = 7 is the index within that block, and the disk block number stored in entry 7 is the physical disk block number sought; the byte at offset 592 within that block is byte 18000 of the file.

(3) Multiple indirect addressing is used only when the logical block number of the file is greater than 266 but not greater than 64266. Suppose the access object is the data at byte offset 420000. Then 420000/1024 = 410 remainder 160, and the logical block number 410 satisfies 266 < 410 < 64266, so double indirect indexing is used. First obtain the block number of the single indirect address block from i-addr(11); then subtract 266 from the logical block number to get the index of the entry holding the double indirect address block number; from that entry obtain the secondary indirect address block, and from it find the corresponding physical block number; the byte at offset 160 within that block is byte 420000 of the file.
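As a small illustration (not from the textbook), the sketch below reproduces the arithmetic of the three cases above, assuming 1 KB blocks, 10 direct address items i-addr(0)..i-addr(9), and 256 block numbers per indirect block; it only reports which address item and entry would be consulted and does not read a real i-node.

    /* Minimal sketch of mapping a byte offset to the addressing level used. */
    #include <stdio.h>

    #define BLOCK_SIZE   1024
    #define DIRECT_MAX   10      /* logical blocks 0..9 use direct addressing */
    #define PER_INDIRECT 256     /* block numbers held by one indirect block  */

    static void locate(long offset)
    {
        long lblock = offset / BLOCK_SIZE;    /* logical block number     */
        long inblk  = offset % BLOCK_SIZE;    /* offset within the block  */

        if (lblock < DIRECT_MAX)
            printf("offset %ld: direct, i-addr(%ld), in-block offset %ld\n",
                   offset, lblock, inblk);
        else if (lblock < DIRECT_MAX + PER_INDIRECT)
            printf("offset %ld: single indirect via i-addr(10), entry %ld, in-block offset %ld\n",
                   offset, lblock - DIRECT_MAX, inblk);
        else {
            long idx = lblock - DIRECT_MAX - PER_INDIRECT;
            printf("offset %ld: double indirect via i-addr(11), entries %ld/%ld, in-block offset %ld\n",
                   offset, idx / PER_INDIRECT, idx % PER_INDIRECT, inblk);
        }
    }

    int main(void)
    {
        locate(9999);      /* direct: block 9, in-block offset 783          */
        locate(18000);     /* single indirect: entry 7, in-block offset 592 */
        locate(420000);    /* double indirect, in-block offset 160          */
        return 0;
    }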

30.How to allocate and reclaim disk index nodes?

Answer: The allocation procedure ialloc is: first check whether the super block is locked; check whether the i-node number stack is empty; allocate an i-node from the free i-node number stack and initialise it; fill in the relevant file attributes; allocate a memory i-node; decrement the total number of free disk i-nodes by 1; set the super block modified flag and return.

The reclamation procedure ifree is: first check whether the super block is locked; check whether the i-node number stack is full; if the stack is not full, push the number of the reclaimed i-node onto the stack and increment the current number of free i-nodes by 1; set the super block modified flag and return.

31.When do I need to construct catalog entries? What does the core need to do?

Answer: When a user (process) wants to create a new file, the kernel should construct a directory entry for it in its parent directory file; when a process wants to share a file of another user, the kernel also creates a directory entry for the user who shares the file. The construction of directory entries is completed by the creat system call.

32.When should a directory entry be deleted? What work does the core have to do?

Answer: For a file owned exclusively by a user, when the user no longer needs it, it should be deleted to free storage space. The work the kernel must do is to break the link with unlink; when the link count nlink reaches 0, the system automatically deletes the file.
