Frequently Asked Computer Organization Principles Interview Questions — Postgraduate Entrance Examination and Re-examination

Preface: Hello everyone, my name is Dream. This year I was admitted to the Artificial Intelligence major of Shandong University (experience post). Here I have organized the knowledge points I prepared for the professional-course interview and share them with everyone; I hope they help! This is a summary of the key knowledge. If you want to see all the content, I have packaged it here for you to pick up: a full set of materials for the postgraduate re-examination + 408 professional-course knowledge summary and mind maps. A quick look at the contents:
1. A complete set of postgraduate re-examination materials, including summer-camp preparation materials (contact-teacher templates, self-introduction for interviews, submission materials, and recommendation-letter templates) and recommendation-exemption preparation materials;
2. 408 professional course review materials, mind maps;
3. Advanced mathematics, linear algebra, probability, and discrete mathematics review materials and mind maps;
4. Summary of frequently asked questions about machine learning algorithms;
5. Summary of algorithms and training tips for computer test scores;
6. Must-read knowledge before the interview, summary of 408 frequently asked questions and common interview questions.

Table of contents

Chapter 1, Computer System Overview

Knowledge framework:

(figure omitted)

1. The concept of von Neumann machine and stored program?

Von Neumann proposed the concept of the "stored program" while working on the EDVAC machine. This idea laid down the basic structure of modern computers, and computers built on this concept are commonly known as von Neumann machines. Their characteristics are as follows:

  1. The computer hardware system consists of five major components: arithmetic unit, memory, controller, input device and output device.
  2. Instructions and data are stored in memory on an equal footing and can be accessed by address.
  3. Both instructions and data are represented in binary code.
  4. Instructions are composed of operation codes and address codes. The operation codes are used to indicate the nature of the operation, and the address codes are used to indicate the location of the operands in the memory.
  5. Instructions are stored sequentially in memory. Usually, instructions are executed sequentially, and under certain conditions, the order of execution can be changed based on the operation results or based on set conditions.
  6. The early von Neumann machine was centered on the arithmetic unit, and the input/output device transmitted data through the arithmetic unit and memory. Modern computers are memory-centric.
    The concept of "stored program" means that instructions are loaded into the computer's main memory in advance in coded form; execution then starts from the program's first instruction at its starting address in memory, and the remaining instructions are executed in the order the program specifies until execution ends. Model machine of the von Neumann architecture:
    (figure omitted)

2. How does a computer work?

The working process of the computer is divided into the following three steps:

  1. Load programs and data into main memory.
  2. Convert source program into executable file.
  3. Execute instructions one by one starting from the first address of the executable file.

3. In computer system architecture, what is compilation? What is interpretation?

There are two ways to translate a program: compilation and interpretation.
Before a program written in a compiled language can be executed, a special compilation step must translate the program into a machine-language file, such as an .exe file; if the source program remains unchanged, it does not need to be re-translated before later runs. Interpretation is different: an interpreted-language program is not compiled in advance but is translated while it runs, each statement being executed as soon as it is translated, and no target program is generated. As a result, an interpreted language must be translated every time it executes, which is relatively inefficient.
.java file -> compilation -> .class file: the source is compiled into .class bytecode, which the JVM then interprets and executes. Java is special: a Java program is compiled, but not directly into machine language (binary); instead it is compiled into bytecode (.class) and then executed in an interpreted manner. The .class file produced by compilation is intermediate code, not an executable (.exe) binary, so an intermediary is needed to interpret the intermediate code at run time: the so-called Java Virtual Machine (JVM).
The C compilation process is divided into four steps:
1. From the .c file to the .i file. This step is called preprocessing: the header files named by #include are copied directly into hello.c, macros defined with #define are expanded, useless comments in the code are deleted, and so on.
2. From the .i file to the .s file. This step is called compilation.
3. From the .s file to the .o file. This step is called assembly.
4. From the .o file to an executable file. This step is called linking, which combines the translated binary with the libraries it needs.
(figure omitted)

4. Describe the instruction execution process?

The address of the program's first instruction is placed in the PC. The first instruction is fetched according to the PC; after the decode and execute steps, the computer's functional components are controlled to cooperate in completing this instruction's function, and the address of the next instruction is computed. The newly obtained instruction address is then used to fetch and execute the second instruction, and so on until the program ends. The following takes a load instruction (i.e., fetch the operand from the storage unit indicated by the instruction's address code and send it to the arithmetic unit's ACC) as an example; the information flow is as follows:

1) Fetch instruction: (PC) → MAR → M → MDR → IR

Fetch the instruction into the IR according to the PC: the content of the PC is sent to the MAR, and the content of the MAR goes directly onto the address lines; at the same time the controller puts a read signal on the read/write signal line. Main memory reads the instruction out of the storage unit specified by the address lines and places it on the data lines; the MDR receives the instruction from the data lines and passes it to the IR.

2) Decode instruction: OP(IR) → CU

The instruction is decoded and control signals are issued. The controller generates the corresponding control signals from the opcode of the instruction in the IR and sends them to the different execution components. In this example the instruction in the IR is a load instruction, so a read control signal is sent to the bus's control line.

3) Execute instruction: Ad(IR) → MAR → M → MDR → ACC

Fetch the operand: the address code of the instruction in the IR is sent to the MAR, and the content of the MAR goes onto the address lines; at the same time the controller puts a read signal on the read/write signal line, the operand is read from the specified main-memory storage unit onto the data lines into the MDR, and then transferred to the ACC. In addition, every time an instruction is fetched, the address of the next instruction must be formed in preparation for the next fetch, i.e., (PC) + 1 → PC.

5. What are the main performance indicators of the computer?

1. Machine word length

The machine word length is the number of binary bits the computer can process in one integer (i.e., fixed-point integer) operation. It is usually determined by the width of the CPU's registers and adder, so the machine word length generally equals the size of the internal registers. The longer the word length, the larger the representable number range and the higher the calculation precision. Computer word lengths are usually chosen as integer multiples of a byte (8 bits).

2. Data path bandwidth

Data path bandwidth refers to the number of bits of information that the data bus can transmit in parallel at one time. The data path width mentioned here refers to the width of the external data bus, which may be different from the internal data bus width of the CPU (the size of the internal register). The data transmission path formed by connecting each subsystem through the data bus is called a data path.

3. Main memory capacity

Main memory capacity refers to the maximum amount of information that main memory can store, usually measured in bytes. Storage capacity can also be expressed as the number of words × the word length (e.g., 512K × 16 bits). The number of bits in the MAR determines the maximum addressable range, i.e., the number of storage units (not necessarily the actual memory capacity), and the number of bits in the MDR reflects the word length.

4. Operation speed

(1) Throughput and response time.
• Throughput: the number of requests the system processes per unit time. It depends on how fast information can be entered into memory, how fast the CPU can fetch instructions, how fast data can be stored to or retrieved from memory, and how fast results can be sent from memory to an external device. Almost every step involves main memory, so system throughput depends mainly on the main-memory access cycle.
• Response time: the waiting time from when a user issues a request to the computer until the system responds and the required result is obtained. It typically includes CPU time (time spent running the program) and waiting time (time spent on disk accesses, memory accesses, I/O operations, operating-system overhead, etc.).
(2) Main frequency and CPU clock cycle.
• CPU clock cycle: usually called the beat pulse or T cycle; it is the reciprocal of the main frequency and is the smallest unit of time in the CPU. Every action takes at least one clock cycle.
• Main frequency: the frequency of the machine’s internal clock.
(3) CPI (Clock cycles Per Instruction): the number of clock cycles required to execute one instruction.

Chapter 2, Data Representation and Operation

Knowledge framework:

(figure omitted)

6. IEEE 754 standard floating-point numbers

(figure omitted)

7. Floating-point types and type conversion in C

The float and double types in C correspond to IEEE 754 single-precision and double-precision floating-point numbers respectively. The long double type corresponds to extended double precision, but the length and format of long double vary with the compiler and processor type. Implicit (forced) type conversions occur in assignments and comparisons in C programs; the most common are char -> int -> long -> double and float -> double. Range and precision increase from left to right, and nothing is lost in these conversions.

  1. Converting from int to float cannot overflow, but int keeps 32 bits of precision while float keeps only 24, so the data may be rounded; this does not happen when converting from int to double.
  2. When converting from int or float to double, the exact value is preserved, because double has more significand bits.
  3. When converting from double to float, overflow may occur because float has a smaller range; and because float has fewer significand bits, rounding may also occur.
  4. When converting from float or double to int, the fractional part is truncated toward 0 (only the integer part is kept), which loses precision; and because int has a smaller representation range, overflow may occur.

8. In computers, why do we use binary to represent data?

In terms of feasibility: with binary there are only two states, 0 and 1, and many electronic devices can represent two states, such as a switch being on or off, a transistor conducting or cut off, the positive or negative remanence of a magnetic element, or a high or low voltage level. All of these can represent the two digits 0 and 1, so binary makes electronic implementation feasible.
In terms of simplicity of operation: binary arithmetic has few rules and simple operations, which greatly simplifies the hardware structure of the computer's arithmetic unit (the decimal multiplication table has 55 entries, while binary multiplication has only 4 rules).
In terms of logic: since binary 0 and 1 correspond to false and true in logical algebra, binary has a ready theoretical basis in Boolean algebra, and it is very convenient for representing logical propositions.

9. Numerical range of each encoding method

(table omitted)

Chapter 3, Storage System

Knowledge framework:

(figure omitted)

10. Multi-level storage system?

To resolve the three mutually constraining goals of a storage system, large capacity, high speed, and low cost, computer systems usually adopt a multi-level memory structure. From top to bottom in the figure below, the price per bit falls, the speed falls, the capacity grows, and the frequency with which the CPU accesses that level falls.
(figure omitted)
In fact, the storage-system hierarchy is mainly reflected in the "Cache-main memory" level and the "main memory-auxiliary storage" level. The former mainly solves the speed mismatch between the CPU and main memory, while the latter mainly solves the capacity problem of the storage system. In the storage system, the Cache and main memory can exchange information directly with the CPU; auxiliary storage must exchange information with the CPU through main memory; and main memory can exchange information with the CPU, the Cache, and auxiliary storage.
The main idea of the memory hierarchy is that each upper level of memory acts as a cache for the level below it. From the CPU's point of view, the "Cache-main memory" level has a speed close to that of the Cache but a capacity and per-bit price close to those of main memory. The "main memory-auxiliary storage" level has a speed close to that of main memory and a capacity and per-bit price close to those of auxiliary storage. This resolves the contradiction among speed, capacity, and cost.
As the "main memory-auxiliary storage" level developed, the virtual storage system gradually took shape. In this system, the address range the programmer uses corresponds to the address space of the virtual memory. For computer systems with virtual memory, the address space available for programming is much larger than main memory.

11. Semiconductor random access memory?

Main memory is implemented with DRAM, and the level near the processor (the Cache) is implemented with SRAM. Both are volatile memories: once power is cut off, the stored information is lost. DRAM costs less per bit than SRAM but is slower; the price difference is mainly because SRAM requires more transistors (more silicon area) per bit. ROM is non-volatile memory.

1. Working principle of SRAM

The physical device that stores one binary bit is called a storage element; it is the most basic building block of memory. Multiple storage elements sharing the same address constitute a storage unit, and a collection of storage units constitutes a memory bank. The storage element of static random access memory (SRAM) uses a bistable flip-flop (six-transistor MOS) to store information, so even after the information is read out it keeps its original state and needs no regeneration (non-destructive readout). SRAM is fast but has low density and high power consumption, so it is generally used to build Cache memory.

2. Working principle of DRAM

Unlike SRAM, dynamic random access memory (DRAM) stores information as charge on the gate capacitor in the storage-element circuit. The basic DRAM storage element usually uses only one transistor, so its density is much higher than SRAM's. DRAM uses address-multiplexing technology: the address lines are halved, and the address signal is sent in two passes, row then column. Compared with SRAM, DRAM is easier to integrate, cheaper per bit, larger in capacity, and lower in power consumption; however, its access speed is slower, and it is generally used to build large-capacity main memory. The charge on a DRAM capacitor lasts only about 1-2 ms, so the information fades away by itself even if the power is never cut off. For this reason it must be refreshed at fixed intervals, typically every 2 ms; this interval is called the refresh cycle. Three refresh methods are common: centralized refresh, distributed refresh, and asynchronous refresh.

3. Characteristics of read-only memory (ROM)

ROM and RAM both support random access, and SRAM and DRAM are both volatile semiconductor memories. Once information is written into ROM it cannot easily be changed and is not lost when power is cut off; in a computer system it serves as read-only memory. ROM devices have two significant advantages:
1) The structure is simple, so the bit density is higher than that of read-write memory.
2) It is non-volatile, so it has high reliability.

12. What technologies can improve the CPU memory access speed?

To improve the speed of CPU memory access, technologies such as dual-port memory and multi-module memory can be used. Both are parallel techniques: the former exploits spatial parallelism, the latter temporal parallelism.

1. Dual-port RAM

Dual-port RAM means the same memory has two independent ports, left and right, each with its own independent set of address lines, data lines, and read/write control lines. This allows two independent controllers to access the storage cells simultaneously and asynchronously, as shown in the figure. When the two ports access different addresses, their read and write operations do not conflict.

(figure omitted)

2. Multi-module memory

To improve memory access speed, multi-module memories are often used. Common forms are single-bank multi-word memory and multi-bank low-order interleaved memory.
Note: the CPU is faster than memory. If n instructions can be fetched from memory at once, CPU resources can be used more fully and running speed improved; multi-bank interleaved memory was proposed based on this idea.
(1) Single-bank multi-word memory
The characteristic of a single-bank multi-word system is that the memory contains only one bank, each storage unit stores m words, and the bus width is also m words. To read m words in parallel in one access, the addresses must be consecutive and within the same storage unit. A single-bank multi-word system fetches m instructions from the same address in one access cycle and then sends them one by one to the CPU, so the CPU obtains one instruction from main memory every 1/m of an access cycle. This clearly increases the memory bandwidth and improves the working speed of the single bank.
Disadvantages: instructions and data must be stored contiguously in main memory; once a branch instruction is encountered, or operands are not stored contiguously, the benefit of this method becomes much less apparent.
(2) Multi-bank parallel memory
Multi-bank parallel memory is composed of multiple banks, each with the same capacity and access speed, and each with its own independent read/write control circuit, address register, and data register. The banks can work in parallel or in an interleaved fashion. Multi-bank parallel memory comes in two forms: high-order interleaved addressing (sequential mode) and low-order interleaved addressing (interleaved mode).

13. Cache

Cache memory: a high-speed buffer memory in a computer. It is a small but fast memory located between the CPU and main memory (DRAM, dynamic random access memory), usually built from SRAM (static random access memory). The function of the Cache is to raise the rate at which the CPU can read and write data. The Cache is small but fast; main memory is slow but large. With a good scheduling algorithm, system performance improves greatly, as if the storage system had the capacity of main memory with an access speed close to that of the Cache.
Cache usually uses associative memory.
The basis for using Cache to improve system performance is the principle of program locality
Replacement algorithm:
When a Cache miss occurs, the required data should be read into the CPU and the Cache at the same time. When the Cache is already full, new data must replace (evict) some old data in the Cache. The most commonly used replacement algorithms are the random algorithm, first-in first-out (FIFO), and least recently used (LRU).
Write operation:
Because the data in the Cache must be kept consistent with the contents of main memory, Cache write operations are relatively complex; the commonly used strategies are the write-through method and the write-back method.
Mapping methods with main memory:
Direct mapping: a main-memory data block can be loaded into only one fixed location in the Cache.
Fully associative mapping: a main-memory data block can be loaded into any location in the Cache.
Set-associative mapping: the Cache is divided into several sets; a data block maps to a fixed set but can be loaded into any location within that set.

14. Virtual memory

Basic concepts of virtual memory
Virtual memory refers to a memory system that has demand-load and replacement functions and can logically expand the main-memory capacity.
Page virtual memory
Page management: divides both the virtual address space and the physical space into fixed-size pages; each virtual page can be loaded into a different physical page location in main memory. In paged storage, the processor's logical address consists of two parts, the virtual page number and the in-page offset; the physical address is likewise divided into a page frame number and an in-page offset. The address-mapping mechanism converts the virtual page number into the main-memory page frame number.
Segment virtual memory
Segment management: a storage-management method that allocates main memory by segments; it is a modular storage-management scheme in which each module of a user program can form one segment, and a program module can access only the main-memory space assigned to its segment. Segment length can be set arbitrarily and can grow or shrink.
Segment-page virtual memory
Segment-page management: a combination of the two methods above. The storage space is divided into segments by logical module, and each segment is divided into several pages. Access proceeds through a segment table and several page tables. The segment length must be an integer multiple of the page length, and the start of a segment must be the start of some page.

Chapter 4, Instruction System

Knowledge framework:

(figure omitted)

15. Basic concepts of instruction pipeline

Basic principles of pipelines:
Pipelining is a technique that significantly improves the speed and efficiency of instruction execution. The method: once an instruction fetch completes, the next instruction can be fetched without waiting for the current instruction to finish executing. If the interpretation of an instruction is subdivided further, say into five sub-processes (fetch, decode, execute, memory access, write back) handled by five sub-components respectively, then as soon as the first instruction finishes its first sub-process and enters the second, the second instruction's first sub-process begins in the first sub-component. As time goes on, this overlapped operation eventually keeps all five sub-components working on the sub-processes of five instructions simultaneously.
Typical five-stage pipeline data path:
(figure omitted)
Features of the pipeline method:
Compared with traditional serial execution, the pipeline method has the following characteristics:

  1. Decompose a task (an instruction or an operation) into several related subtasks, each subtask is executed by a specialized functional component, and rely on multiple functional components to work in parallel to shorten the execution time of the program.
  2. There must be a buffer register, or latch, behind each functional segment component of the pipeline. Its function is to save the execution results of this pipeline segment for use by the next pipeline segment.
  3. The time taken by each functional stage of the pipeline should be as equal as possible; otherwise blocking and pipeline stalls will occur.
  4. The efficiency of the pipeline can only be achieved when the same kind of tasks are continuously provided, so the tasks processed in the pipeline must be continuous tasks. In processors that work in a pipeline manner, it is necessary to try to provide continuous tasks for the pipeline in many aspects such as software and hardware design.
  5. The pipeline has a load time and a drain time. Load time refers to the time from when the first task enters the pipeline until it leaves the pipeline. Drain time refers to the time from when the last task enters the pipeline until it leaves the pipeline.

Factors affecting pipeline performance
1) Structural hazards occur when multiple instructions compete for the same resource at the same time, causing a conflict.
Solutions: (1) stall for one clock cycle; (2) provide separate data memory and instruction memory.
2) Data hazards occur when instructions overlap in the pipeline and a later instruction needs the execution result of an earlier instruction.
Solutions: (1) stall for one clock cycle; (2) data forwarding (bypassing): feed the ALU result of the earlier instruction directly into the later instruction.
3) Control hazards occur when the pipeline encounters branch instructions or other instructions that change the PC value.
Solutions: (1) delayed branch; (2) branch prediction.

16. Comparison between CISC and RISC (complex instruction set and reduced instruction set)

(table omitted)

Chapter 5, Central Processing Unit

Knowledge framework:

(figure omitted)

18. What are the functions of the CPU?

The central processing unit (CPU) consists of the arithmetic unit and the controller. The controller's function is to coordinate and control the computer's components to execute the program's instruction sequence, including fetching, analyzing, and executing instructions; the arithmetic unit's function is to process data.
The specific functions of the CPU include:

  1. Instruction control. Completes the operations of fetching, analyzing, and executing instructions, i.e., sequencing control of the program.
  2. Operation control. An instruction's function is often realized by a combination of several operation signals. The CPU manages and generates the operation signals for each instruction fetched from memory and sends them to the corresponding components, controlling those components to act as the instruction requires.
  3. Time control. Controls the timing of various operations, providing the proper control signals for each instruction in the correct time sequence.
  4. Data processing. Performs arithmetic and logical operations on data.
  5. Interrupt handling. Handles abnormal situations and special requests that occur during computer operation.

19. More pipelining means a higher degree of parallelism. Does having more pipeline stages make instruction execution faster?

No, for the following reasons:

  1. The additional overhead between pipeline stage buffers increases. Each pipeline stage incurs some extra overhead for transferring data between buffers and for various preparation and dispatch functions; this overhead lengthens the total execution time of a single instruction. When instructions are logically dependent on each other, the overhead is even greater.
  2. The control logic between pipeline stages becomes larger and more complex. The control logic for pipeline optimization and conflict handling grows sharply with the number of stages, to the point where the inter-stage control logic may become more complex than the control logic of the stages themselves.

20. Several concepts related to instructions and data

  1. When two consecutive instructions read the same register, a read-after-read relation arises; it does not affect the pipeline.
  2. When an instruction reads a register written by the previous instruction, a read-after-write (RAW) dependence arises; this is called a data dependence, or true dependence, and it affects the pipeline. Only RAW dependences can occur in an in-order pipeline.
  3. When an earlier instruction reads or writes the destination register of a later instruction, write-after-read (WAR) and write-after-write (WAW) dependences arise. In an out-of-order pipeline, RAW, WAR, and WAW dependences can all occur. The instruction dependences with the greatest impact on the pipeline are data dependences.

Chapter 6, Bus

Knowledge framework:

(figure omitted)

21. What are the benefits of introducing a bus structure?

Introducing the bus structure has the following main advantages:
1) It simplifies the system structure and facilitates system design and manufacturing.
2) The number of connections is greatly reduced, which facilitates wiring, reduces the size, and improves the reliability of the system.
3) It facilitates interface design. All devices connected to the bus use similar interfaces.
4) It facilitates system expansion, updating, and flexible configuration, and makes it easy to modularize the system.
5) It facilitates the software design of the device. The software of all interfaces operates on different interface addresses.
6) It facilitates fault diagnosis and maintenance, and can also reduce costs.

22. Bus related concepts

1. What are the categories of system buses according to the different types of information they transmit? Is it one-way or two-way?

1) They are divided into the data bus, the address bus, and the control bus.
2) Data bus: transfers data between functional components; bidirectional.
3) Address bus: indicates the main-memory unit address of the source or destination data; unidirectional, issued by the CPU.
4) Control bus: carries the various control signals. A single wire in the control bus is unidirectional, i.e., a signal flows only from one component to another; but the set of control lines includes both inputs and outputs, so the control bus as a whole can be regarded as bidirectional.

2. What are bus width, bus bandwidth, bus multiplexing, and number of signal lines?

1) Bus width: the number of lines in the data bus, usually a multiple of 8; it is an important indicator of computer system performance.
2) Bus bandwidth: the bus data transfer rate, i.e., the maximum number of bytes that can be transferred over the bus per second.
3) Bus multiplexing: one set of signal lines transmits two kinds of signals in a time-shared manner, e.g., time-division multiplexing of the data bus and the address bus.
4) Number of signal lines: the total number of lines in the address, data, and control buses.
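Bus bandwidth follows directly from bus width and operating frequency. A small worked example (the 66 MHz / 32-bit figures are assumed for illustration, not taken from the text), using bandwidth = frequency x (width in bits / 8) for a bus that transfers once per cycle:

```python
def bus_bandwidth_bytes_per_sec(freq_hz, width_bits):
    # One transfer per clock cycle; width_bits / 8 bytes per transfer.
    return freq_hz * (width_bits // 8)

# A 32-bit bus clocked at 66 MHz:
bw = bus_bandwidth_bytes_per_sec(66_000_000, 32)
print(bw / 1_000_000, "MB/s")  # 264.0 MB/s
```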

Chapter 7, Input and Output System

Knowledge framework:

Insert image description here

23. What conditions must be satisfied for the CPU to respond to an interrupt?

1) The interrupt-enable flip-flop inside the CPU must be set (interrupts open).
2) The peripheral must have an interrupt request pending: its interrupt-request flip-flop must be in the "1" state, and the request signal must be held.
3) The peripheral (interface) interrupt-enable flip-flop must be "1" so that the peripheral's request can reach the CPU.
When all three conditions are satisfied, the CPU responds to the interrupt in the last machine cycle at the end of the current instruction.

24. What do interrupt response priority and interrupt processing priority mean, respectively?

The interrupt response priority is determined by the hardware queuing circuit, or by the query order of the interrupt-polling program, and cannot be changed dynamically. The interrupt processing priority can be changed via the interrupt mask word; it reflects whether the interrupt currently being processed has lower processing priority than a newly arriving interrupt (a mask bit of "0" means open to that new interrupt). If so, the interrupt being processed is suspended and control transfers to the new interrupt's handler; after it finishes, execution returns to the suspended interrupt and continues.
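The role of the mask word can be sketched concretely. In this illustrative 4-level example (levels and mask values are assumed, not from the original text), the hardware response priority is fixed at 0 > 1 > 2 > 3, but the mask words impose a processing priority of 1 > 0 > 3 > 2: bit j of a level's mask word is 1 if requests at level j are blocked while that level's handler runs.

```python
# MASKS[i] is the interrupt mask word installed while servicing level i.
# Processing priority 1 > 0 > 3 > 2: a level leaves open (bit = 0) exactly
# the levels whose processing priority is higher than its own.
MASKS = {
    0: 0b1101,  # servicing 0: only level 1 may preempt
    1: 0b1111,  # servicing 1: nothing may preempt (highest priority)
    2: 0b0100,  # servicing 2: levels 0, 1, 3 may all preempt (lowest)
    3: 0b1100,  # servicing 3: levels 0 and 1 may preempt
}

def can_preempt(servicing_level, new_level):
    """True if a new request at new_level interrupts the running handler."""
    return (MASKS[servicing_level] >> new_level) & 1 == 0

print(can_preempt(0, 1))  # True: level 1 has higher processing priority
print(can_preempt(0, 3))  # False: level 3 is masked while servicing 0
```

Changing the table entries changes the processing priority at run time, while the hardware response order stays fixed.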

25. What is the relationship between the three concepts of vector interrupt, interrupt vector, and vector address?

1) Interrupt vector: each interrupt source has a corresponding handler, called its interrupt service routine; the routine's entry address is called the interrupt vector. The entry addresses of all the interrupt service routines form a table called the interrupt vector table; some machines instead build a table of jump instructions to the service-routine entries, called the interrupt vector jump table.
2) Vector address: the memory address (or index) of each entry in the interrupt vector table or interrupt vector jump table; it is also called the interrupt type number.
3) Vectored interrupt: a technique for identifying the interrupt source. The purpose of identifying the source is to find the address that holds the entry address of the corresponding interrupt service routine, i.e., to obtain the vector address.
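The relationship among the three concepts can be modeled in a few lines. A sketch with hypothetical handlers (service routines are modeled as callables, and the list index plays the role of the vector address):

```python
def timer_isr():
    return "handled timer"

def keyboard_isr():
    return "handled keyboard"

# Interrupt vector table: the index is the vector address (interrupt type
# number); each entry is the interrupt vector (the routine's entry point).
VECTOR_TABLE = [timer_isr, keyboard_isr]

def dispatch(vector_address):
    # Vectored interrupt: identifying the source yields the vector address,
    # the table lookup yields the vector, and control transfers to it.
    return VECTOR_TABLE[vector_address]()

print(dispatch(1))  # handled keyboard
```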

26. What is the difference between program interruption and subroutine call?

The fundamental difference between the two lies in when the service occurs and whom it serves.

  1. The time at which a subroutine call occurs is known and fixed: the main program calls the subroutine when its call instruction (CALL) is executed, and the location of that instruction is known and fixed. The time at which an interrupt occurs is generally random: the interrupt happens when the CPU receives a request from an interrupt source while executing some main program, and since requests are usually generated by hardware circuits, their timing is random. In other words, subroutine calls are arranged in advance by the programmer, while execution of an interrupt service routine is determined at random by the system's operating environment.
  2. A subroutine serves the main program entirely; the two are in a master-slave relationship. The main program calls the subroutine when it needs it, and execution continues with the result brought back. An interrupt service routine is generally unrelated to the main program; there is no question of one serving the other, and the two are in a parallel relationship.
  3. Calling a subroutine from the main program is purely a software process and requires no special hardware circuits; interrupt handling is a combined software/hardware process and requires dedicated hardware circuits to complete.
  4. Subroutines can be nested to several levels; the maximum nesting depth is limited by the stack space allocated in the computer's memory. The depth of interrupt nesting is mainly determined by the interrupt priority levels, and the number of priority levels is generally not large.


Origin blog.csdn.net/weixin_51390582/article/details/132588460