Summary of classic exam questions for computer postgraduate entrance examination

Operating system

Operating system features?

– Sharing: system resources can be used by multiple concurrently executing processes
– Concurrency: multiple processes make progress within the same time interval; this requires hardware support such as interrupts
– Virtualization: a physical entity is mapped onto several logical counterparts (e.g. virtual memory, virtual devices)
– Asynchrony: processes advance in a stop-and-go fashion and each may run at a different speed, but the OS must guarantee that repeated executions of a process produce the same result

The three components of the process?

Program segment, data segment, PCB (Process Control Block)

What is the difference between concurrency and parallelism?

Concurrency: multiple activities within the same time interval. Parallelism: multiple activities at the same instant.

Process switching process?

Save the processor context -> update the current process's PCB -> move the PCB into the appropriate queue (ready or blocked) -> select another process and update its PCB -> update the memory-management data structures -> restore the processor context

Process communication?

1. Low-level communication: PV operations (the semaphore mechanism), as in the Java sketch below.
– P: the wait(S) primitive, which requests resource S
– V: the signal(S) primitive, which releases resource S
2. High-level communication:
– Shared memory (cooperating processes exchange data through a shared region of memory)
– Message passing (processes exchange data as formatted messages through an intermediate entity; it comes in direct and indirect forms and is implemented at the bottom by the send and receive primitives)
– Pipe communication (a special pipe file sits between the two processes, all of their input and output goes through the pipe, and communication is half-duplex)
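As a minimal sketch of the P/V idea, here is a small Java producer/consumer built on java.util.concurrent.Semaphore, where acquire() plays the role of P (wait) and release() the role of V (signal). The buffer size and the number of items are invented for the example.

```java
import java.util.concurrent.Semaphore;

// Minimal producer/consumer sketch built on P/V (acquire/release).
public class PVDemo {
    // "empty" counts free slots, "full" counts filled slots, "mutex" guards the buffer.
    static final Semaphore empty = new Semaphore(3);
    static final Semaphore full  = new Semaphore(0);
    static final Semaphore mutex = new Semaphore(1);
    static final int[] buffer = new int[3];
    static int in = 0, out = 0;

    public static void main(String[] args) {
        new Thread(() -> {                  // producer
            for (int i = 1; i <= 5; i++) {
                try {
                    empty.acquire();        // P(empty): wait for a free slot
                    mutex.acquire();        // P(mutex): enter the critical section
                    buffer[in] = i;
                    in = (in + 1) % buffer.length;
                    mutex.release();        // V(mutex)
                    full.release();         // V(full): one more item available
                } catch (InterruptedException e) { return; }
            }
        }).start();

        new Thread(() -> {                  // consumer
            for (int i = 1; i <= 5; i++) {
                try {
                    full.acquire();         // P(full): wait for an item
                    mutex.acquire();
                    System.out.println("consumed " + buffer[out]);
                    out = (out + 1) % buffer.length;
                    mutex.release();
                    empty.release();        // V(empty): slot freed
                } catch (InterruptedException e) { return; }
            }
        }).start();
    }
}
```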



What is a monitor?

A monitor is a module consisting of a set of data and the operations defined on that data. Only one process can be inside the monitor at a time, i.e. the monitor is mutually exclusive. When a process leaves the monitor, it must wake up a process on the waiting queue that has applied for the monitor. A process can only access the data inside by entering the monitor and using the operations the monitor provides.
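In Java, every object can act as a monitor: synchronized methods provide mutual exclusion and wait()/notify() manage the waiting queue. A minimal illustrative sketch (the class name and limit are made up):

```java
// A bounded counter written as a monitor: synchronized gives mutual exclusion,
// wait()/notifyAll() manage the queue of threads waiting to enter.
public class CounterMonitor {
    private int value = 0;
    private final int max = 10;        // illustrative limit

    public synchronized void increment() throws InterruptedException {
        while (value == max) wait();   // wait until there is room
        value++;
        notifyAll();                   // wake threads waiting in decrement()
    }

    public synchronized void decrement() throws InterruptedException {
        while (value == 0) wait();     // wait until there is something to take
        value--;
        notifyAll();                   // wake threads waiting in increment()
    }
}
```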

Necessary condition for deadlock?

– Mutual exclusion condition: a resource can be occupied by only one process at a time
– No-preemption condition: resources held by a process cannot be forcibly taken away by other processes before the process releases them voluntarily
– Hold-and-wait condition: a deadlocked process must be holding some resources while requesting others
– Circular wait condition: there exists a circular chain of waiting processes, each requesting a resource held by the next and none releasing what it holds
Deadlock prevention breaks one of these conditions (for example, the circular wait); deadlock avoidance uses the banker's algorithm to keep the system in a safe state (a sketch of the safety check follows below).
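A minimal Java sketch of the banker's safety check; the Available, Allocation, and Need matrices are illustrative example data, not taken from the source.

```java
import java.util.Arrays;

// Banker's algorithm safety check: the system is safe if every process can
// finish in some order using only the currently available resources.
public class BankerSafety {
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);
        boolean[] finished = new boolean[n];
        for (int count = 0; count < n; ) {
            boolean progressed = false;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && canRun(need[i], work)) {
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j]; // it finishes and releases
                    finished[i] = true;
                    count++;
                    progressed = true;
                }
            }
            if (!progressed) return false;   // no process can proceed -> unsafe
        }
        return true;
    }

    static boolean canRun(int[] need, int[] work) {
        for (int j = 0; j < work.length; j++)
            if (need[j] > work[j]) return false;
        return true;
    }

    public static void main(String[] args) {
        // Illustrative numbers only.
        int[] available = {3, 3, 2};
        int[][] allocation = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        int[][] need       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
        System.out.println("safe = " + isSafe(available, allocation, need));
    }
}
```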

Difference between deadlock and starvation?

– Both are resource-allocation problems
– A deadlocked process waits for resources that will never be released, whereas the resources a starving process requests will be released, just never allocated to it
– Once deadlock occurs there must be several deadlocked processes, while there may be only a single starving process
– A starving process may be in the ready state, whereas a deadlocked process must be blocked

Difference between process and thread?

A thread is often called a lightweight process, and a process contains one or more threads. A process has its own independent address space, and one process cannot directly use another process's resources, while threads within the same process share the process's address space. Compared with a process switch, a thread switch has less system overhead. A process is the smallest unit of resource allocation; a thread is the smallest unit of execution and scheduling.

What does FCB contain?

File pointer: the position of the most recent read or write.
File open count: how many processes currently have the file open.
File disk location.
File access rights: create, read-only, read-write, etc.

Page replacement algorithm?

Optimal replacement algorithm OPT

The page chosen for eviction is one that will never be used again, or that will not be accessed for the longest time in the future, which guarantees the lowest possible page-fault rate. However, since it is impossible to predict which of a process's many pages in memory will go unaccessed for the longest time, the algorithm cannot be implemented and serves only as a standard of comparison.

First-in-first-out replacement algorithm FIFO

The page that entered memory earliest, i.e. the page that has resided in memory the longest, is evicted first. The algorithm is simple to implement: the pages brought into memory are linked into a queue in arrival order, and a pointer always points to the oldest page. However, it does not match how processes actually behave, because some pages are accessed frequently throughout execution.

The least recently used algorithm LRU

The page that has not been accessed for the longest time is evicted, on the assumption that a page not accessed in the recent past is unlikely to be accessed in the near future. The algorithm keeps an access field for each page recording the time elapsed since its last access, and on eviction selects the page with the largest value.
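A compact LRU sketch using java.util.LinkedHashMap in access order; the capacity of 3 and the reference string are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU via LinkedHashMap: access-order mode moves touched entries to the tail,
// so the head of the map is always the least recently used page.
public class LruPages<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruPages(int capacity) {
        super(16, 0.75f, true);          // true = access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;        // evict the LRU page when over capacity
    }

    public static void main(String[] args) {
        LruPages<Integer, String> frames = new LruPages<>(3);
        for (int page : new int[]{7, 0, 1, 2, 0, 3}) {
            frames.put(page, "page " + page);
        }
        System.out.println(frames.keySet()); // pages still resident, LRU first
    }
}
```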

Clock algorithm CLOCK

LRU performs close to OPT but is difficult to implement and has a large overhead, while FIFO is simple to implement but performs poorly. Operating-system designers have therefore tried many algorithms that approach the performance of LRU at a relatively small cost; these are all variants of the CLOCK algorithm.

The simple CLOCK algorithm associates an extra bit with each frame, called the use bit. When a page is first loaded into a frame, the use bit is set to 1; whenever the page is subsequently accessed, the use bit is set to 1 again. The set of frames that are candidates for replacement is treated as a circular buffer with an associated pointer; after a page is replaced, the pointer is advanced to the next frame. When a replacement is needed, the operating system scans the buffer looking for a frame whose use bit is 0, and each time it passes a frame with use bit 1 it resets that bit to 0. If all frames had use bit 0 at the start of the scan, the first frame encountered is replaced; if all use bits were 1, the pointer makes a complete circuit, clears every use bit, and then replaces the page in the frame where it started. Because the algorithm cyclically examines the state of each page, it is called the CLOCK algorithm, also known as NRU (Not Recently Used).
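A minimal Java sketch of the simple CLOCK policy described above; the frame count and the reference string are invented for illustration.

```java
// Simple CLOCK replacement: each frame has a use bit; the hand skips frames
// with use bit 1 (clearing them) and evicts the first frame with use bit 0.
public class ClockReplacement {
    private final int[] frames;      // page number held by each frame (-1 = empty)
    private final boolean[] useBit;
    private int hand = 0;

    public ClockReplacement(int frameCount) {
        frames = new int[frameCount];
        useBit = new boolean[frameCount];
        java.util.Arrays.fill(frames, -1);
    }

    public void access(int page) {
        for (int i = 0; i < frames.length; i++) {
            if (frames[i] == page) {       // hit: just set the use bit
                useBit[i] = true;
                return;
            }
        }
        while (useBit[hand]) {             // miss: advance the hand, clearing use bits
            useBit[hand] = false;
            hand = (hand + 1) % frames.length;
        }
        frames[hand] = page;               // replace the frame the hand stopped on
        useBit[hand] = true;
        hand = (hand + 1) % frames.length;
    }

    public static void main(String[] args) {
        ClockReplacement clock = new ClockReplacement(3);
        for (int page : new int[]{1, 2, 3, 2, 4, 1}) clock.access(page);
        System.out.println(java.util.Arrays.toString(clock.frames));
    }
}
```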

Improved Clock Algorithm

The improved CLOCK algorithm differs from the simple one in that it prefers to replace pages that have not been modified. This saves time, since a modified page must be written back to disk before it can be replaced.

Batch job scheduling algorithm?

First come first serve FCFS

Jobs are scheduled in the natural order in which they enter the system. The advantage of this algorithm is that it is simple to implement and fair; the disadvantage is that it ignores the overall use of system resources and often leaves the users of short jobs dissatisfied, because a short job may wait far longer than its actual running time.

Shortest Job First (SJF)

Short jobs are processed first, where "short" refers to the job's running time. Since the actual running time cannot be known before a job runs, the user must supply an estimated running time when submitting the job.

Highest Response Ratio Priority HRN

FCFS may leave the users of short jobs dissatisfied and SJF may leave the users of long jobs dissatisfied, so HRN selects the job with the highest response ratio to run. Response ratio = 1 + waiting time / required service time.
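A tiny sketch of the selection rule; the job names, waiting times, and service times are made up for the example.

```java
// Pick the job with the highest response ratio: 1 + waitingTime / serviceTime.
public class HrnDemo {
    public static void main(String[] args) {
        String[] jobs    = {"A", "B", "C"};
        double[] wait    = {10, 4, 6};   // time waited so far (illustrative)
        double[] service = {5, 1, 8};    // estimated run time (illustrative)

        int best = 0;
        for (int i = 0; i < jobs.length; i++) {
            double ratio = 1 + wait[i] / service[i];
            System.out.printf("job %s: response ratio = %.2f%n", jobs[i], ratio);
            if (ratio > 1 + wait[best] / service[best]) best = i;
        }
        System.out.println("run job " + jobs[best]);
    }
}
```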

Multi-level Queue Scheduling Algorithm

Each job is assigned an integer representing its priority. When a new job needs to be brought from the input well (the spooled input area on disk) into memory for processing, the job with the highest priority is chosen first.

Process scheduling algorithm?

A process has three basic states: ready, running, and blocked.

First in first out FIFO

Processes are selected in the order in which they enter the ready queue: whenever process scheduling occurs, the process at the head of the ready queue is put into the running state.

Time slice round robin RR

A scheduling algorithm for time-sharing systems. The basic idea of round robin is to divide CPU time into time slices; the processes in the ready queue take turns running for one time slice each. When its slice expires, a process is forced to give up the CPU and re-enters the ready queue to wait for the next scheduling, while the scheduler selects another process from the ready queue and gives it a time slice to run.
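A minimal round-robin sketch; the quantum and the burst times are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Round-robin sketch: each process runs at most one quantum, then goes to the
// back of the ready queue if it still has work left.
public class RoundRobinDemo {
    public static void main(String[] args) {
        int quantum = 2;
        int[] remaining = {5, 3, 1};                 // burst times for P0, P1, P2
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < remaining.length; i++) ready.add(i);

        while (!ready.isEmpty()) {
            int p = ready.poll();
            int run = Math.min(quantum, remaining[p]);
            remaining[p] -= run;
            System.out.println("P" + p + " runs " + run + " unit(s)");
            if (remaining[p] > 0) ready.add(p);      // not finished: back of the queue
            else System.out.println("P" + p + " finishes");
        }
    }
}
```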

Highest priority algorithm HPF

Process scheduling assigns the processor to the ready process with the highest priority. Depending on whether a running process may be interrupted, this yields either a preemptive or a non-preemptive highest-priority algorithm.

Multi-level feedback queue algorithm

Several scheduling algorithms are combined by maintaining multiple ready queues at different levels.

Disk scheduling algorithm?

First come first serve FCFS

Shortest seek time first SSTF

The request on the track closest to the current head position is serviced first, i.e. the job with the shortest seek time is executed first regardless of arrival order. This overcomes the excessively large arm movements that can occur under the first-come-first-served scheduling algorithm.

Scanning Algorithm SCAN

Starting from the current arm position and moving in the arm's current direction, the request on the cylinder closest to the arm is serviced next. When there are no more requests in that direction, the direction of arm movement is reversed. Because the arm moves much like an elevator, this is also called the elevator scheduling algorithm.
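A small sketch that computes a SCAN-style service order; the head position, initial direction, and request list are invented, and for simplicity the sketch only visits requested cylinders (strictly speaking the LOOK variant) rather than sweeping to the end of the disk.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// SCAN (elevator): service requests in the direction of arm movement,
// then reverse and service the remaining ones.
public class ScanDemo {
    public static void main(String[] args) {
        int head = 53;                                      // illustrative head position
        int[] requests = {98, 183, 37, 122, 14, 124, 65, 67};

        List<Integer> up = new ArrayList<>(), down = new ArrayList<>();
        for (int r : requests) (r >= head ? up : down).add(r);
        Collections.sort(up);                               // ascending while moving up
        down.sort(Collections.reverseOrder());              // descending on the way back

        List<Integer> order = new ArrayList<>(up);          // assume the arm moves up first
        order.addAll(down);
        System.out.println("service order: " + order);
    }
}
```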

Cyclic scanning algorithm C-SCAN

The circular scanning algorithm improves on SCAN by moving the arm in a single direction only, for example from outside to inside. Starting from the current position, the request on the cylinder closest to the arm in the direction of movement is serviced; when there are no more requests in that direction, the arm returns to the outermost track and services the request with the smallest cylinder number.

FAT (File Allocation Table)?

All disk block numbers allocated to a file are recorded in the FAT, which records the physical location of the file: each entry holds the number of the file's next block.
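A minimal sketch of following a file's block chain through the FAT; the table contents, the starting block, and the end-of-chain marker are made up for illustration.

```java
// Follow a file's chain of blocks through the FAT: each entry holds the
// number of the next block, and a sentinel marks the end of the file.
public class FatDemo {
    static final int EOF = -1;                    // illustrative end-of-chain marker

    public static void main(String[] args) {
        // fat[i] = number of the block that follows block i (illustrative contents)
        int[] fat = new int[8];
        fat[2] = 5; fat[5] = 3; fat[3] = EOF;     // a file occupying blocks 2 -> 5 -> 3

        int block = 2;                            // starting block from the directory entry
        while (block != EOF) {
            System.out.println("read block " + block);
            block = fat[block];
        }
    }
}
```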

What is the difference between an interrupt and a system call?

How do interrupts work?
Interrupt request-interrupt response-breakpoint protection-execute interrupt service routine-breakpoint recovery-interrupt return

Interrupts resolve the mismatch between processor speed and device speed and are a necessary condition for multiprogramming. Each interrupt has its own numeric identifier. When an interrupt occurs, the contents of the program counter PC and the processor status word PSW are automatically pushed onto the processor stack, and the new PC and PSW are loaded from the interrupt vector into their registers. The PC then contains the entry address of the handler for this interrupt, so control transfers to the corresponding handler. When the handler finishes, its final iret (interrupt return) instruction restores the environment of the interrupted program.

Difference: an interrupt is generated by a peripheral, unintentionally and passively, while a system call is issued by an application requesting a service from the operating system, intentionally and actively. Connection: a system call enters kernel mode from user mode through an interrupt (trap). System call process: the application issues a system call while running in user mode, a trap occurs, the CPU switches from user mode to kernel mode, and the corresponding kernel code is executed in kernel mode.

The meaning and method of virtual storage?

Based on the temporal and spatial locality of program execution, only part of a job is loaded into memory when it starts, while the rest is kept in external storage on disk. In this way a small main memory can run a job larger than itself. Commonly used virtual storage techniques include demand paging and demand segmentation.

The file systems used by Windows and Linux?

Windows: FAT32. Linux: ext2 (FAT32 is also supported).

Principles of computer composition

What is a von Neumann structure?

A structure in which program instructions and data are stored in the same memory.
The five components are: the input device, the output device, the arithmetic unit, the control unit, and the memory.
Input devices are used to enter data and programs;
the memory stores programs and data;
the arithmetic unit processes data;
the controller controls program execution;
output devices output the processing results.

The role of cache

It sits between the CPU and main memory and bridges their speed gap by holding recently used instructions and data.

What is the difference between cache and register?

A register is part of the CPU and temporarily holds operands and intermediate results, while the cache acts as a speed buffer between the fast CPU and the slower main memory.

Instruction system

CISC (Complex Instruction Set Computer) denotes microprocessors that execute a larger set of more complex instructions. RISC (Reduced Instruction Set Computer) denotes microprocessors that execute a smaller set of simpler instructions.

Pipelining

A repetitive process is divided into several sub-processes, each handled by a dedicated stage, so that different stages can work on different instructions at the same time.

Bus and I/O

The bus is the set of connection lines used for data transfer, divided into address, data, and control lines.
I/O (Input/Output) consists of two parts: the I/O devices and the I/O interfaces.
Common I/O methods include DIO (direct I/O), AIO (asynchronous I/O), and memory-mapped I/O; different methods have different implementations and performance, and an application can choose among them according to its situation.
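As one concrete example, memory-mapped I/O on a file can be done in Java with FileChannel.map; the file name and buffer size below are made up for illustration.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Memory-mapped file I/O: the file contents are mapped into the process's
// address space, so reads and writes become ordinary memory accesses.
public class MmapDemo {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("data.bin");   // illustrative file name
        try (FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buffer.put(0, (byte) 42);              // write through the mapping
            System.out.println(buffer.get(0));     // read it back
        }
    }
}
```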

DMA

DMA (Direct Memory Access) is an important feature of all modern computers: it allows hardware devices of different speeds to communicate without placing a heavy interrupt load on the CPU. Without DMA, the CPU would have to copy each piece of data from the source into a register and then write it back to the new location, and it would be unavailable for other work during that time.

Java

The characteristics of java?

Compile once, run anywhere; no raw pointers; fully object-oriented (encapsulation, inheritance, polymorphism).

Java common terms

JavaEE: Java Platform Enterprise Edition is a standard platform launched by Sun for enterprise-level applications.
J2EE: Java 2 Platform, Enterprise Edition, is the former name of JavaEE.
JDBC: Java DataBase Connectivity, Java database connection.
JNDI: Java Naming and Directory Interface, provides a directory system that enables developers to access resources by name.
EJB: Enterprise JavaBean, used to build a manageable server component.
Servlet: A server program written in Java that can dynamically modify web content.
JSP: Java Server Pages, a dynamic web page technology standard led by Sun. JSP is deployed on the web server, which can respond to client requests and return corresponding content.
RMI: Remote Method Invocation, which lets a client call methods on a remote server as if they were local objects. Unlike RPC (Remote Procedure Call), RMI applies only to Java, and the way results are returned also differs; objects passed over RMI must be serialized and converted to binary form for transmission.
XML: Extensible Markup Language, used to transmit and store data.
JMS: Java Message Service, the message-oriented-middleware API on the Java platform, used for delivering messages between distributed systems or applications.
JTA: Java Transaction API, transaction management components.
Weblogic: A commercial JavaEE server launched by Oracle

How does java handle object allocation and release?

Java divides memory into stack and heap areas. Objects created with new live on the heap and do not need to be released by the programmer; they are reclaimed automatically by the garbage collector.

JVM

Unlike C++, which requires the programmer to release memory manually, Java runs on a virtual machine, so the programmer does not free memory explicitly; instead the virtual machine's garbage collector (GC) reclaims unused objects. The virtual machine also gives Java its platform independence: on any platform, the source code is compiled into bytecode files, and that bytecode is executed by the virtual machine for the platform.

Glossary:

Memory area distribution
Virtual machine stack: stores a stack frame for each executing method; a method call running to completion corresponds to a frame being pushed onto and popped off the virtual machine stack.

Native method stack: similar to the virtual machine stack, but it serves Java's native methods. The "stack memory" people usually refer to is the virtual machine stack and the native method stack taken together.

Program counter: an indicator of the line number of the bytecode being executed by the current thread; the bytecode interpreter depends on it for its work. It occupies little memory and never throws OOM.

Heap: the so-called "heap memory", the largest area managed by the JVM, shared by all threads. Its sole purpose is to hold object instances. Under generational collection, both the young generation and the old generation live in the heap.

Method area (also called the permanent generation): not part of the heap, but shared by all threads; it stores class information loaded by the JVM, constants, static variables, code produced by the just-in-time compiler, and so on. The runtime constant pool lives here.

Also: direct memory, which does not belong to the JVM's runtime areas; it is closely related to NIO and is not limited by the size of the JVM heap.

JVM garbage collection mechanism

When does garbage collection happen?

GC is essentially a daemon thread that continuously checks the heap for unreachable objects and frees their memory, so we cannot predict exactly when GC will happen. Before reclaiming an object, GC calls the object's finalize() method.

Determining unreachable objects: the root-search (reachability analysis) algorithm. The JVM defines a set of GC Roots; when no reference chain connects an object to any GC Root, the object is unreachable.

Garbage Collection Algorithm in JVM

1. Mark-sweep algorithm

The most basic algorithm. GC judges whether an object in the heap is unreachable; if the object qualifies for cleanup (it is first checked whether its finalize() method needs to run, the criterion being whether the object overrides finalize() and whether finalize() has already been called, since finalize() is invoked at most once), the object is marked and placed into the F-Queue. At that point, unless the object re-establishes a reference to itself inside finalize(), it will be cleared.
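A hedged sketch of the "self-rescue inside finalize()" behaviour described above; note that finalize() is deprecated in modern Java and System.gc() only requests a collection, so the output and timing are not guaranteed and this is illustrative only.

```java
// Illustration only: an object can re-establish a reference to itself inside
// finalize(), escaping collection once; finalize() is invoked at most one time.
public class FinalizeEscape {
    static FinalizeEscape savedInstance = null;

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        System.out.println("finalize() called");
        savedInstance = this;              // self-rescue: become reachable again
    }

    public static void main(String[] args) throws InterruptedException {
        savedInstance = new FinalizeEscape();

        savedInstance = null;              // drop the only reference
        System.gc();                       // request (not force) a collection
        Thread.sleep(500);                 // give the finalizer thread a chance to run
        System.out.println(savedInstance != null ? "still alive" : "collected");

        savedInstance = null;              // second time: finalize() will not run again
        System.gc();
        Thread.sleep(500);
        System.out.println(savedInstance != null ? "still alive" : "collected");
    }
}
```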

2. Copying algorithm

Memory is divided into two blocks of equal size. Unreachable objects are not cleaned up immediately; instead, when the block in use fills up, the surviving objects are copied to the other block and the used block is then cleared in one pass.

Origin blog.csdn.net/weixin_44077556/article/details/107405999