Summary of Questions and Answers for 408 Computer Re-examination Professional Courses

Part One

  1. bucket sort
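    A minimal bucket-sort sketch in Java, assuming keys uniformly distributed in [0, 1); the bucket count (one per element) and the inner sort are illustrative choices:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class BucketSort {
        // Sort values assumed to lie in [0, 1): scatter into buckets,
        // sort each bucket, then concatenate the buckets in order.
        static void bucketSort(double[] a) {
            int n = a.length;
            List<List<Double>> buckets = new ArrayList<>();
            for (int i = 0; i < n; i++) buckets.add(new ArrayList<>());
            for (double x : a) buckets.get((int) (x * n)).add(x); // bucket index = floor(x * n)
            int k = 0;
            for (List<Double> b : buckets) {
                Collections.sort(b);           // any inner sort works; buckets are small on average
                for (double x : b) a[k++] = x;
            }
        }

        public static void main(String[] args) {
            double[] a = {0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68};
            bucketSort(a);
            System.out.println(java.util.Arrays.toString(a));
        }
    }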
  2. Suppose you also need to simplify the code and improve its running efficiency: how would you optimize it in C and in C++? This should not be difficult for students who have taken part in algorithm competitions. In C, for example, change recursion to iteration and cut unnecessary loop iterations by adding guard variables; in C++, for example, pass large objects by reference instead of by value.
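    A small Java sketch of the recursion-to-iteration idea (the answer talks about C, but the transformation is the same; the method names are made up for illustration):

    public class SumDemo {
        // Recursive version: each call adds a stack frame, so deep inputs risk stack overflow.
        static long sumRecursive(int n) {
            if (n == 0) return 0;
            return n + sumRecursive(n - 1);
        }

        // Iterative version: same result, constant stack usage and no call overhead.
        static long sumIterative(int n) {
            long s = 0;
            for (int i = 1; i <= n; i++) s += i;
            return s;
        }

        public static void main(String[] args) {
            System.out.println(sumRecursive(10) + " " + sumIterative(10)); // both print 55
        }
    }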
  3. Greedy algorithm, what is the principle?
    Using a greedy algorithm requires two properties: the greedy-choice property and optimal substructure. The greedy-choice property means that a globally optimal solution of the problem can be reached through a series of locally optimal choices: the algorithm always makes the choice that looks best at the moment, which may depend on choices already made but never on future choices or on the solutions of subproblems; this is an important difference between greedy algorithms and dynamic programming. Optimal substructure means that an optimal solution of the problem contains optimal solutions of its subproblems. Of the two properties, optimal substructure is usually easier to prove, typically by contradiction: assume the part of the problem's optimal solution corresponding to the subproblem is not an optimal solution of that subproblem, take an optimal solution of the subproblem instead, and use it to construct a better solution of the original problem, contradicting optimality. Note that "subproblem" here means the problem remaining after the greedy choice has been made. The greedy-choice property is usually proved by assuming some optimal solution of the problem and transforming it into another optimal solution whose first step is the greedy choice.
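    A classic problem with both properties is activity (interval) selection: repeatedly pick the compatible activity that finishes earliest. A minimal Java sketch with made-up interval data:

    import java.util.Arrays;

    public class ActivitySelection {
        // Greedy choice: always take the compatible activity that finishes earliest.
        // intervals[i] = {start, finish}; returns the maximum number of non-overlapping activities.
        static int maxActivities(int[][] intervals) {
            Arrays.sort(intervals, (x, y) -> Integer.compare(x[1], y[1])); // sort by finish time
            int count = 0, lastFinish = Integer.MIN_VALUE;
            for (int[] iv : intervals) {
                if (iv[0] >= lastFinish) {   // compatible with the previously chosen activity
                    count++;
                    lastFinish = iv[1];
                }
            }
            return count;
        }

        public static void main(String[] args) {
            int[][] acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {3, 9}, {5, 9}, {6, 10}, {8, 11}};
            System.out.println(maxActivities(acts)); // 3, e.g. (1,4), (5,7), (8,11)
        }
    }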
  4. What is a priority array?
    Each run queue has two priority arrays, one active and one expired. The priority array is defined in kernel/sched.c as a structure of type prio_array. It is a data structure that supports O(1) scheduling: for every priority of a runnable process there is a corresponding queue, and each queue is a linked list of runnable processes at that priority. The priority array also contains a priority bitmap, which makes it efficient to find the highest-priority runnable process in the system.
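    A simplified Java sketch (not the actual kernel code) of why per-priority queues plus a bitmap give O(1) selection of the highest-priority runnable task; here 64 levels are used instead of the kernel's 140, and smaller numbers mean higher priority:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class PrioArraySketch {
        static final int LEVELS = 64;                           // illustrative; the kernel covers 140 priorities
        final List<Deque<String>> queues = new ArrayList<>();   // one FIFO queue per priority level
        long bitmap = 0L;                                       // bit i set <=> queue i is non-empty

        PrioArraySketch() {
            for (int i = 0; i < LEVELS; i++) queues.add(new ArrayDeque<>());
        }

        void enqueue(int prio, String task) {
            queues.get(prio).addLast(task);
            bitmap |= 1L << prio;                               // mark this priority as having runnable tasks
        }

        String pickNext() {
            if (bitmap == 0) return null;                       // nothing runnable
            int prio = Long.numberOfTrailingZeros(bitmap);      // lowest set bit = highest priority, O(1)
            Deque<String> q = queues.get(prio);
            String task = q.pollFirst();
            if (q.isEmpty()) bitmap &= ~(1L << prio);           // clear the bit when the queue drains
            return task;
        }

        public static void main(String[] args) {
            PrioArraySketch rq = new PrioArraySketch();
            rq.enqueue(10, "editor");
            rq.enqueue(3, "audio");
            System.out.println(rq.pickNext());                  // prints "audio": priority 3 beats 10
        }
    }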
  5. What are the three indicators of OS
    throughput, response speed, resource utilization
  6. What is the most commonly used embedded operating system
    Linux Android
  7. Process characteristics: dynamic, concurrent, independent, asynchronous, structured (a process consists of a program segment, a data segment, and a process control block, PCB).
  8. Each process has its own independent code and data space (program context), so switching between processes carries a large overhead. Threads can be regarded as lightweight processes: threads of the same process share the code and data space, while each thread has its own independent run-time stack and program counter (PC), so the overhead of switching between threads is small.
  9. The resources owned exclusively by each thread of a process (rather than shared) are:
    a. The stack is exclusive to each thread; in fact each thread also keeps its own copy of the register context, including the program counter (PC).
  10. Which process scheduling algorithm is mainly used in real-time systems?
    Preemptive Priority Scheduling
  11. Necessary conditions for deadlock
    Mutual exclusion, non-preemption, request and hold, circular wait
  12. The BIOS performs its checks and initialization and finds the MBR; the boot program in the MBR reads the partition table and finds the primary partition; the system boot program of that partition then loads and starts the operating system.
  13. In immediate addressing the instruction directly gives the operand itself; in direct addressing the instruction gives the operand's address.
  14. bus cycle. Usually, the time required for the CPU to
    access the outside of the microprocessor (memory or I/O port) through the bus is called a bus cycle.
  15. machine cycle. In the computer, in order to facilitate management, the execution process of an instruction is often divided into several stages, and each stage completes a job. For example, instruction fetch, memory read, memory write, etc., each of these tasks is called a basic operation. The time required to complete a basic operation is called a machine cycle.
  16. What is CDN?
    CDN stands for Content Delivery Network, a content distribution network built on top of the Internet. Relying on edge servers deployed in many locations and on the central platform's load balancing, content distribution, and scheduling modules, it lets users obtain the content they need from a nearby server, reducing network congestion and improving the response speed and hit rate of user access. The key technologies of CDN are content storage and content distribution.
  17. What is the difference between JSP and html?
    1. HTML can be opened directly, while JSP can only be opened by deploying it to a server such as Tomcat.
    2. By definition, HTML pages are static pages that can be run directly. JSP pages are dynamic pages that need to be converted into servlets when running.
    3. Their headers are different. A JSP page has a header directive such as <%@ page language="java" import="java.util.*" pageEncoding="gbk" %>, which specifies the encoding and imports packages.
    4. JSP uses <% %> to embed Java code; HTML has no <% %>.
    5. You cannot write Java in HTML. Reason: JSP pages are dynamic and HTML is static, so embedded Java is not supported.
    Definition:
    1. JSP: The full name is Java Server Pages, and the Chinese name is java server page. It is basically a simplified Servlet design. It is a dynamic web technology
    standard initiated by Sun Microsystems and participated by many companies.
    2. HTML: HyperText Markup Language is an application of the Standard Generalized Markup Language (SGML); it is also a specification and a standard that marks up the parts of a web page to be displayed by means of markup tags.
    Extended information:
    The connection between jsp and servlet:
    JSP is an extension of Servlet technology and is essentially a simplified way of writing Servlets; a compiled JSP is itself a kind of servlet.
    The main difference between Servlet and JSP is: Servlet's application logic is in the Java file and completely separated from the HTML in the presentation layer. In the case of JSP, Java and HTML can be combined into one file with a .jsp extension.
    JSP focuses on the view, and Servlet is mainly used to control logic. Servlet is more like a Controller for control.
  18. There are six object-oriented principles: the Open-Closed Principle (OCP), the Liskov Substitution Principle (LSP), the Dependency Inversion Principle (DIP), the Interface Segregation Principle (ISP), the Single Responsibility Principle (SRP), and the Law of Demeter (LoD).
  19. Where is the memory allocated by the malloc function?
    In C language, the memory space is divided into three areas according to the time (life cycle) of the data in the memory:
    1) Program area: used to store the code of the program, that is, the binary code of the program.
    2) Static storage area: used to store global variables and static variables, the space for these variables has been allocated when the program is compiled.
    3) Dynamic storage area: memory allocated while the program runs, divided into the heap and the stack. Heap: used for dynamic memory allocation; malloc and the other allocation functions obtain memory from the heap at run time, and in C this memory can only be reached through pointers. Stack: holds local variables and function parameters while a function executes; these areas are released automatically when the function returns. So the memory returned by malloc comes from the heap.
  20. Java development patterns
    Generally speaking, design patterns fall into three categories:
    Creational patterns, five in total: Factory Method, Abstract Factory, Singleton, Builder, and Prototype.
    Structural patterns, seven in total: Adapter, Decorator, Proxy, Facade, Bridge, Composite, and Flyweight.
    Behavioral patterns, eleven in total: Strategy, Template Method, Observer, Iterator, Chain of Responsibility, Command, Memento, State, Visitor, Mediator, and Interpreter.
    Two further categories are sometimes added: concurrency patterns and the thread-pool pattern.
    Second, the six principles of design patterns:
    1. Open-Closed Principle
    The Open-Closed Principle means open for extension, closed for modification. When the program needs to be extended, the existing code should not have to be modified, achieving a hot-plug-like effect. In one sentence: it keeps the program extensible and easy to maintain and upgrade. To achieve this we rely on interfaces and abstract classes, which come up again in the concrete patterns below.
    2. Liskov Substitution Principle (LSP)
    One of the basic principles of object-oriented design: wherever a base class can appear, a subclass must be able to appear in its place. LSP is the cornerstone of reuse through inheritance; only when a derived class can replace its base class without affecting the behaviour of the software unit can the base class be truly reused, and the derived class may also add new behaviour on top of it. The Liskov Substitution Principle complements the Open-Closed Principle: the key step in realizing open-closed design is abstraction, the inheritance relationship between base class and subclass is the concrete realization of that abstraction, and LSP specifies the concrete rules for realizing it.
    —— From Baidu Encyclopedia
    3. Dependency Inversion Principle
    This is the basis of the Open-Closed Principle. Its content: program to interfaces, that is, depend on abstractions rather than on concrete implementations.
    4. Interface Segregation Principle (Interface Segregation Principle)
    This principle means: using multiple isolated interfaces is better than using a single interface. It also means to reduce the degree of coupling between classes. From here we can see that the design pattern is actually a software design idea, starting from a large software architecture, for the convenience of upgrading and maintenance. So it appears many times above: reduce dependency and reduce coupling.
    5. Law of Demeter (Principle of Least Knowledge)
    It is called the principle of least knowledge because it says an entity should interact with other entities as little as possible, keeping the system's functional modules relatively independent.
    6. Composite Reuse Principle
    Prefer composition/aggregation to inheritance wherever possible.
    1. Factory pattern (simple factory)
    Multiple classes implement one abstract interface, and a factory returns a concrete instance according to a condition.
    2. Singleton pattern
    For example, there is only one database connection object in the whole application. 3. Adapter pattern: the adapter converts the interface of a class into another interface that the client expects, in order to eliminate compatibility problems caused by mismatched interfaces.
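    A minimal Java sketch of two of the patterns above: a simple factory that returns a concrete instance by condition, and an eagerly created singleton; all class names are invented for illustration.

    // Simple factory: several classes implement one abstract interface,
    // and the factory returns a concrete instance according to a condition.
    interface Shape { double area(); }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Square implements Shape {
        private final double s;
        Square(double s) { this.s = s; }
        public double area() { return s * s; }
    }

    class ShapeFactory {
        static Shape create(String kind, double size) {
            switch (kind) {
                case "circle": return new Circle(size);
                case "square": return new Square(size);
                default: throw new IllegalArgumentException("unknown shape: " + kind);
            }
        }
    }

    // Singleton: e.g. one shared connection manager for the whole application.
    class ConnectionManager {
        private static final ConnectionManager INSTANCE = new ConnectionManager();
        private ConnectionManager() { }                    // private constructor blocks outside creation
        static ConnectionManager getInstance() { return INSTANCE; }
    }

    public class PatternDemo {
        public static void main(String[] args) {
            System.out.println(ShapeFactory.create("circle", 1.0).area());               // ~3.14159
            System.out.println(ConnectionManager.getInstance() == ConnectionManager.getInstance()); // true
        }
    }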
  21. RUP (Rational Unified Process), the Unified Software Development Process, is an object-oriented, web-enabled program development methodology. It is a software engineering method created by Rational Software Corporation (later acquired by IBM). RUP describes how to apply commercially proven, reliable methods to develop and deploy software. It is a heavyweight process (also called a "thick" methodology) and is therefore especially suitable for large software teams developing large projects.
    RUP has three most important features: 1) Software development is an iterative process, 2) Software development is driven by Use Case, 3) Software development is centered on Architectural Design.
  22. Agile software development, also known as agile development, is a software development approach that has attracted wide attention since the 1990s and is characterized by the ability to respond to rapidly changing requirements. The individual agile methods differ in their names, concepts, processes, and terminology, but compared with "non-agile" approaches they all emphasize close collaboration between programmers and business experts, face-to-face communication (considered more effective than written documents), frequent delivery of new software releases, compact self-organizing teams, coding and team-organization practices that cope well with changing requirements, and greater attention to the role of people in software development. Agile software development describes a set of values and principles under which requirements and solutions evolve through self-organizing, cross-functional teams. It advocates moderate planning, evolutionary development, early delivery and continuous improvement, and encourages rapid and flexible response to change. These principles underpin
    the definition and continual evolution of many software development methodologies.
    Extreme Programming (XP) was proposed by Kent Beck in 1996. It is a software engineering methodology and one of the most productive agile methods. Like other agile methodologies, Extreme Programming differs fundamentally from traditional methodologies in that it emphasizes adaptability rather than predictability. XP is a lightweight, nimble approach to software development, yet at the same time a rigorous and thoughtful one. Its foundations and values are communication, simplicity, feedback, and courage; that is, any software project can be improved in four ways: strengthen communication, start simple, seek feedback, and face reality honestly.
    XP is a near-spiral development method that decomposes the complex development process into relatively simple small cycles. Through active communication, feedback, and a series of other practices, developers and customers stay clear about the development progress, changes, problems to be solved, and potential difficulties, and can adjust the development process in time according to the actual situation.
    The main goal of Extreme Programming is to reduce the cost of changing requirements. In the traditional system development method, system requirements are determined at the beginning of project development and remain unchanged in the subsequent development process. This means that changes in requirements (and such changes in requirements are unavoidable in some extremely fast-growing fields) when the project development enters a later stage will lead to a rapid increase in development costs. Extreme Programming achieves the purpose of reducing the cost of change by introducing concepts such as basic values, principles, and methods. A system development project using extreme programming method will be more flexible in response to changes in requirements.
  23. There are 6 database design phases
    : system requirements analysis phase;
    conceptual structure design phase;
    logical structure design phase;
    physical structure design phase;
    database implementation phase;
    database operation and maintenance phase;
  24. Cursor: a mechanism for processing a query result set row by row rather than as a single unit. A cursor can be positioned on a specific row of the result set, retrieve one or more rows starting from the current row, and modify the current row. Cursors are not used often, but they are important when data has to be processed record by record.
  25. What is ACID?
    ACID is the abbreviation of the four basic properties required for database transactions to execute correctly:
    Atomicity: all operations in a transaction either complete entirely or not at all; the transaction cannot stop partway. If an error occurs during execution, the transaction is rolled back to the state before it started, as if it had never run.
    Consistency: the transaction must always keep the database in a consistent state.
    Isolation: concurrent transactions are isolated from one another, as if only one request were operating on the same data at a time.
    Durability: once a transaction completes, its changes to the database are saved permanently and will not be rolled back. A database that supports transactions must have these four properties; otherwise correctness cannot be guaranteed during transaction processing and the transaction may not satisfy its requester.
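    A hedged JDBC sketch of atomicity in practice: either both updates commit, or on error both are rolled back. The connection URL, table, and column names are assumptions for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferDemo {
        // Moves money between two accounts as one transaction: both UPDATEs succeed or neither does.
        static void transfer(String url, int from, int to, long amount) throws SQLException {
            try (Connection con = DriverManager.getConnection(url)) {
                con.setAutoCommit(false);                      // start an explicit transaction
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                    debit.setLong(1, amount);  debit.setInt(2, from);  debit.executeUpdate();
                    credit.setLong(1, amount); credit.setInt(2, to);   credit.executeUpdate();
                    con.commit();                              // make both changes durable together
                } catch (SQLException e) {
                    con.rollback();                            // atomicity: undo the partial work
                    throw e;
                }
            }
        }
    }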
  26. Discrete mathematics: what is a partially ordered set?
    Discrete mathematics: conceptual questions such as groups and related structures.
    How can 12306 handle many users booking tickets at the same time?
    How can a server actively initiate access to a client?
    If the telecom server crashes, what should be done so that the websites that depend on it can still be used?
    Access is slow: besides increasing bandwidth, what else can be done?
    You studied software engineering and computer science & technology: what is the difference between these two majors?
  27. What is Maven used for specifically? I answered version management, the local repository, jar package dependencies, and so on.
    Maven is a build tool. After a project is configured with Maven, simple commands such as mvn clean install are enough, and Maven takes care of the tedious build work. Maven is cross-platform, eliminates duplication in the build as far as possible, and standardizes the build process, so all projects are simple and consistent and the learning cost is reduced. In short, as a build tool Maven not only automates the build but also abstracts the build process and provides implementations of the build tasks; being cross-platform with a uniform interface is enough to make it an excellent and popular build tool. But Maven is more than a build tool: it is also a dependency-management and project-information-management tool, and it provides a central repository from which components can be downloaded automatically. A further benefit is that Maven lays down established conventions for the project directory structure, test-case naming, and so on; as long as these mature conventions are followed, users avoid extra learning costs when switching between projects. In other words, convention over configuration.
  28. I was asked about Spring MVC again: what does MVC mean? I answered Model (model layer), View (view layer), Controller (control layer), and described the concrete flow.
    MVC is short for Model-View-Controller, a software design paradigm. It organizes code by separating business logic, data, and interface display, and gathers the business logic into one component, so that when the interface and user interaction need to be improved or customized the business logic does not have to be rewritten, reducing coding time.
    V is the View: the interface the user sees and interacts with, for example a web page built from HTML elements or a software client interface. One benefit of MVC is that it can handle many different views for one application. No real processing happens in the view; it only outputs data and lets the user manipulate it.
    M is the Model: the model expresses the business rules. Of the three MVC components, the model carries the most processing work. The data the model returns is neutral and independent of any data format, so one model can supply data to several views; because model code is reused by several views, code duplication is reduced.
    C is the Controller: the controller accepts user input and calls the model and the view to fulfil the user's request. The controller itself does not output anything or do any processing; it only receives the request, decides which model component to invoke to handle it, and then decides which view should display the returned data.
    The most typical MVC stack is jsp+servlet+javabean. A JavaBean, as the model, can act both as a data model that encapsulates business data and as a business-logic model that contains the application's business operations; the data model stores or transfers business data, while the business-logic model performs the specific processing after receiving an update request from the controller and returns the execution result.
    JSP, as the presentation layer, provides the pages that display data to the user, provides forms for user requests, and at the appropriate moment (e.g. a button click) sends a request to the controller asking for a model update. The Servlet, as the controller, receives the request submitted by the user, extracts the request data and converts it into the data model required by the business model, calls the corresponding business method to perform the update, and then selects the view to return according to the result of the business execution.
    Advantages of MVC:
    1. Low coupling
    The view layer and the business layer are separated, which allows changing the view-layer code without recompiling the model and controller code. Likewise, a change in the business flow or business rules only requires changing the MVC model layer; because the model is separated from the controller and the view, it is easy to change the application's data layer and business rules.
    2. High reusability
    The MVC pattern allows views of different styles to access the same server-side code, because multiple views can share one model; these views may include any web (HTTP) browser or wireless (WAP) browser. For example, a user can order a product from a computer or from a mobile phone: although the ordering interface differs, the way the order is processed is the same. Because the data returned by the model is unformatted, the same component can be used by different interfaces.
    3. Fast deployment, low life cycle cost
    MVC reduces the technical content of developing and maintaining user interface. Using the MVC pattern reduces development time considerably, it allows programmers (Java developers) to focus on business logic, and interface programmers (HTML and JSP developers) to focus on presentation.
    4. High maintainability
    The separation of view layer and business logic layer also makes WEB applications easier to maintain and modify.
    Disadvantages of MVC:
    1. It is more complicated to fully understand MVC.
    MVC has not been around for very long and many developers have insufficient practical experience with it, so fully understanding and mastering MVC is not an easy process.
    2. Debugging is difficult.
    Because the model and view should be strictly separated, it also brings some difficulties to debug the application, and each component needs to be thoroughly tested before it can be used.
    3. Not suitable for small and medium-sized applications
    In a small and medium-sized application, mandatory use of MVC for development often takes a lot of time, does not reflect the advantages of MVC, and makes development cumbersome.
    4. Increase the complexity of the system structure and implementation
    For a simple interface, strictly follow MVC to separate the model, view and controller, which will increase the complexity of the structure, and may generate too many update operations, reducing operating efficiency.
    5. The view and the controller are too tightly coupled. Although they are separate components, they are closely related: a view without a controller has very limited use, and vice versa, which hinders their independent reuse.
    6. The view's access to model data is inefficient. Depending on the model's interface, the view may need several calls to obtain enough display data, and unnecessarily frequent access to unchanged data also hurts performance.
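    To make the jsp+servlet+javabean flow described above concrete, here is a minimal controller sketch using the standard Servlet API; the URL pattern, attribute name, and JSP file name are illustrative assumptions.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/product")                              // Controller: receives the request
    public class ProductController extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String id = req.getParameter("id");          // read request data
            String product = "Product #" + id;           // model work would normally go to a JavaBean/service
            req.setAttribute("product", product);        // hand the result to the view
            req.getRequestDispatcher("/product.jsp").forward(req, resp); // View: the JSP renders the data
        }
    }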
  29. How does the server achieve load balancing
  30. Database: converting the ER model into the relational model belongs to which stage of database design?
    The fourth step, logical structure design. The full sequence is:
    The first step, planning. The main task of the planning stage is to analyze the necessity and feasibility of establishing a database. Such as system survey (that is, a comprehensive survey of the enterprise, drawing an organizational hierarchy diagram to understand the organizational structure of the enterprise), feasibility analysis, determining the overall goal of the DBS (database system) and formulating a project development plan.
    The second step is requirements analysis. In this stage the entire application is investigated comprehensively and in detail, the goals of the organization are determined, the basic data supporting the system's overall design goals and the requirements on that data are collected, user needs are determined, and these requirements are written up as a requirements analysis report acceptable to both the users and the database designers. The work of this stage includes: analysing user activities to produce business flow charts; determining the system scope to produce a system scope diagram; analysing the data involved in user activities to produce data flow diagrams; and analysing the system data to produce a data dictionary.
    The third step is conceptual design. The goal of conceptual design is to generate a conceptual database structure that reflects the information needs of an enterprise organization, that is, to design a conceptual model that is independent of computer hardware and DBMS (database management system). The ER model is the main design tool.
    The fourth step is logical structure design. Its purpose is to transform the global ER schema designed in the conceptual design stage into a logical structure (including database schema and external schema) that conforms to the data model supported by the DBMS on the selected specific machine.
    The fifth step is the physical design of the database: the process of selecting the physical structure most suitable for the application environment for a given data model. The physical structure of a database mainly refers to the storage record format, the arrangement of stored records, and the access methods, and it depends entirely on the given hardware environment and the database product.
    The sixth step is the implementation of the database. There are three main tasks in this stage: 1. Establish the actual database structure. 2. Load the test data to debug the application program. 3. Load the actual data and enter the trial operation state.
    The seventh step is database operation and maintenance. The formal operation of the database system marks the end of design and application development and the beginning of the maintenance phase, which has 4 tasks: 1. maintain the security and integrity of the database; 2. monitor and improve database performance; 3. extend the database's existing functions; 4. correct system errors found during operation in a timely manner.
  31. Database, what are the types of data models, and name at least two characteristics
    1. Non-relational models
    Hierarchical model: links between records are realized through pointers, and search efficiency is high.
    Network model: a node may have more than one parent, and more than one node may have no parent.
    2. Relational model: the concepts are simple and the structure is clear, making it easy for users to learn and use.
    3. Object-oriented model
    4. Object-relational model
  32. How are database tables classified?
  33. How to solve the problem of data redundancy in the database
  34. How do two devices with asynchronous clocks communicate?
    Asynchronous communication
    The response methods of asynchronous communication can be divided into
    1. Non-interlocked: after the master module sends the request signal, it does not wait for a response from the slave module; after a fixed period of time it assumes the slave has received the request and withdraws the request signal.
    2. Semi-interlock: The master module sends out a request signal, and must wait for the response signal from the slave module before canceling its request signal.
    3. Fully interlocked: the master module sends the request signal and must wait for the slave's reply before withdrawing the request; the slave, after sending its reply, must wait until it knows the master's request has been withdrawn before it withdraws its reply signal.
  35. What is a baseband signal? What is a broadband signal?
    Baseband signal: The digital signal 1 or 0 is directly represented by different voltages, and then sent to the circuit for transmission.
    Broadband signal: an analog signal produced by modulating the baseband signal, used for frequency-division multiplexing. Modulation shifts the baseband signal's spectrum to a higher frequency band; since each baseband signal is moved to a different band, the signals do not interfere with one another when combined, so several digital signals can be carried on one cable, improving line utilization.
  36. The difference between Java and C++ (the interviewer said he wanted to ask about pointers/addresses).
    1. Pointer
    The Java language does not give programmers pointers with which to access memory directly, and it adds automatic memory management, thus effectively preventing the pointer errors common in C/C++, such as crashes caused by wild pointers. This does not mean Java has no pointers: the virtual machine still uses them internally, but they are not exposed to the outside, which benefits the security of Java programs.
    2. Multiple Inheritance
    C++ supports multiple inheritance, a feature that allows a class to be derived from several parent classes. Although multiple inheritance is powerful, it is complicated to use, causes many problems, and is not easy for the compiler to implement. Java does not support multiple inheritance of classes, but allows a class to implement multiple interfaces (extends + implements), achieving the effect of C++ multiple inheritance while avoiding many of its inconveniences.
    3. Data Types and Classes
    Java is a fully object-oriented language: all functions and variables must be part of a class, and apart from the primitive data types everything, including arrays, is treated as an object. Objects combine data and methods and encapsulate them in classes, so each object realizes its own characteristics and behaviour. C++ allows functions and variables to be defined globally. In addition, Java drops the C/C++ struct and union types, removing unnecessary complications.
    4. Automatic memory management
    All objects in a Java program are created on the heap with the new operator, which resembles the C++ new operator. The following statements create an object of class Read and then call the object's work method:
    Read r = new Read();
    r.work();
    The statement Read r = new Read(); creates an instance of Read on the heap. Java performs garbage collection automatically and does not require the programmer to delete objects, whereas in C++ the program must release memory itself, which adds to the programmer's burden. When an object is no longer used in Java, the garbage collector marks it for reclamation; the collector runs in the background as a thread and works during idle time.
    5. Operator Overloading
    Java does not support operator overloading, which is considered a prominent feature of C++. Although classes in Java can generally achieve the same functionality, much of the convenience of operator overloading is lost; Java omits it in order to keep the language as simple as possible.
    6. Preprocessing features
    Java does not support preprocessing. C/C++ compilation includes a preprocessing stage, the well-known preprocessor, which is convenient for developers but adds complexity to compilation. The Java virtual machine has no preprocessor; the import statement Java provides plays a role similar to that of the C++ preprocessor's include facility.
    7. Java does not support default function parameters, but c++ supports them.
    In C, code is organized into functions, and functions can access the program's global variables. C++ adds classes and class methods, which are functions attached to classes; C++ class methods are very similar to Java class methods. However, because C++ still supports C, developers cannot be prevented from using plain functions, and the resulting mixture of functions and methods can make programs confusing.
    Java has no free-standing functions. As a purer object-oriented language than C++, Java forces developers to put all routines inside classes; in practice, implementing routines as methods encourages developers to organize their code better.
    8. Strings
    C and C++ do not have a string type; C/C++ programs use a null terminator to mark the end of a string. In Java, strings are implemented as class objects (String and StringBuffer), and these classes are at the core of the Java language. Implementing strings as class objects has the following advantages:
    (1) the way strings are created and their elements accessed is consistent throughout the system;
    (2) the string classes are defined as part of the Java language itself, not as an add-on extension;
    (3) Java strings are checked at run time, which helps eliminate some run-time errors;
    (4) strings can be concatenated with "+".
    9 "goto statement
    "terrible" goto statement is a "relic" of c and c++, it is a technically legal part of the language, referencing the goto statement causes confusion in the program structure, it is not easy to understand, the goto statement should be used for unconditional transfer Subroutine and multi-structure branch technology. In view of the general reason, Java does not provide goto statement. Although it specifies goto as a keyword, it does not support its use, so that the program is concise and easy to read. l0. Type conversion
    in
    c and c ten ten Sometimes there is an implicit conversion of data types, which involves the problem of automatic type conversion. For example, in c++, you can assign a floating point value to an integer variable and remove its mantissa. Java does not support c++ If necessary, the program must explicitly perform mandatory type conversion.
    11. Exceptions
    The exception mechanism in Java is used to catch exception events and improve the system's fault tolerance:
    try { // code that may throw an exception
    } catch (ExceptionType name) { // handling
    }
    where ExceptionType is the exception type to catch. C++ does not have as convenient a mechanism.
  37. There are three layers in the network with error checking, what method is used in each layer, and why this method is used.
    Data link layer: frames transmitted at the data link layer widely use cyclic redundancy check (CRC) error detection, in order to achieve correct frame transmission over each data link.
    Network layer: a header checksum verifies whether the header of each IP packet is correct, supporting correct end-to-end delivery of packets.
    Transport layer: checksums together with acknowledgment and window mechanisms are used to achieve correct end-to-end delivery of messages; this requires packet sequencing, flow control, error control, and similar functions.
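    A minimal bitwise CRC sketch in Java to illustrate the data-link-layer check; CRC-8 with generator polynomial 0x07 is used here for brevity, whereas real link protocols typically use standardized polynomials such as CRC-32.

    public class Crc8Demo {
        // Computes CRC-8 (polynomial x^8 + x^2 + x + 1, i.e. 0x07) over the given bytes.
        static int crc8(byte[] data) {
            int crc = 0;
            for (byte b : data) {
                crc ^= (b & 0xFF);
                for (int i = 0; i < 8; i++) {
                    // If the top bit is set, shift out and XOR in the generator polynomial.
                    crc = ((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) & 0xFF : (crc << 1) & 0xFF;
                }
            }
            return crc;
        }

        public static void main(String[] args) {
            byte[] frame = "hello".getBytes();
            System.out.printf("CRC-8 of frame = 0x%02X%n", crc8(frame));
            // The receiver recomputes the CRC over the same bytes and compares it with the
            // transmitted check value; a mismatch means the frame was damaged in transit.
        }
    }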
  38. In the layered network architecture, which has the stronger error-detection capability, the data link layer or the network layer?
    The data link layer is stronger.
  39. What happened after the computer was turned on?
    The entire startup process of the computer is divided into four stages.
    1. The first stage: BIOS
    In the early 1970s read-only memory (ROM) was invented, and the boot program was flashed into a ROM chip. The first thing the computer does after power-on is read the program in this chip, which is called the Basic Input/Output System, or BIOS for short.
    1.1 Hardware self-test
    The BIOS program first checks whether the computer hardware can meet the basic conditions of operation, which is called "hardware self-test" (Power-On Self-Test), abbreviated as POST. If there is a problem with the hardware, the motherboard will beep with different meanings and the startup will be suspended. If there is no problem, the screen will display CPU, memory, hard disk and other information.
    1.2 Boot sequence
    After the hardware self-test completes, the BIOS hands control to the next stage of the boot process. At this point the BIOS needs to know on which device the next-stage boot program is stored, so it keeps a ranking of external storage devices; the device listed first is the one that receives control first. This ordering is called the boot sequence.
    Open the BIOS operation interface, and there is an item in it that is "set the boot sequence".
    2. The second stage: the master boot record
    The BIOS hands control to the first storage device in the boot sequence. The computer then reads the first sector of that device, i.e. the first 512 bytes. If the last two of these 512 bytes are 0x55 and 0xAA, the device can be used for booting; if not, control is handed to the next device in the boot sequence. These first 512 bytes are called the Master Boot Record (MBR).
    2.1 Structure of the Master Boot Record
    The "Master Boot Record" is only 512 bytes, so it can't hold too many things. Its main function is to tell the computer where to find the operating system on the hard disk.
    2.2 Partition Table
    There are many benefits of hard disk partitioning. Considering that each zone can have a different operating system installed, the "Master Boot Record" must therefore know which zone to transfer control to. The length of the partition table is only 64 bytes, and it is divided into four entries, each of which is 16 bytes. Therefore, a hard disk can only be divided into four primary partitions at most, also called "primary partitions".
    3. The third stage: hard disk startup
    At this time, the control of the computer will be transferred to a partition of the hard disk.
    4. The fourth stage: the operating system
    After control is handed to the operating system, the OS kernel is first loaded into memory. At that point the whole startup process is complete.
  40. How many types of data are related? Give an example of RAW? What technology can be used to solve data correlation, and what kind of correlation is specifically solved.
    Dependence is defined between instructions, i.e. the dependency relationship that exists between two instructions. There are three kinds: data dependence, name dependence, and control dependence.
    Data hazards are classified as: read-after-write (RAW), write-after-write (WAW), and write-after-read (WAR).
    Read-after-write (RAW): true data dependence. The later instruction depends on the result of the earlier one, so the result must be written before the later instruction reads it; if it is read first, the old value is obtained, which is not the correct input for the operation.
    Write-after-write (WAW): output dependence. The value that should finally remain is the result of the later instruction, but if that instruction executes and writes first and the earlier instruction's write happens afterwards, the output is wrong.
    Write-after-read (WAR): name dependence (antidependence). Before an instruction reads a datum, a later instruction writes its result into the same location, so what is read is no longer the original value but the result of the later instruction.
    (The usual solutions: forwarding/bypassing and pipeline stalls handle RAW hazards, while register renaming eliminates the WAR and WAW name hazards.)
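    The three hazards can be illustrated with three ordinary assignments (plain Java here; between machine instructions the same relationships hold for registers and memory):

    public class HazardDemo {
        public static void main(String[] args) {
            int a = 1, b = 2, x, y;

            x = a + b;   // I1 writes x
            y = x + 1;   // I2 reads x  -> RAW (true dependence): I2 must see I1's result
            x = b * 2;   // I3 writes x -> WAW with I1 (output dependence): the final x must be I3's value
                         //             -> WAR with I2 (antidependence): I3 must not write x before I2 reads it
            System.out.println(x + " " + y); // prints "4 4" only if the original order is respected
        }
    }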
  41. Differences between VPN and NAT
    Both VPN and NAT work by rebuilding an IP header, but in different ways. A VPN encrypts the internal IP datagram and encapsulates it as the data part of an external IP datagram; its main purpose is to keep the data confidential. NAT performs pure address translation; its purpose is to let hosts that use internal (private) addresses communicate with the external network.
  42. Name at least three types of database models (hierarchical, network, and relational) and briefly explain each.
    Hierarchical model: a tree structure represents the entities and the relationships between them; exactly one node has no parent (the root) and every other node has exactly one parent. It can directly express only one-to-many relationships.
    Advantages: high efficiency and a clear structure; performance is better than the relational model and not lower than the network model. Disadvantages: many real-world relationships are not hierarchical; many-to-many relationships, or a node with several parents, are hard to represent.
    Network model: a tree cannot directly represent non-hierarchical relationships, but the network model can. It allows more than one node without a parent and allows a node to have several parents, so it describes the real world more directly.
    Advantages: a more direct description of the real world, good performance, and high access efficiency. Disadvantages: the structure is complex and hard to master, and programmers must understand details of the storage structure when writing code, which increases the programming burden.
    Relational model: Generally speaking, a relationship is a standardized two-dimensional table. Both entities and the links between entities are represented by relationships, and the results of data retrieval and update are also relationships.
    Advantages: the concepts are uniform and easy for users to understand and use; access paths are transparent to the user, giving higher data independence and security and simplifying the programmer's work. Disadvantages: query efficiency is often lower than with the formatted (hierarchical/network) data models, and the performance optimizations needed increase the difficulty of developing the DBMS.
  43. Briefly describe the difference between a relation and a relational schema.
    The relationship is essentially a two-dimensional table, the relationship schema is a description of the relationship, and the relationship is the state or content of the relationship schema at a certain moment.
    Relational schemas are static and stable, while relationships are dynamic and change over time because relational operations are constantly updating data in the database.
    In layman's terms: a relation is a two-dimensional table, the relation schema is the description of the table (its header), the relation name is the table name, a tuple is a row, and an attribute is a column (an attribute value is one column's value in a record).
  44. The integrity of the relationship (entity integrity, referential integrity, user-defined) and the constraint of the database primary key
    Entity integrity: the primary key of a relation cannot be null; if the primary key consists of several attributes, none of them may be null. Entities are uniquely identified by their primary keys.
    Referential integrity: a foreign key in a relation either is wholly null (every attribute in the foreign-key group is null) or equals a primary-key value of the relation it references.
    User-defined integrity: constraints specific to a particular application, defined by the user on top of the relational database.
  45. What are DDL, DML, DCL? (What kinds of database languages ​​are there?)
    Data Definition Language (DDL): Create, Drop, Alter
    Data Manipulation Language (DML): Insert, Update, Delete
    Data Control Language (DCL): Grant, Revoke
    Data Query Language: Select
  46. Which sorting algorithm do you think is the best?
    There is no best, only the most suitable. If n is small, simple algorithms such as simple selection sort or straight insertion sort work well. If the data are already nearly ordered by key, straight insertion or bubble sort is a good choice. If n is large, algorithms with better time complexity should be considered: quicksort is regarded as the best of the comparison-based internal sorts, and when the keys are randomly distributed its average time is the shortest. If auxiliary space is limited, heap sort can be considered; heap sort is also well suited to finding the few largest elements of a large data set. If the sort must be stable, merge sort can be considered. If n is large and the keys consist of few components that can be decomposed, radix sort is a good choice. When the records themselves are large, a linked list can be used to avoid moving them.
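    Since the answer singles out quicksort as the best average-case comparison sort, a minimal Java sketch (recursive form with Lomuto partitioning):

    import java.util.Arrays;

    public class QuickSortDemo {
        // Lomuto partition: everything <= pivot ends up left of the pivot's final position.
        static int partition(int[] a, int lo, int hi) {
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; j++) {
                if (a[j] <= pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
            }
            int t = a[i]; a[i] = a[hi]; a[hi] = t;
            return i;
        }

        static void quickSort(int[] a, int lo, int hi) {
            if (lo >= hi) return;
            int p = partition(a, lo, hi);
            quickSort(a, lo, p - 1);      // average recursion depth O(log n); worst case on sorted input
            quickSort(a, p + 1, hi);
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 9, 1, 7, 3};
            quickSort(a, 0, a.length - 1);
            System.out.println(Arrays.toString(a)); // [1, 2, 3, 5, 7, 9]
        }
    }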
  47. How many ways are there for inter-process communication?
    Shared storage: There is a shared space that can be directly accessed between communication processes. Information exchange between processes is realized by reading and writing this space. When writing/reading operations on the shared space, you need to use a synchronous mutual exclusion tool .
    Message passing: Data exchange is based on formatted messages, and the message passing methods provided by the operating system include direct communication and indirect communication.
    Pipe communication: a pipe is a shared file connecting a reading process and a writing process to realize communication between them. The sending process writes a large amount of data into the pipe as a character stream, and the receiving process reads data out of the pipe. To coordinate the two sides, the pipe mechanism must provide mutual exclusion, synchronization, and a way to confirm that the other party exists.
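    A hedged sketch of pipe-style communication from Java: the parent writes a character stream into a child process's stdin pipe and reads its stdout pipe back; it assumes a Unix-like system where the external sort command exists.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;

    public class PipeDemo {
        public static void main(String[] args) throws Exception {
            Process child = new ProcessBuilder("sort").start();   // the OS creates stdin/stdout pipes

            // Writing side: send data into the pipe as a character stream.
            try (BufferedWriter toChild =
                     new BufferedWriter(new OutputStreamWriter(child.getOutputStream()))) {
                toChild.write("banana\napple\ncherry\n");
            }

            // Reading side: consume whatever the child wrote to its end of the pipe.
            try (BufferedReader fromChild =
                     new BufferedReader(new InputStreamReader(child.getInputStream()))) {
                String line;
                while ((line = fromChild.readLine()) != null) System.out.println(line);
            }
            child.waitFor();   // prints apple, banana, cherry
        }
    }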
  48. Triple play, which triple play?
    Telecommunications network: The main business is telephone, fax, etc.
    Cable television network: A one-way television program transmission network.
    Computer network: We now use a lot of local area networks and the Internet.
  49. What is a baseband signal? What is a broadband signal? What is an analog signal? What is a digital signal?
    Baseband signal: The digital signal 1 or 0 is directly represented by different voltages, and then sent to the circuit for transmission.
    Broadband signal: an analog signal produced by modulating the baseband signal, used for frequency-division multiplexing. Modulation shifts the baseband signal's spectrum to a higher frequency band; since each baseband signal is moved to a different band, the signals do not interfere with one another when combined, so several digital signals can be carried on one cable, improving line utilization.
    Analog signal: the signal parameter representing the message varies continuously.
    Digital signal: the signal parameter takes only a finite number of discrete values.
  50. The source and sink of asynchronous communication have no clock synchronization signal, how to solve this problem?
    Use Manchester or differential Manchester encoding. Manchester encoding places a transition in the middle of every bit period, so the signal carries its own clock along with the data; differential Manchester always has a mid-bit transition for clocking and encodes each bit by the presence or absence of a transition at the start of the bit period. In both cases the receiver can recover the clock from the signal itself.
  51. List the protocols of the data link layer, at least two
    HDLC: a bit-oriented data link layer protocol for transmitting data over synchronous links. It does not depend on any character code set and can transmit data transparently; the zero-bit stuffing used for transparent transmission is easy to implement in hardware. It supports full-duplex communication and has high link transmission efficiency.
    PPP: a byte-oriented protocol for serial-line communication, used on the link between two directly connected nodes, mainly to establish point-to-point connections for sending data over dial-up or leased lines; Internet users connect to their ISP with PPP. PPP provides frame delimitation so the receiver can find the start and end of each frame, supports multiple network-layer protocols over the same physical link, and performs error detection. PPP encapsulates an IP datagram in a frame and supports both asynchronous byte streams and synchronous bit-oriented streams. It also includes the Link Control Protocol (LCP) for establishing, configuring, and testing the data link connection, and a family of Network Control Protocols (NCPs), each supporting a different network-layer protocol.
  52. Reliable transmission at the data link layer relies only on timeout retransmission. In a TCP connection at the transport layer, every byte in the data stream is numbered with a sequence number, and the acknowledgment number is the sequence number of the first byte of data expected in the next segment from the other side. Besides timeout retransmission, TCP uses duplicate acknowledgments (ACKs): when the sender receives three duplicate acknowledgments for the same segment it retransmits immediately without waiting for the timer to expire, which shortens the retransmission delay and is called fast retransmit.
  53. A brief introduction to the IEEE 802.3 protocol.
    IEEE 802.3 is a bus-based local area network standard that specifies the physical layer and the MAC sublayer of the data link layer. It uses CSMA/CD to control access to the bus and uses Ethernet MAC frames. Every network adapter has a physical (MAC) address 6 bytes long, in which the high 24 bits are the manufacturer (OUI) code and the low 24 bits are assigned by the manufacturer itself.
  54. A brief description of the P2P protocol.
    The idea of P2P is that the content transmitted in the network is no longer stored on a central server; every node can upload and download at the same time, acting as a client when it accesses other nodes' resources and as a server when it provides resources to others. Popular P2P applications include PPLive.
    Advantages: Reduce the pressure on the server, eliminate the complete dependence on a certain server, and assign tasks to each node. The scalability is good. Traditional servers have response and bandwidth limitations, and can only accept a certain number of requests. The network is robust, and the failure of a single node will not affect other nodes.
    Disadvantages: It takes up too much memory resources of users, affects the speed of the whole machine, and also makes the network very congested.
  55. What is 386 protected mode?
    Protected mode is the working mode in which the 80386 processor can give full play to its high performance. In protected mode the 80386 uses a completely new segmentation-plus-paging memory management scheme, which allows direct addressing of a 4 GB memory space as well as the use of virtual memory, and it supports multitasking (the 8086 supported only a single task). Protected mode also introduces privilege levels: every memory segment holding programs or data is assigned a level from 0 (highest) to 3 (lowest), which in effect governs a task's right to use processor resources. Level 0 can use all processor resources and is where the operating system kernel runs; level 1 is assigned to peripheral drivers and system services; level 2 to protected subsystems such as a database system; ordinary user programs run at level 3, also called user level. This 0-3 privilege protection isolates programs from one another and users from the operating system, providing optimized support for multitasking operating systems.
  56. The composition and addressing modes of the 8086 microprocessor; what segment registers are there and what is the role of each?
    Bus interface unit (BIU): instruction queue buffer, address adder, 4 segment registers, a 16-bit instruction pointer register, and input/output control circuits. The BIU is the interface between the 8086 and external devices and handles all bus operations, including providing the 20-bit address bus, the 16-bit data bus, and all control signals, and forming the 20-bit physical address. It fetches instructions from the specified memory addresses and places them in the instruction queue; when operands are needed, the BIU also fetches them from the specified memory addresses and passes them to the EU for computation.
    Execution unit (EU): 4 general-purpose registers, 4 special-purpose registers, the flag register, the arithmetic logic unit, and the EU control circuit. The EU decodes and executes instructions; all arithmetic specified by instructions is carried out by the EU. Instruction fetch and decode/execute are handled by different units: while the EU executes one instruction, the BIU can fetch the next and place it in the instruction queue, so after one instruction finishes the next can begin immediately, reducing the CPU's waiting time for instruction fetch and raising both CPU utilization and the overall speed of the processor.
    Addressing modes: immediate, direct, register, register indirect, register relative, based-indexed, and relative based-indexed addressing.
    CS, the code segment register, holds the base address of the segment containing the current program. DS, the data segment register, holds the base address of the data segment used by the current program. SS, the stack segment register, holds the base address of the stack segment used by the current program. ES, the extra segment register, holds the base address of the segment containing additional data.
  57. What specific work does call and return do
    call: pushes the parameters and the return address onto the stack, and saves the context
    return: pops the return address and restores the context
  58. What is a chipset? What is the use?
    The chipset is the core of the motherboard circuitry and, in a sense, determines the level and grade of the motherboard. It is the collective name for the "north bridge" and "south bridge": a set of chips into which the formerly complex circuits and components have been integrated to the greatest possible extent. The chipset almost completely determines the functions of the motherboard, which in turn affects the performance of the whole computer system; it is often called the soul of the motherboard. Because there are many CPU models and types with different functions and features, a chipset that cannot cooperate well with the CPU will seriously affect overall performance or even prevent the machine from working normally.
    North bridge chip: the leading and most important component of the motherboard chipset, also called the main bridge. It is the chip closest to the CPU on the motherboard, mainly because it communicates with the processor most intensively and a shorter transmission distance improves communication performance. Because the north bridge handles a large amount of data, it generates a great deal of heat, so current north bridge chips are covered with heat sinks or fans. It provides support for the CPU type and clock frequency, the system cache, the system bus frequency, memory management (memory type, capacity and performance), the graphics card slot specification, ISA/PCI/AGP slots, ECC error correction, and so on. South bridge chip: generally located far from the CPU socket, near the PCI slots; this layout takes into account that it connects to more I/O buses, and being a little farther from the processor makes wiring easier. Compared with the north bridge, its data-processing load is smaller, so it usually carries no heat sink. It provides I/O support: the KBC (keyboard controller), RTC (real-time clock), USB (universal serial bus), Ultra DMA/33(66) EIDE data transfer modes, ACPI (Advanced Configuration and Power Interface) and so on, and it determines the type and number of expansion slots and expansion interfaces
    (such as USB2.0/1.
  59. From an abstract point of view, what is class inheritance and generalization, combination, and aggregation?
    Inheritance means that one class (the subclass or sub-interface) inherits the functionality of another class (the parent class or parent interface) and can add new functionality of its own. Generalization can express the inheritance relationship between classes, the inheritance relationship between interfaces, or the implementation relationship between a class and an interface.
    In a diagram the generalization relationship points from the subclass to the superclass, the opposite direction to the one in which methods are inherited or implemented. Aggregation, a kind of association, is the relationship between a whole and its individual parts, and is also implemented through instance variables. For example, car, engine and tire: a car object consists of an engine object and four tire objects.
    Composition is also a kind of association, but a stronger one than aggregation. Parts in a composition cannot be shared. For example, a person has four limbs, a head, and so on. Composition expresses the relationship between a whole class and its part classes, and in a composition the part and the whole have a unified lifetime: once the whole object ceases to exist, the part objects cease to exist as well; the parts and the whole are in a symbiotic relationship.
    Realization is the relationship between a class and an interface: the class implements the behaviour declared by the interface.
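    A minimal C++ sketch of how these relationships commonly show up in code (the class names Drivable, Car, Engine, Tire and Person are illustrative, echoing the examples above, and are not from the original answer):
    #include <vector>

    // Realization: a class implements an abstract interface.
    class Drivable {
    public:
        virtual void drive() = 0;
        virtual ~Drivable() = default;
    };

    class Engine {};
    class Tire {};

    // Inheritance / generalization: Car is a specialised Drivable.
    // Aggregation: Car only refers to an Engine and Tires created and owned elsewhere,
    // so the parts can outlive the whole.
    class Car : public Drivable {
    public:
        Car(Engine* e, std::vector<Tire*> t) : engine(e), tires(t) {}
        void drive() override {}
    private:
        Engine* engine;
        std::vector<Tire*> tires;
    };

    // Composition: the head and limbs are value members, so they share the person's
    // lifetime; when the Person object is destroyed, its parts are destroyed too.
    class Person {
        struct Head {} head;
        struct Limb {} limbs[4];
    };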
  60. The MVC model
    MVC is a framework pattern that enforces the separation of an application's input, processing and output. An application using MVC is divided into three core components: the Model, the View and the Controller, each handling its own tasks. The view is the interface that the user sees and interacts with; the model represents enterprise data and business rules; the controller accepts the user's input and calls the model and the view to fulfil the user's request.
  61. What is the difference between computer organization (principles) and computer architecture?
    Computer architecture refers to those attributes of a computer system that are visible to programmers: the attributes seen by machine-language and assembly-language programmers and by the designers of compilers and assemblers (all of whom work with the traditional machine M1 in the computer system hierarchy). These attributes include the instruction set, memory addressing techniques, the I/O mechanism, and so on. Computer organization refers to how the attributes embodied in the computer architecture are implemented; in short, it is the realization of the properties contained in the computer architecture.
    The relationship between the two: machines with the same architecture can have different organizations, i.e. architecture and organization form a one-to-many relationship.
  62. Computer system hierarchy
    1) Microprogram machine level: the hardware level, where the machine hardware directly executes microinstructions
    2) Traditional machine level: the machine level, where the machine instruction set is interpreted by microprograms
    3) Operating system level
    4) Assembly language level
    5) High-level language level
    Microprogram machines and traditional machines are physical machines; the other levels are virtual machines
  63. DRAM and Refresh Strategy
    1. Concentrated refresh: all rows are refreshed in a fixed period of time within each refresh cycle
    2. Decentralized refresh: the refresh of the rows is dispersed, one row being refreshed in each working (access) cycle
    3. Asynchronous refresh: the maximum refresh interval (2 ms) divided by the number of rows gives the interval at which a refresh request is generated, one row being refreshed each time.
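    As a commonly cited worked example (assuming a chip with 128 rows and the 2 ms maximum refresh interval mentioned above): with asynchronous refresh, a refresh request is issued every 2 ms / 128 ≈ 15.6 µs, and one row is refreshed each time.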
  64. Function and classification of the controller
    Functions: 1) Fetch an instruction from main memory and indicate the location of the next instruction in main memory
    2) Decode or test the instruction and generate the corresponding operation control signals to start the specified action
    3) Command and control the direction of data flow between the CPU, main memory and the input/output devices
    Classification: hardwired controller, microprogram controller
  65. CPU control mode
    1) Synchronous control mode: the system has a unified clock, and all control signals come from this unified clock signal. The control circuit is simple, but the operation speed is slow.
    2) Asynchronous control mode: There is no reference time scale signal, each component works at its own inherent speed, and communicates through the response mode. The running speed is fast, but the control circuit is more complicated.
    3) Joint control method: Synchronous and asynchronous are combined, this method implements most of the micro-operations of various instructions synchronously and a small part of asynchronously.
  66. Microinstruction encoding method:
    1) Direct encoding method: simple, intuitive, fast execution speed, and good operation parallelism; the disadvantage is that the word length of the microinstruction is too long, resulting in a large control memory capacity.
    2) Field direct encoding method: the length of the microinstruction is shortened, but the execution speed is slightly slower.
    3) Field indirect encoding method: further shortens the microinstruction word length, but weakens the parallel control capability.
  67. The format of microinstructions: 1) Horizontal microinstructions: the advantage is that the microprogram is short and the execution speed is fast; the disadvantage is that the microinstruction is long and it is troublesome to write the microprogram.
    2) Vertical microinstructions: the advantage is that the microinstructions are short, simple, and regular, and it is easy to write microprograms; the disadvantage is that the microprograms are long and the execution speed is slow.
    3) Hybrid Microinstructions
  68. Bus basic knowledge
    Bus features: time-sharing (only one component may transmit on the bus at any moment) and sharing (multiple components are attached to the bus)
    Bus characteristics: mechanical characteristics, electrical characteristics, functional characteristics, timing characteristics
    Bus classification:
    1) On-chip bus
    2) System bus: data bus, address bus, control bus
    3) Communication bus
    Bus arbitration:
    1) Centralized arbitration: daisy-chain query, counter (timer) query, independent request
    2) Distributed arbitration
    Bus cycle: request and arbitration phase, addressing phase, transfer phase, end phase
  69. Functions of the I/O interface:
    1) Realize the communication control between the host computer and the peripheral equipment.
    2) Address decoding and device selection
    3) Data buffering
    4) Signal format conversion
    5) Transmission of control commands and status information
  70. protected mode and real mode?
    Protected mode:
    32-bit segment addresses and offsets are used for addressing; the maximum address space is 4GB and the maximum segment size is 4GB (64GB on the Pentium Pro and later). In protected mode the CPU can enter virtual 8086 mode, which is a real-mode program running environment inside protected mode. Program execution in protected mode: whether in real mode or protected mode, the fundamental question is how a program runs.
    Therefore, we should always keep this question in mind when learning about protected mode. As in real mode, the essence of running a program in protected mode is still "the CPU executes instructions and manipulates the related data", so the various code segments, data segments, stack segments and interrupt service routines of real mode still exist, and their functions and effects remain the same. So what is the biggest change in protected mode? The answer may vary from person to person; my answer is that the "address translation method" has changed the most.
    Real mode:
    Real mode means that addressing uses 16-bit segment addresses and offsets, just as on the 8086; the maximum address space is 1MB and the maximum segment size is 64KB. 32-bit instructions can still be used, and a 32-bit x86 CPU is effectively used as a fast 8086. Program execution in real mode: what is the essence of running a program? It is simply the execution of instructions. The CPU is the hardware that executes instructions, so how does the CPU know where the instructions are? The 80x86 family uses the CS register together with the IP register to tell the CPU the location of the instruction in memory. Program instructions generally need data during execution, and the 80x86 family has DS, ES, FS, GS, SS and so on to indicate the locations of data segments for different purposes. A program may also need to call system service subroutines, and the 80x86 family uses the interrupt mechanism to provide system services. In general these are the main things required for a program to run in real mode (others such as jumps, returns and port operations are relatively minor).
    The difference between the two:
    The fundamental difference between protected mode and real mode is whether process memory is protected; the difference in addressable space is only a consequence of this. Real mode treats the whole physical memory as a segmented region: program code and data live in different areas, system programs and user programs are not treated differently, and every pointer points to a "real" physical address. If a pointer in a user program points into the area of a system program or another user program and changes a value there, the consequences for the modified program are likely to be disastrous. To overcome this poor way of managing memory, processor manufacturers developed protected mode. In protected mode a program cannot directly access physical memory addresses: the addresses inside the program (virtual addresses) must be translated into physical addresses by the operating system before access, and the program knows nothing about this. At this point the process (we can now call the program a process) has strict boundaries: no other process can access physical memory that does not belong to it, and even within its own virtual address range it cannot access everything arbitrarily, because some virtual regions have been given to shared system run-time libraries and cannot be modified at will. The CPU boots into a 16-bit real-mode environment and can then be switched into protected mode (on early processors such as the 80286, switching back from protected mode to real mode required a reset).
  71. Based on the two characteristics of program execution, mutual exclusion and locality, we can load only part of a job when it starts, leave the rest on disk, and load it into main memory when needed. In this way a job larger than main memory can run in a small main memory space, and when programming, users are freed from the restriction that a job must be smaller than the main memory capacity; in other words, the user's logical address space can be larger than the physical address space of main memory. To the user the computer system appears to have a very large main memory, called "virtual memory". The capacity of virtual memory is not unlimited: its maximum capacity is determined by the address structure of the computer. Virtual memory expands memory only logically, and its logical capacity is bounded by the sum of the memory capacity and the external storage capacity, while the address bus determines the range of main memory that can be addressed. So even if the disk is large, if the address register can only address a small range, the virtual memory capacity is still small.
  72. What file distribution methods are available?
    1. Continuous allocation
    2. Link allocation
    3. Index allocation

the second part

  1. What are the methods for file storage space management?
    1. Free table method
    2. Grouped linking method
    3. Bitmap method
    4. Free linked-list method
  2. What are the methods of contiguous and non-contiguous allocation of memory?
    1. Contiguous Allocation
    a) Single Contiguous Allocation
    b) Fixed Partition Allocation - Internal Fragmentation
    c) Dynamic Partition Allocation - External Fragmentation - first fit, best fit, worst fit, next fit
    2. Non-Contiguous Allocation
    a) Paged storage management - pages, address structure, page table, address translation mechanism and translation process, fast table (TLB)
    b) Segmented storage management - segment table, address translation mechanism, segment sharing and protection
    c) Segmented-paging storage management - segment table, page table
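    A minimal sketch of the paged address translation mentioned above (the 4 KB page size, the page-table contents and the logical address are made-up values): the logical address is split into a page number and an offset, the page number is used to look up a frame number in the page table, and the frame number and offset are combined into the physical address.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main() {
        const uint32_t PAGE_SIZE = 4096;                  // assumed 4 KB pages
        std::vector<uint32_t> page_table = {7, 3, 12, 5}; // page number -> frame number

        uint32_t logical  = 2 * PAGE_SIZE + 100;          // page 2, offset 100
        uint32_t page     = logical / PAGE_SIZE;
        uint32_t offset   = logical % PAGE_SIZE;
        uint32_t physical = page_table[page] * PAGE_SIZE + offset;

        std::cout << "page " << page << ", offset " << offset
                  << " -> physical address " << physical << '\n';  // frame 12
        return 0;
    }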
  3. What are the methods to find the interrupt source?
    There are two ways to find the interrupt source: query interrupt and vector interrupt.
    Query interrupt uses software polling: after the interrupt is acknowledged, an interrupt query program is started that asks in turn whether each device's interrupt-request flip-flop is 1; once a requesting device is found, control transfers to the interrupt service routine entry address preset for that device. This method is simple, but it takes a lot of time, and devices queried later get fewer service opportunities. The 8086 system generally uses the vectored interrupt method.
    With vectored interrupts, the entry addresses (vector addresses) of the interrupt service routines of the devices are gathered and placed in order in an interrupt vector table. When the CPU acknowledges an interrupt, the control logic looks up the interrupt vector table according to the interrupt type number supplied by the peripheral, loads the entry address of the interrupt service routine into the segment register and the instruction pointer register, and the CPU transfers to the interrupt service subroutine. This greatly speeds up interrupt handling. The vectored interrupt is named after the way the CPU obtains the entry address of the interrupt handling subroutine when responding to an interrupt; it is a method of locating the interrupt source, and it provides a vector pointing to the starting address of the interrupt handling subroutine.
    Interrupt vector: It is the starting address of the interrupt processing subroutine.
    Interrupt vector table: All vectors are placed in a certain area of ​​memory to form an interrupt vector table
  4. Briefly describe the process of open to open a file.
    First, the operating system searches the system open-file table by the file name a. Case 1: if file a has already been opened, an entry is allocated for file a in the process's open-file table and its pointer is set to the entry corresponding to file a in the system open-file table; a file descriptor fd is then assigned to the file in the PCB as an index to the process open-file table entry, and the file is open.
    Case 2: if file a has not been opened, check whether the directory entry containing file a's information is in memory; if not, load the directory table into memory as a cache. Find the location of the FCB on disk from file a's entry in the directory table, and load the FCB of file a into the active inode table in memory. Then add a new entry for file a in the system open-file table and point it to file a's FCB in the active inode table; allocate a new entry in the process's open-file table and point it to the entry for file a in the system open-file table; finally, assign a file descriptor fd to file a in the PCB as an index to the process open-file table entry, and the file is open.
  5. Commonly used storage protection methods
    1. Boundary register
    2. Upper and lower bound register method
    3. Base address, length limit register method
    4. Storage protection key: Assign a separate storage key to each storage block, which is equivalent to a lock.
  6. Switching technology, overlay technology, and the difference between the two.
    Overlay technology: divide a large program into a series of overlays, each a relatively independent program unit. Overlays that never need to be in memory at the same time are grouped into an overlay segment, and each overlay segment is allocated to the same storage region, called an overlay area; the size of an overlay area is determined by the largest overlay in its segment. (Overlaying was introduced to cope with too little memory; it breaks the restriction that all of a program's information must be loaded into memory before it can run.) Swapping technology: move a temporarily unused program or data portion from memory to external storage to free the needed memory space, or read a specified program or data from external storage into memory and give it control so that it can run on the system. Both are memory-expansion techniques; the medium-term scheduling of the processor uses swapping.
    Differences:
    1. Compared with overlay technology, swapping does not require the programmer to specify an overlay structure between program segments;
    2. Swapping is mainly performed between processes or jobs, while overlaying is mainly used within the same process or job;
    3. Overlaying can only overlay program segments that are unrelated to the overlaying segment, while swapping consists of two operations, swap-out and swap-in.
  7. How is memory managed under Windows?
    Windows provides three methods of memory management: virtual memory, best for managing large objects or arrays of structures; memory-mapped files, best for managing large streams of data (usually from files) and for sharing data between multiple running processes; and the memory heap, best suited to managing large numbers of small objects. Windows manipulates memory at two levels: physical memory and virtual memory. Physical memory is managed by the system, and applications are not allowed to access it directly; an application sees only an address space (about 2GB) and allocates memory through the heap. Each process has its own default heap. When a heap is created, an address block of the corresponding size is reserved through virtual-memory operations (this does not occupy actual memory and costs the system very little). When a block is allocated on the heap, the system finds a free block in the heap's address table (if none is found and the heap was created as growable, the heap is expanded) and commits physical storage (in physical memory or in the swap file on disk) for all the pages covered by that free block; only then can that part of the address space be accessed. When committing, the system allocates memory for all processes as a whole; if physical memory is insufficient, the system tries to move pages that some processes are not currently accessing into the swap file to free part of physical memory. When memory is freed, only the pages it occupies are decommitted in the heap (the corresponding physical storage is released) while the address space remains reserved. To know whether a certain address is occupied or accessible, it is enough to query the virtual-memory state of that address: if it is committed it can be accessed; if it is only reserved, or not reserved at all, access raises a software exception. In addition, memory pages can have various attributes set; if a page is read-only, writing to it will also raise a software exception.
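    A small Win32 sketch of the two levels described above (error handling omitted; the 1 MB region size and the 64-byte allocation are arbitrary): address space is first reserved, a page is committed only when it is actually needed, and small objects come from the process's default heap.
    #include <windows.h>

    int main() {
        // Reserve 1 MB of address space; no physical storage is consumed yet.
        void* region = VirtualAlloc(nullptr, 1 << 20, MEM_RESERVE, PAGE_READWRITE);

        // Commit the first page; only now is it backed by RAM or the paging file.
        VirtualAlloc(region, 4096, MEM_COMMIT, PAGE_READWRITE);

        // Small objects are better served by a heap (here the process default heap).
        void* obj = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, 64);

        HeapFree(GetProcessHeap(), 0, obj);
        VirtualFree(region, 0, MEM_RELEASE);   // release the whole reserved region
        return 0;
    }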
  8. Tell me about the methods you know to keep processes synchronized?
    The main methods of inter-process synchronization include atomic operations, semaphore mechanisms, spin locks, monitors, rendezvous, distributed systems, etc.
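    A minimal C++ sketch of the semaphore/monitor idea using a mutex and condition variables (threads stand in for processes here for brevity; the buffer size and loop counts are arbitrary): the producer waits while the buffer is full and the consumer waits while it is empty.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::queue<int> buffer;
    const std::size_t CAPACITY = 4;
    std::mutex m;                                  // mutual exclusion on the buffer
    std::condition_variable not_full, not_empty;

    void producer() {
        for (int i = 0; i < 10; ++i) {
            std::unique_lock<std::mutex> lk(m);
            not_full.wait(lk, [] { return buffer.size() < CAPACITY; });
            buffer.push(i);
            not_empty.notify_one();
        }
    }

    void consumer() {
        for (int i = 0; i < 10; ++i) {
            std::unique_lock<std::mutex> lk(m);
            not_empty.wait(lk, [] { return !buffer.empty(); });
            std::cout << buffer.front() << '\n';
            buffer.pop();
            not_full.notify_one();
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
        return 0;
    }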
  9. Functions of computer network
    1) data communication
    2) resource sharing
    3) distributed processing
    4) improving reliability
    5) load balancing
  10. How does frequency division multiplexing avoid interference between signals?
    Answer: In order to prevent interference between sub-channels, a "guard band" needs to be added between adjacent channels. That is, the unoccupied narrow frequency band set aside at the upper and lower limits of a given channel. Its purpose is to ensure sufficient isolation between channels to prevent adjacent channel interference.
  11. Random Access Media Access Control
    All users can send information randomly, and the entire bandwidth is occupied when sending information. The winner obtains the channel through contention, thereby obtaining the right to send information.
    1) Pure ALOHA protocol
    Does not listen to the channel, does not send in time slots, and retransmits at random.
    Conflict detection: the receiver detects an error and does not acknowledge; if the sender receives no acknowledgement within a certain period, it judges that a collision has occurred.
    Conflict resolution: after the timeout, wait a random time before retransmitting the data.
    2) Slotted ALOHA protocol
    Time is divided into equal time slices (slots), and all users synchronously access the channel only at the beginning of a slot. If a collision occurs, the station must wait until the start of the next slot to resend.
    3) CSMA protocol (Carrier Sense Multiple Access)
    1-persistent CSMA:
    Idea: before sending a frame, listen to the channel first. If the channel is idle, transmit directly without waiting; if the channel is busy, keep listening until it becomes idle and then transmit immediately. If a collision occurs, wait a random time and then listen again, repeating the above process.
    Advantages: as long as the medium is idle, the station sends immediately, avoiding loss of medium utilisation.
    Disadvantage: If two or more stations have data to send, conflicts are inevitable.
    Non-persistent CSMA:
    Idea: before sending a frame, listen to the channel first. If the channel is idle, transmit directly without waiting; if the channel is busy, wait a random time before listening again.
    4) Medium access control method: CSMA/CD, token bus and token ring
    5) Local area network classification: Ethernet, token ring network, FDDI network, ATM network, wireless LAN
    6) MAC sublayer: the medium access control sublayer. Everything related to accessing the transmission medium is placed in the MAC sublayer. It hides the differences between physical-layer access methods from the layers above and provides a unified interface for accessing the physical layer. Its main functions include frame assembly and disassembly, detection of bit transmission errors, and transparent transmission.
    7) LLC sublayer: the logical link control sublayer, which is independent of the transmission medium. It is responsible for identifying network-layer protocols and encapsulating them, providing services to the network layer. It offers the network layer four different types of connection service: unacknowledged connectionless, connection-oriented, acknowledged connectionless, and high-speed transfer.
  12. PPP Protocol
    A byte-oriented point-to-point protocol
    1) PPP provides error detection but not error correction, and only guarantees error-free reception. It is an unreliable transport protocol and therefore does not use sequence numbers and acknowledgments.
    2) It only supports point-to-point link communication and does not support multi-point links.
    3) PPP only supports full-duplex links
    4) The two ends of PPP can run different network layer protocols, but still use the same PPP protocol for communication
    5) PPP is byte-oriented. When a byte pattern identical to the flag field appears in the information field, PPP uses two different transparency mechanisms: byte stuffing on asynchronous lines and zero-bit stuffing on synchronous lines.
    Composition: Link Control Protocol LCP, Network Control Protocol NCP, a method for encapsulating IP datagrams into serial links.
  13. HDLC protocol
    The High-level Data Link Control protocol is a bit-oriented data link layer protocol developed by ISO. The protocol does not depend on any character encoding set; data can be transmitted transparently, and the "0-bit insertion method" used to achieve transparent transmission is easy to implement in hardware. It uses full-duplex communication and has high data-link transmission efficiency; all frames are protected by a CRC check, and information frames are numbered sequentially, which prevents frames from being missed or received twice, so transmission reliability is high; the transmission control function is separated from the processing function, giving greater flexibility.
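    A small sketch of the "0-bit insertion" used for transparent transmission (the frame content is represented as a string of '0'/'1' characters for readability): after every five consecutive 1s in the data, a 0 is inserted so that the flag pattern 01111110 can never appear inside the frame.
    #include <iostream>
    #include <string>

    // Insert a '0' after every run of five consecutive '1' bits.
    std::string bitStuff(const std::string& bits) {
        std::string out;
        int ones = 0;
        for (char b : bits) {
            out += b;
            ones = (b == '1') ? ones + 1 : 0;
            if (ones == 5) {          // five 1s in a row: stuff a 0
                out += '0';
                ones = 0;
            }
        }
        return out;
    }

    int main() {
        std::cout << bitStuff("0111111101111110") << '\n';
        // prints 011111011011111010 (two zeros have been stuffed in)
        return 0;
    }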
  14. ICMP message classification: ICMP error report message, ICMP query message
    ICMP error report messages: destination unreachable, source quench, time exceeded, parameter problem, redirect (change route)
    ICMP query messages: echo request and reply, timestamp request and reply, address mask request and reply, router solicitation and advertisement
    Common applications: PING (Packet InterNet Groper, used to test connectivity between two hosts; it uses ICMP echo request and reply messages) and Traceroute (used to trace the route a packet takes; it uses ICMP time-exceeded messages).
  15. Six stages of database design
    Requirements analysis: analyze user, data, function and performance requirements
    Conceptual structure design: draw ER diagram
    Logical structure design: convert ER diagram into tables
    Physical database design: select storage structures and access paths for the database
    Database implementation: programming, testing, running
    Database operation and maintenance: routine maintenance
  16. List several table connection methods, what is the difference?
    Inner join, self join, outer join (left, right, full), cross join.
    Inner join: only rows that match in both tables appear in the result set.
    Left outer join: the left table is the driving table; all of its rows are displayed, while unmatched rows of the other table are not.
    Right outer join: the right table is the driving table; all of its rows are displayed, while unmatched rows of the other table are not.
    Full outer join: all rows of both joined tables are displayed, whether matched or not.
    Cross join: the Cartesian product; the number of result rows is the product of the row counts of the joined tables.
  17. What are the functions of the static keyword in C language?
    Answer: In C, static is mainly used to define global static variables (limiting their scope to the file), local static variables (changing their storage area so the value persists across calls), and static functions (limiting their scope to the file). (In C++ it can additionally be used to define static member variables and static member functions of a class.)
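    A small illustration of the three C uses just listed (the identifiers are made up):
    #include <stdio.h>

    static int file_counter = 0;       /* global static: visible only in this file */

    static void helper(void) {         /* static function: internal linkage */
        file_counter++;
    }

    int next_id(void) {
        static int id = 0;             /* local static: lives in the static data area */
        return ++id;                   /* and keeps its value between calls */
    }

    int main(void) {
        helper();
        printf("%d\n", next_id());     /* 1 */
        printf("%d\n", next_id());     /* 2 */
        printf("%d\n", file_counter);  /* 1 */
        return 0;
    }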
  18. What can't be done as a switch() parameter?
    Answer: Floating-point types (float, double) and strings cannot be used. Types that can be used as switch() parameters are byte, char, short, int, long, bool, other integer types and enumeration types.
  19. Which is more efficient without compiler optimization, pre-increment or post-increment?
    Answer: The pre-increment operator is more efficient. Because the pre-increment operator first increments itself, then returns itself; the post-operator first creates a copy of itself, then increments itself, and then returns the copy.
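    The difference is easiest to see in how the two operators are conventionally overloaded for a class type (a sketch; for a plain built-in int the compiler will normally optimise the copy away anyway):
    class Counter {
        int v = 0;
    public:
        Counter& operator++() {        // pre-increment: modify, then return *this
            ++v;
            return *this;
        }
        Counter operator++(int) {      // post-increment: copy, modify, return the copy
            Counter old = *this;
            ++v;
            return old;
        }
    };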
  20. The difference between alloc malloc calloc realloc?
    Answer: alloc (i.e. alloca) allocates space on the stack; it is freed automatically when the function returns.
    malloc allocates space from the heap; its parameter is the size of the space.
    calloc is similar to malloc; its parameters are the number of elements and the size of each element, and the memory is zero-initialised.
    realloc reallocates space for a pointer that has already been allocated.
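    A short usage sketch (the sizes are arbitrary):
    #include <stdlib.h>

    int main(void) {
        int* a = (int*)malloc(10 * sizeof(int));    /* 10 ints, uninitialised */
        int* b = (int*)calloc(10, sizeof(int));     /* 10 ints, zero-initialised */
        a = (int*)realloc(a, 20 * sizeof(int));     /* grow the previously allocated block */
        /* a stack allocation (alloca) would be released automatically on return */
        free(a);
        free(b);
        return 0;
    }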
  21. C language parameter push order?
    Answer: from right to left
  22. What is a callback function?
    Answer: A callback function is a function that is passed as a parameter (via a function pointer) to another routine; the routine then calls it back through the function pointer while it runs.
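    A minimal sketch (the function names are made up): repeat() receives a function pointer and calls back through it.
    #include <stdio.h>

    void repeat(int n, void (*cb)(int)) {   /* the subroutine taking a callback */
        for (int i = 0; i < n; ++i)
            cb(i);                          /* call back through the pointer */
    }

    void print_square(int x) {              /* the callback function */
        printf("%d\n", x * x);
    }

    int main(void) {
        repeat(3, print_square);            /* prints 0, 1, 4 */
        return 0;
    }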
  23. If const is on the left side of the asterisk, const is used to modify the variable pointed to by the pointer, that is, the object pointed to by the pointer is a constant; if const is on the right side of the asterisk, const is used to modify the pointer itself, that is, the pointer itself is a constant
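    For example:
    void demo() {
        int x = 1, y = 2;

        const int* p1 = &x;   // const on the left of *: the pointee is constant
        // *p1 = 3;           // error: x cannot be modified through p1
        p1 = &y;              // OK: the pointer itself may be changed

        int* const p2 = &x;   // const on the right of *: the pointer is constant
        *p2 = 3;              // OK: the pointee may be modified
        // p2 = &y;           // error: p2 cannot be repointed
    }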
  24. What is const_cast?
    Answer: const_cast performs a forced type conversion that removes the const qualifier from a variable or pointer type.
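    A minimal sketch:
    void demo() {
        const int a = 10;
        const int* cp = &a;
        int* p = const_cast<int*>(cp);   // strips const from the pointer type
        (void)p;
        // Actually writing through p would be undefined behaviour because a itself
        // is const; const_cast is mainly used to pass data to legacy non-const
        // interfaces that do not really modify it.
    }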
  25. How does the C language implement encapsulation?
    Answer: Simulate the encapsulation of classes with function pointers in c language
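    A minimal sketch of this simulation (the names are made up): a struct holds the data plus function pointers that act as its "member functions", and a creation function plays the role of the constructor.
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Counter {
        int value;                               /* "private" data, by convention   */
        void (*inc)(struct Counter* self);       /* "member functions" as pointers  */
        int  (*get)(const struct Counter* self);
    } Counter;

    static void counter_inc(Counter* self)       { self->value++; }
    static int  counter_get(const Counter* self) { return self->value; }

    Counter* counter_new(void) {                 /* plays the role of a constructor */
        Counter* c = (Counter*)malloc(sizeof(Counter));
        c->value = 0;
        c->inc = counter_inc;
        c->get = counter_get;
        return c;
    }

    int main(void) {
        Counter* c = counter_new();
        c->inc(c);
        printf("%d\n", c->get(c));               /* 1 */
        free(c);
        return 0;
    }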
  26. explicit keyword? =delete identifier? =default identifier?
    Answer: explicit: the constructor can only be called explicitly; implicit conversion is not supported. Note: A a(name) is allowed, but A a = 'name' is not.
    =delete: deletes a compiler-generated (default) function, e.g. the default or copy constructor. =default: explicitly requests the compiler-generated default implementation.
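    A small sketch of the three keywords (the class and member names are made up):
    #include <string>

    class A {
    public:
        explicit A(const std::string& name) : name_(name) {}  // no implicit conversion
        A() = default;                  // use the compiler-generated default constructor
        A(const A&) = delete;           // copying is forbidden: the copy ctor is deleted
    private:
        std::string name_;
    };

    int main() {
        A a(std::string("name"));       // OK: explicit construction
        // A b = std::string("name");   // error: implicit conversion is disabled
        // A c = a;                     // error: the copy constructor is deleted
        return 0;
    }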
  27. The difference between new delete and malloc free?
    Answer: new allocates space and calls the constructor; delete calls the destructor and then releases the space. malloc and free do not call constructors or destructors. When new and delete are applied to a built-in data type rather than an object, no constructor or destructor is called. new should be used together with delete (and new[] with delete[]), especially when allocating arrays of objects.
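    A short illustration (the struct is made up; the printed "ctor"/"dtor" lines show when construction and destruction happen):
    #include <cstdlib>
    #include <iostream>

    struct Obj {
        Obj()  { std::cout << "ctor\n"; }
        ~Obj() { std::cout << "dtor\n"; }
    };

    int main() {
        Obj* a = new Obj;                          // allocates and calls the constructor
        delete a;                                  // calls the destructor, then frees

        Obj* arr = new Obj[2];                     // object array: must pair with delete[]
        delete[] arr;

        Obj* b = (Obj*)std::malloc(sizeof(Obj));   // raw memory only: no ctor/dtor is run
        std::free(b);
        return 0;
    }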
  28. Index of the database
    Database indexes are divided into B+ tree indexes (or B-tree indexes) and hash indexes. Although a hash index is fast, its maintenance cost is high: to keep the collision rate low an appropriate fill factor must be maintained, which wastes extra space, and the hash table must also be expanded as it grows. Expansion is a costly operation, because the hash values of all key-value pairs already in the table must be recomputed and the pairs placed in their correct positions in the new space. A B+ tree index consists of internal nodes and leaf nodes; the size of each node equals the page size defined by the database (the database reads the disk in units of pages). Internal nodes are used only for indexing and contain n+1 pointers to the next level and n index keys; leaf nodes contain n index keys and n pointers to the records corresponding to those keys. The benefits of the B+ tree are fast lookup, cache friendliness, and support for traversal or range queries based on the index.
  29. The function pointer actually points to the entry address of the function.
  30. How to use C++ smart pointer?
    C++ smart pointers are based on the RAII technique: when a variable is allocated on the stack, its destructor is called when it leaves scope. So the rough implementation of a smart pointer class is to hold a raw pointer p and have the class destructor delete p. When an object of this class is defined as a local variable (allocated on the stack), its destructor is called when it leaves its scope, and the p pointer it holds is deleted. A real implementation also has to consider complex issues such as thread safety.
    C++11 currently provides three kinds of smart pointers: unique_ptr, shared_ptr and weak_ptr.
    unique_ptr exclusively owns the object it points to; when the smart pointer leaves its scope, the object is destroyed.
    shared_ptr allows multiple smart pointers to point to the same object and uses a reference-counting mechanism: when a shared_ptr is made to point to the object the reference count increases by one, and when all shared_ptrs pointing to the object have left their scopes the object is destructed.
    But shared_ptr cannot solve the problem of circular reference, consider the following code:
    #include <memory>
    using std::shared_ptr;

    class B;                         // forward declaration so A can mention B
    class A { public: shared_ptr<B> p; };
    class B { public: shared_ptr<A> p; };

    int main() {
        while (true) {
            shared_ptr<A> pa(new A());
            shared_ptr<B> pb(new B());
            pa->p = pb;
            pb->p = pa;
        }
        return 0;
    }
    Since the reference counts of pa and pb cannot drop to 0 when they leave scope, the A object and the B object can never be destroyed, so this while loop gradually leaks memory and eventually new fails to allocate more memory and the program crashes. weak_ptr solves this problem: when a weak_ptr points to an object, the reference count is not increased, so replacing either shared_ptr member above with a weak_ptr removes the cycle.
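    A sketch of the fix: making one side of the cycle a weak_ptr lets both reference counts drop to zero.
    #include <memory>

    class B;                                   // forward declaration

    class A {
    public:
        std::shared_ptr<B> p;
    };

    class B {
    public:
        std::weak_ptr<A> p;                    // weak: does not increase A's count
    };

    int main() {
        auto pa = std::make_shared<A>();
        auto pb = std::make_shared<B>();
        pa->p = pb;
        pb->p = pa;                            // stored as a weak reference
        // When pa and pb leave scope, both objects are destroyed: no leak.
        return 0;
    }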
  31. What types of data models are there in databases? Describe the characteristics of at least two of them.
    1. Network (mesh) model: a node may have more than one parent, and more than one node may have no parent.
    2. Relational model: simple concept, clear structure, easy to learn and use for users
    3. Object-oriented model
    4. Object-relational model
  32. How to prevent deadlock in the database
    Deadlock occurs when multiple processes access the same database and each process holds a lock that the other processes need, so that none of them can continue. Understanding the causes of deadlock, especially the four necessary conditions, makes it possible to avoid, prevent and resolve deadlocks as far as possible. The following measures help minimise deadlocks: (1) access objects in the same order; (2) avoid user interaction inside transactions; (3) keep transactions short and in one batch; (4) use a low isolation level; (5) use bound connections.
  33. FPGA: FPGA (Field-Programmable Gate Array) is a product of further development on the basis of programmable devices such as PAL, GAL and CPLD. It appeared as a semi-custom circuit in the field of application-specific integrated circuits (ASIC); it both remedies the deficiencies of fully custom circuits and overcomes the drawback that earlier programmable devices had only a limited number of gates.
  34. What is the difference between DMA and interrupt data transmission
    Although an interrupt responds to the device in real time, the data still has to be read and written by the CPU, which limits the transfer rate, and responding to an interrupt requires executing many instructions, which also lowers CPU efficiency. DMA is direct memory access: it bypasses the CPU by having the CPU temporarily give up control of the bus, occupies the bus alone to transfer data between the peripheral and memory, and when the transfer ends releases the bus and notifies the CPU to regain control. In this way the transfer rate is essentially limited only by the performance of the memory.
    PCs are generally equipped with DMA controllers to increase data throughput, reduce the burden on the CPU, and increase the operating speed of the operating system.
    The general process of DMA is as follows:
    1. Send a hold (HOLD) signal to the CPU;
    2. After the CPU returns the HLDA signal, take over and control the bus, entering DMA mode;
    3. Send out address information to address the memory and modify the address pointer;
    4. Send out read, write and other control signals;
    5. Determine the number of bytes transferred and judge whether the DMA transfer has finished;
    6. Send out the DMA end signal so that the CPU returns to its normal working state.
  35. The 80x86 addressing mode
    IA-32 CPUs have four main addressing modes: immediate addressing, register operand addressing (these first two are relatively simple, similar to a single-chip microcomputer), I/O port addressing (mentioned in question 10) and memory operand addressing. Memory operand addressing is explained in detail below. The 8086 uses segmented addressing and has 4 16-bit segment registers: CS, the code segment register; DS, the data segment register; SS, the stack segment register; and ES, the extra segment register (used as a second data segment). The 8086 divides the 1MB storage space into several segments, each identified by a segment register. The starting address of each segment (the segment base) is obtained as follows: segment base = (segment register contents) * 16 = (segment register contents) * 10H (i.e. shifted left by four bits). The 8086 forms a physical address as segment:offset, i.e. the offset is added to the segment base. The offset is held in IP (the instruction pointer), SP (the stack pointer), SI (the source index register) or DI (the destination index register), or an effective address EA is used, in which case the value of EA is the offset.
    Starting with the 80286 there are two memory management modes: real address mode and protected mode; later CPUs with larger physical address spaces kept both modes. Real address mode generates physical addresses in the same way as the 8086. In protected mode the CPU provides a virtual storage space of a certain size, generally much larger than the physical address space. During program execution, if the required program segment has not been loaded into memory, an interrupt is raised to the operating system and the needed program segment and data are brought from external storage into memory, so the user is not limited by the actual physical memory size, which is beneficial for developing large programs.
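    A tiny worked example of the "segment base * 16 + offset" rule described above (the register values are arbitrary):
    #include <cstdint>
    #include <cstdio>

    int main() {
        uint16_t segment = 0x1234;                       // contents of a segment register
        uint16_t offset  = 0x0010;                       // e.g. the value in IP or an EA
        uint32_t physical = (uint32_t(segment) << 4) + offset;  // *16 = shift left 4 bits
        std::printf("%05X\n", physical);                 // prints 12350
        return 0;
    }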
  36. Channel sharing technology:

Channel sharing technology is also called multiple access technology, and includes random access and controlled access.

In terms of levels, channel sharing is accomplished by the media access control MAC sublayer of the data link layer. Generally speaking, the channel sharing techniques used in computer networks can be divided into three types, namely random access, controlled access and channel multiplexing.

1. Random access: all users may send information onto the channel at will, according to their own needs. When two or more users send on the shared channel at the same time, a collision occurs and the transmissions fail. Random access technology is therefore mainly concerned with the network protocols for resolving such collisions. Random access is in fact contention access: the winner of the contention temporarily occupies the shared channel to send its information. Its characteristics are that stations can send data at any time and contend for the channel, so collisions occur easily, but the scheme adapts flexibly to changes in the number of stations and their traffic. Typical random access techniques include ALOHA, CSMA and CSMA/CD, which are introduced in detail in later chapters.

2. Controlled access, which is characterized by the fact that each user cannot access the channel at will but must obey certain controls. It can be divided into centralized control and decentralized control.

The main method of centralized control is polling, which is divided into roll-call polling and hub (pass-along) polling. The host asks each station in order whether it has data to send: the host first sends a poll to one substation, and when that station has finished sending, or has no data to send, the poll is passed on to the adjacent station. After all stations have been handled in turn, control returns to the host.

The main method of decentralized control is the token technique, whose most typical application is the token ring network. The principle is that every host on the network has equal status and no single host is responsible for channel allocation. A special frame, called a token, circulates continuously around the ring, and only the host that currently holds the token has the right to send data.

3. Channel multiplexing means that multiple users share the channel through multiplexers and demultiplexers. Channel multiplexing is mainly used to combine several low-speed signals into one combined high-speed signal for transmission on the channel. It requires additional equipment and centralized control, and its access method is to scan each port in turn or to use interrupt techniques.

What three types of tables can a relationship have?

Base tables: Logical representations of the actual stored data

Query table: the table corresponding to the query result

View table: a virtual table that does not correspond to the actual stored data

Explain the three types of integrity constraints for relationships

Entity integrity: Each tuple in a relational database can be uniquely distinguished, and such constraints are guaranteed by entity integrity

Referential integrity: a foreign key value must either be null or match the primary key value of an existing tuple in the referenced relation

User defined integrity: constraints that reflect the semantic requirements of a specific application, such as the range of values an attribute may take

What is a data dictionary?

The data dictionary is a set of system tables inside the relational database management system, which records all the definition information in the database:

Relational schema definition, view definition, index definition, integrity constraint definition, statistics.

When the relational database management system executes the SQL data definition statement, it actually updates the corresponding information in the data dictionary table

How to ensure the security of the database?

Common Methods of Database Security Control

User Identification and Authentication

access control

view

audit

data encryption

What is an audit?

Enable a dedicated audit log (Audit Log)

All user operations on the database are recorded on it

Auditors utilize audit logs

Monitor various activities in the database to find out who, when and what is illegally accessing data

DBMS with a security level above C2 must have an audit function

What audit events are there?

server events

Audit database server events (server start, stop, load, etc.)

System privilege events

Auditing of operations on structure or schema objects owned by the system

The permission required for this operation was obtained through system permissions

statement event

Auditing of SQL statements, such as DDL, DML, DQL and DCL statements

Schema Object Events

Auditing of SELECT or DML operations performed on specific schema objects (UPDATE, DELETE, etc.)

What is database integrity?

Database integrity means that the data is correct, valid and compatible (consistent with real-world semantics)

What are the integrity constraint naming clauses?

<Integrity constraints> include NOT NULL, UNIQUE, PRIMARY KEY phrase, FOREIGN KEY phrase, CHECK phrase, etc.

What does the cursor do?

Cursor is a data buffer created by the system for users to store the execution results of SQL statements

Each cursor area has a name

Users can use SQL statements to fetch records from the cursor one at a time, assign them to host variables, and hand them to the host language for further processing

Briefly describe the system call process in the operating system

The system call provides the interface between the user program and the operating system, and the application program communicates with the rest of the OS through the system call and obtains its services. System calls are available not only to all applications, but also to other parts of the OS itself, such as command handlers.

System call processing steps (three steps):

First, the processor state is changed from user mode to system mode; then the hardware and the kernel program perform the general handling of the system call: the CPU context of the interrupted process is protected first, i.e. the processor status word PSW, the program counter PC, the system call number, the user stack pointer and the general-purpose register contents are pushed onto the stack; then the user-defined parameters are transferred to the specified addresses and saved.

Secondly, analyze the type of system call and transfer to the corresponding system call processing subroutine. (By searching the system call entry table, find the entry address of the corresponding processing subroutine and execute it.)

Finally, after the execution of the system call processing subroutine, the CPU site of the interrupted or new process should be restored, and then the interrupted process or the new process should be returned to continue to execute.

Basic approach to dealing with deadlocks:

1. Prevent deadlock: This is a relatively simple and intuitive method of prior prevention. The method is to prevent deadlock by setting certain restrictions to destroy one or several of the four necessary conditions for deadlock. Deadlock prevention is an easy-to-implement method that has been widely used. However, because the imposed restrictions are often too strict, it may lead to a decrease in system resource utilization and system throughput.

2. Avoid deadlock: this method is also a prior prevention strategy, but it does not impose restrictive measures in advance to destroy the four necessary conditions for deadlock; instead, during the dynamic allocation of resources, some method is used to prevent the system from entering an unsafe state, thereby avoiding deadlock.

3. Deadlock detection: This method does not need to take any restrictive measures in advance, nor does it need to check whether the system has entered an unsafe area. This method allows the system to deadlock during operation. However, the detection mechanism set up by the system can detect the occurrence of deadlock in time, and accurately determine the processes and resources related to the deadlock, and then take appropriate measures to remove the deadlock that has occurred from the system.

4. Deadlock removal: This is a measure that goes hand in hand with deadlock detection. When it is detected that a deadlock has occurred in the system, the process must be released from the deadlock state. A common implementation method is to revoke or suspend some processes in order to reclaim some resources, and then allocate these resources to processes that are already in a blocked state, so that they can be turned into a ready state to continue running.

18. What is a sliding window protocol

The sliding window protocol is a flow control method used by TCP. The protocol allows the sender to send multiple packets consecutively before stopping and waiting for an acknowledgment. Since the sender does not have to stop and wait for confirmation every time a packet is sent, the protocol can speed up the transmission of data.

Tell me about the router

Traditionally, a router works at the third layer of the OSI seven-layer model, and its main task is to receive a packet from one network interface and decide, according to the destination address it contains, which next-hop address to forward it to. The router therefore first strips the layer-2 header of the packet, takes out the destination IP address, and looks up the corresponding next-hop address in the forwarding (routing) table. If it is found, a new layer-2 header containing the next hop's MAC address is added in front of the packet; at the same time the TTL (Time To Live) field in the packet header is decremented by one and the checksum is recalculated. When the packet is sent to an output port, it may need to queue before it can be transmitted on the output link.



 
