Interview notes to memorize

1. C/C++ language part

1. In C, the keyword static has three main uses:

A. A variable declared static exists for the entire run of the program and is not destroyed when control leaves its scope. A local static variable can still be used only within its local scope and cannot be named outside it, but its storage persists and it keeps its value between calls.
B. Inside a module (but outside any function body), a variable declared static can be accessed by the functions in that module, but not by functions outside the module: it is a module-local global variable.
C. A function declared static can be called only by other functions in the same module; that is, the function is restricted to the module (translation unit) in which it is declared.

2. What does the keyword volatile mean? Give three different examples.

A variable defined as volatile may be changed in ways the compiler cannot see, so the compiler makes no assumptions about its value. To be precise, the optimizer must re-read the variable from memory every time it is used, instead of reusing a copy cached in a register.
Examples of using volatile variables:
A. Hardware registers of parallel devices (e.g., a status register)
B. Non-automatic variables that are accessed within an interrupt service routine
C. Variables shared by several tasks in a multi-threaded application

In-depth understanding:

(1) Can a parameter be both const and volatile? Explain why.
Yes. An example is a read-only status register: it is volatile because the hardware can change it unexpectedly, and const because the program should not attempt to modify it.
(2) Can a pointer be volatile? Explain why.
Yes, although it is not very common. An example is an interrupt service routine that modifies a pointer to a buffer.

3. What is the difference between reference and pointer?

A. A reference must be initialized; a pointer need not be.

B. Once initialized, a reference cannot be re-bound to another object; a pointer can be changed to point elsewhere.

C. There is no such thing as a reference to null, but null pointers exist.

4. What is the role of #ifndef/#define/#endif in a header file?
Answer: They form an include guard, preventing the header file from being included (and its contents compiled) more than once.
5. Program memory layout

A. Stack area (stack): allocated and released automatically by the compiler; holds function parameters, local variables, and so on. It operates like the stack data structure.
B. Heap area (heap): generally allocated and released by the programmer; if the programmer does not release it, the OS reclaims it when the program ends. Note that it is different from the heap data structure; its allocation bookkeeping resembles a linked list.
C. Global (static) area: global variables and static variables are stored together; initialized globals and statics occupy one region, while uninitialized globals and statics occupy an adjacent region (the BSS). Released by the system when the program ends.
D. Literal constant area: constant strings are placed here. Released by the system when the program ends.
E. Program code area: holds the binary code of the function bodies.

The following example, written by a predecessor, is very detailed:
//main.cpp
#include <stdlib.h>
#include <string.h>
int a = 0;                  // global, initialized area
char *p1;                   // global, uninitialized area (BSS)
int main()
{
    int b;                  // stack
    char s[] = "abc";       // stack
    char *p2;               // stack
    char *p3 = "123456";    // "123456\0" is in the constant area; p3 itself is on the stack
    static int c = 0;       // global (static) initialized area
    p1 = (char*)malloc(10);
    p2 = (char*)malloc(20); // the 10- and 20-byte blocks obtained here are in the heap area
    strcpy(p1, "123456");   // "123456\0" is in the constant area; the compiler may merge it
                            // with the "123456" that p3 points to
    return 0;
}

6. The difference between typedef, #define, and const

typedef is a C keyword for declaring a custom name for a data type; combined with the existing types it simplifies programming, and it is processed by the compiler, so it is type-aware. #define is a preprocessor directive: a macro definition performs plain text substitution before compilation. const defines a typed, read-only variable that the compiler can type-check and that obeys normal scoping rules.

7. Unit testing

  • Unit testing:  Unit testing checks the correctness of the basic components of software (the smallest units of software design), such as a function, a procedure, or a method of a class.

  • Integration testing:  Integration testing is based on unit testing, assembling all modules into subsystems or systems according to the outline design requirements, and verifying whether the functions after assembly and the interfaces between modules are correct. Integration testing is also called assembly testing, joint testing, subsystem testing, or component testing.

  • System testing:  System testing combines the software that has already passed integration testing with the other parts of the computer system (such as hardware and peripherals) and runs a series of strict, effective tests in the actual operating environment, in order to discover potential problems in the software and ensure normal operation of the system.

8. const constant

  • Variables declared with const are read-only
  • A const variable must be given an initial value when it is defined
  • A const member function cannot modify the object's member variables
  • A const object can call only const member functions
  • const can qualify pointers (pointer to const vs. const pointer)
  • const is better than #define because the compiler performs type and safety checks

9. The difference between sizeof and strlen

(1) sizeof is an operator, while strlen is a library function;
(2) The operand of sizeof can be a data type or a variable, while strlen accepts only a string terminated by '\0';
(3) The compiler evaluates sizeof at compile time, while strlen must be computed at run time; moreover, sizeof gives the size of the memory occupied by the type, while strlen gives the actual length of the string;
(4) An array does not decay when passed to sizeof, but it decays into a pointer when passed to strlen.

10. The difference between memcpy and strcpy

(1) The operands differ:
both operands of strcpy are strings;
the source operand of sprintf can be of various data types, while its destination is a string;
the two operands of memcpy are simply memory addresses, not limited to any data type.
(2) The execution efficiency differs:
memcpy is the fastest, strcpy next, and sprintf the slowest.
(3) The functions they implement differ:
strcpy mainly copies one string variable to another;
sprintf mainly formats other data types into a string;
memcpy mainly copies one memory block to another.

11. Three major characteristics of object-oriented
Three major characteristics of object-oriented: encapsulation, inheritance, polymorphism
12. The difference between malloc and new

①. malloc/free are standard library functions of C/C++, while new/delete are C++ operators.
②. On success, malloc returns void*, which must be converted to the required pointer type with an explicit cast.
③. When new fails to allocate memory, it throws a std::bad_alloc exception rather than returning NULL; when malloc fails, it returns NULL.
④. new does not require the size of the memory block to be spelled out (the compiler derives it from the type), while malloc requires the needed byte count explicitly.

13. What is the difference between the static keyword in C and the static keyword in C++?

In C, static is used for local static variables, file-scope (external) static variables, and static functions.
In C++, besides the uses above, static can also define member variables and member functions of a class, i.e., static members and static member functions.

The persistence and program-wide lifetime of static storage allow functions called at different times to communicate and pass information; likewise, C++ static members allow multiple object instances to share and exchange information.

14. What is the difference between declaration and definition of variables?

Allocating storage (an address) for a variable is its definition; introducing the name without allocating storage is a declaration. A variable may be declared in many places but defined in only one. Adding the extern qualifier makes it a declaration, indicating that the variable is defined in another file or later in the current file.

15. Briefly describe the principle of polymorphic implementation

When the compiler sees that a class has a virtual function, it generates a virtual function table (vtable) for that class; each entry in the vtable is a pointer to the corresponding virtual function. The compiler also implicitly inserts a pointer, vptr, into the class (the VC compiler places it at the first position of the object), pointing at the vtable. In the class's constructor, the compiler implicitly emits code that sets vptr to the class's own vtable, associating the object with it. At a virtual call through a base-class pointer, that pointer has by then become the this pointer of the concrete object, so through it the vtable of the actual class is found and the correct function is invoked. This is dynamic binding, the basic mechanism of polymorphism: virtual functions are the foundation of polymorphism.

16. Talk about the understanding of object-oriented

Object-oriented thinking can be understood as follows: for every problem, first determine what parts the problem consists of; each part is really an object. Then design these objects separately, and finally assemble the whole program. Traditional programming is mostly considered and designed around functions, while object-oriented programming starts from the objects' perspective, which makes programs more concise and clear.
Explanation: the "object-oriented programming technique" most often encountered in programming is only one component of object-oriented technology. Making full use of object-oriented technology is a comprehensive matter: it requires not only object-oriented analysis, design, and programming techniques, but also the necessary modeling and development tools.

17. Can a constructor become a virtual function?

  • A constructor cannot be a virtual function. Moreover, virtual functions should not be called inside a constructor: the derived object has not been constructed yet, so the base class's version of the function would actually be executed.
  • A destructor can be a virtual function, and in a complex class hierarchy this is often necessary. A destructor can even be a pure virtual function, but a pure virtual destructor must still have a definition, because the base destructor is implicitly invoked from the subclass's destructor.
  • Note: the dynamic-binding property of virtual functions is the key technique behind overriding. Dynamic binding consults the virtual function table of the actual class at the call site and invokes the corresponding virtual function.

18. Causes of wild pointers and how to avoid them

  • Case 1
    Cause: the pointer variable is not initialized when it is declared.
    Solution: initialize the pointer at declaration, either to a concrete address or to NULL.
  • Case 2
    Cause: the pointer p is not set to NULL after free or delete.
    Solution: after the memory a pointer refers to is released, set the pointer to NULL.
  • Case 3
    Cause: the pointer is used beyond the lifetime (scope) of the variable it points to.
    Solution: before the variable's scope ends, stop using its address and set the pointer to NULL.
  • Note: these remedies for "wild pointers" are also basic rules of programming practice. When using pointers, avoid wild pointers and check a pointer's validity before using it.

19. Commonly tested coding exercises

1. Sorting: quicksort, bubble sort, Shell sort
2. Singly and doubly linked lists
3. Computing the depth of a binary tree
4. Insert, delete, search, and update operations on a binary tree
5. Finding the k-th largest number in an array
6. Testing whether two strings are equal
7. Copying a string

20. What is the difference between a linked list and an array

> Differences between pointers and arrays
  • An array is created either in static storage (e.g., a global array) or on the stack; a pointer can point to any type of memory block at any time.
    The capacity of an array in bytes can be computed with the sizeof operator. For a pointer p, sizeof(p) yields the size of the pointer variable itself, not the capacity of the memory p points to. C/C++ has no way to know the size of the memory a pointer refers to unless it is recorded when the memory is allocated. Note that when an array is passed to a function as an argument, it automatically decays into a pointer of the same element type.

Arrays and linked lists differ as follows:
(1) Storage layout: an array is a contiguous space whose length must be fixed when it is declared; a linked list is a non-contiguous, dynamically sized structure, and each node must store pointers to its adjacent nodes;
(2) Lookup: indexed access into an array is fast because it is a direct offset computation; a linked list must be traversed node by node, which is less efficient;
(3) Insertion and deletion: a linked list can insert and delete nodes quickly, while an array may require moving a large amount of data;
(4) Arrays also have an out-of-bounds problem.

21. What is the general cause of stack overflow?
Answer: 1. Resources that are allocated but never reclaimed
2. Recursive calls nested too deeply

22. What parameter types can switch() not take?
Answer: The switch parameter cannot be a floating-point type (float, double, and the like); it must be an integral or enumeration type.

2. Network programming part

1. The difference between TCP and UDP

  • TCP: a connection-oriented, stream-based transport control protocol with high reliability. It guarantees the correctness of the transmitted data and has checksum and retransmission mechanisms, so data is neither lost nor delivered out of order.

  • UDP: a connectionless datagram service. It does not verify or retransmit datagrams and does not wait for the peer's response, so packets may be lost, duplicated, or reordered, but it offers better real-time behavior. UDP's segment structure is simpler than TCP's, so its network overhead is also smaller.
2. Flow control and congestion control

  • Congestion control
    Network congestion is the phenomenon in which so many packets arrive at some part of the communication subnet that that part of the network cannot process them in time, degrading the performance of that part or even the entire network; in severe cases, network communication can come to a standstill, i.e., deadlock occurs. Congestion control is the mechanism for dealing with network congestion.

  • Flow control
    During data transmission, the receiver may not be able to keep up with the sender; flow control throttles the sender to avoid data loss.

3. The specific process of TCP's 3-way handshake when a connection is established and the 4-way handshake when it is closed

  • The 3-way handshake used to establish a connection:
    First handshake: the client sends a connection request (SYN) to the server. Second handshake: after accepting the request, the server replies to the client with an acknowledgment (SYN+ACK). Third handshake: the client sends the server an acknowledgment (ACK) of the server's message. After that, the client and server begin to communicate.
  • The 4-way handshake used to close the connection:
    First handshake: the side initiating the close sends a close request (FIN). Second handshake: after receiving the request, the peer acknowledges the close with an ACK. Third handshake: once the peer has finished sending its remaining data, it sends its own close message (FIN) to shut down its direction of the connection. Fourth handshake: the side that initiated the close acknowledges this message with a final ACK and enters the important TIME_WAIT state.

4. The difference between epoll and select

  • The maximum number of fds select can monitor in one process is limited, set by FD_SETSIZE (1024 by default on Linux).
  • epoll has no such limit; the upper bound on the fds it supports is the maximum number of files that can be opened, generally far larger than that. Roughly, the more memory, the higher the limit: 1 GB of memory supports around 100,000 fds.
  • select polls: the kernel checks every fd to see whether its data is ready, so when there are many fds, efficiency plummets. epoll is event-driven: once some fd's data is ready, the kernel activates that descriptor through a callback-like mechanism, with no need to keep scanning for ready descriptors. This is the most essential reason for epoll's high efficiency.
  • With either select or epoll, the kernel must deliver fd events to user space, so avoiding unnecessary memory copies matters. epoll shares the same memory between kernel and user space via mmap, whereas select performs an unnecessary copy.

5. The difference and implementation principle of et and lt in epoll

  • LT: level-triggered. Less efficient than ET, especially under high concurrency and heavy traffic, but the coding requirements are lower and it is less error-prone. In LT mode, as long as data remains unread, the kernel keeps notifying you, so you need not worry about losing events.
  • ET: edge-triggered, very efficient. Under high concurrency and heavy traffic, it makes far fewer epoll system calls than LT, so it is faster. But the programming bar is higher: each notification must be handled carefully (reading until EAGAIN), or events are easily lost.

6. How to synchronize multiple threads

The most commonly used multi-thread synchronization primitives on Linux are mutexes, condition variables, and semaphores.
https://blog.csdn.net/qq_17308321/article/details/79929623

4. Methods of inter-process communication and their advantages and disadvantages

  • A. Pipeline (pipe)
    A pipe is a half-duplex communication method: data can flow in only one direction, and it can be used only between related processes. Kinship usually means a parent-child relationship.
  • B. Named pipe (FIFO)
    A named pipe is also a half-duplex communication method, but it allows communication between unrelated processes.
  • C. Semaphore (semaphore)
    A semaphore is a counter that can be used to control access to shared resources by multiple processes. It is often used as a locking mechanism to prevent other processes from accessing shared resources while a process is accessing the resource. Therefore, it is mainly used as a means of synchronization between processes and between different threads in the same process.
  • D. Message queue (message queue)
    A message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the drawbacks that signals carry little information, that pipes can carry only unformatted byte streams, and that buffer sizes are limited.
  • E. Signal (signal)
    A signal is a relatively complex communication method used to notify the receiving process that some event has occurred.
  • F. Shared memory (shared memory)
    Shared memory is to map a section of memory that can be accessed by other processes. This shared memory is created by one process, but multiple processes can access it. Shared memory is the fastest IPC method, and it is specially designed for the inefficiency of other inter-process communication methods. It is often used in conjunction with other communication mechanisms, such as semaphores, to achieve synchronization and communication between processes.
  • G. Socket (socket) A socket is also an inter-process communication mechanism; unlike the other mechanisms, it can be used for communication between processes on different machines.

Advantages and disadvantages

  • A. Anonymous pipes are simple and convenient, but they are limited to one-way communication and can be shared only between related processes; named pipes can be used by processes of any relationship, but they persist in the system for a long time and are error-prone if used improperly.
  • B. Message queues are no longer limited to parent-child processes: any processes can communicate through a shared message queue, and the system calls synchronize message sending and receiving, so users need not handle synchronization themselves. They are easy to use, but copying messages costs extra CPU time, so they are unsuitable for large volumes of data or very frequent operations.
  • C. To address the shortcomings of message buffering, shared memory exchanges information directly in a memory buffer without copying; its advantages are speed and large capacity. But shared memory works by attaching the shared buffer directly into each process's virtual address space, so the operating system cannot synchronize reads and writes between the processes; they must solve that themselves with other synchronization tools (such as semaphores). Also, since the memory physically exists in one computer system, it can be shared only by processes on the same machine and cannot communicate across a network.

5. Semaphores

A semaphore is an inter-process communication mechanism used to solve synchronization and mutual-exclusion problems between processes. It comprises a semaphore variable, a queue of processes waiting for the resource governed by the semaphore, and two atomic operations on the semaphore (the P and V operations). The semaphore corresponds to some resource and takes a non-negative integer value: that value is the number of currently available resources, and a value of 0 means no resource is currently available.

● P operation: if a resource is available (semaphore value > 0), the calling process takes one (the value is decremented by 1 and the process enters its critical section); if no resource is available (value = 0), the calling process blocks in the waiting queue until the system assigns it a resource.
● V operation: if some process is waiting for the resource in the semaphore's waiting queue, wake one blocked process; otherwise release one resource (i.e., increment the semaphore value by 1).

6. The whole process after entering www.baidu.com in the browser *
7. How is the reliability of TCP guaranteed?
Answer: TCP's reliability is achieved mainly through sequence numbers and acknowledgments (ACK), supported by checksums, timeout and retransmission, and flow and congestion control.
8. OSI

1. The seven layers of the OSI model are:
application layer, presentation layer, session layer, transport layer, network layer, data link layer, physical layer
1.1. TCP/IP layering (4 layers):
link layer, network layer, transport layer, application layer
1.2. The five-layer model (5 layers):
physical layer, data link layer, network layer, transport layer, application layer

9. What is the difference between paging and segmentation (memory management)?

Segment storage management is a memory allocation scheme that matches the user's view of a program. In segment storage management, the program's address space is divided into several segments, such as the code segment, data segment, and stack segment; each process thus has a two-dimensional address space, and processes are independent and do not interfere with one another. The advantage of segment management is that there is no internal fragmentation (because the segment size is variable, it can be adjusted to fit, which eliminates internal fragmentation). However, swapping segments in and out produces external fragmentation (for example, swapping out a 5 KB segment and loading a 4 KB segment in its place leaves a 1 KB external fragment).

The paging storage management scheme separates the user's view of memory from physical memory. In paging storage management, the program's logical address space is divided into fixed-size pages (page), and physical memory is divided into frames of the same size. When the program is loaded, any page can be placed into any frame; frames need not be contiguous, allowing discrete (non-contiguous) allocation. The advantage of paging storage management is that there is no external fragmentation (because the page size is fixed), but there is internal fragmentation (a page may not be completely filled).

The difference between the two:

The purposes differ: paging exists for the system's own management needs rather than the user's, and a page is a physical unit of information; segmentation exists to better serve the user's needs, and a segment is a logical unit of information containing a group of data whose meaning is relatively complete;

Different sizes: the size of a page is fixed and determined by the system, while the length of a segment is not fixed and is determined by the function it performs;

The address space is different: the segment provides the user with a two-dimensional address space; the page provides the user with a one-dimensional address space;

Information sharing: a segment is a logical unit of information, which is convenient for storage protection and information sharing, and page protection and sharing are limited;

Memory fragmentation: the advantage of page storage management is that there is no external fragmentation (because the page size is fixed), but there is internal fragmentation (a page may not be completely filled); the advantage of segment management is that there is no internal fragmentation (because the segment size is variable, it can be adjusted to fit). However, swapping segments in and out produces external fragmentation (for example, swapping out a 5 KB segment and loading a 4 KB segment in its place leaves a 1 KB external fragment).

10. What is virtual memory?
1). The development of memory management
  No memory abstraction (a single process; apart from what the operating system uses, all memory belongs to the user program) —> memory abstraction (multiple processes, each with an independent address space; swapping when memory cannot hold all concurrently executing processes) —> contiguous memory allocation (fixed-size partitions, which limit the degree of multiprogramming; variable partitions with first fit, best fit, worst fit; fragmentation) —> non-contiguous memory allocation (segmentation, paging, segmented paging, virtual memory)
2). Virtual memory
  Virtual memory allows a process to execute without being entirely resident in memory. The basic idea is that each process has an independent address space, divided into equal-sized blocks called pages (Page), each covering a contiguous range of addresses. These pages are mapped onto physical memory, but not all pages need to be in memory for the program to run. When the program references an address that is in physical memory, the hardware performs the mapping immediately; when it references an address that is not, the operating system loads the missing page into physical memory and re-executes the instruction that failed. Thus, logically the process sees a large memory space, while in fact only part of it corresponds to physical memory (divided into frames, normally the same size as pages) and the rest is kept on disk.
Note that demand-paging, demand-segmentation, and demand segmented-paging systems are all implementations of virtual memory: information is moved between memory and external storage on demand.
3). Page replacement algorithm

FIFO (first-in, first-out): replace the page that has been resident longest; commonly used in operating systems, e.g., job scheduling, mainly because it is simple to implement;

LRU (least recently used): replace the page that has gone unused for the longest time up to now;

LFU (least frequently used): replace the page that has been used the fewest times;

OPT (optimal replacement): replace the page that will not be needed for the longest time in the future; theoretically the best, but it requires knowledge of future references, so in practice it serves as a benchmark for the other algorithms.
4). The application and advantages of virtual memory

Virtual memory is well suited for use in multiprogramming systems, where pieces of many programs are kept in memory at the same time. While a program is waiting for part of it to be read into memory, it can dedicate the CPU to another process. The use of virtual memory can bring the following benefits:

  • More processes can be kept in memory, improving system concurrency
  • The tight constraint between a process and physical memory is removed; a process can be larger than the whole of physical memory

11. Thrashing

Thrashing is essentially frequent paging behavior. Specifically, when a process takes a page fault, some page must be replaced; but if all the other pages are in active use, the page that gets evicted is immediately needed again. Page faults then occur continuously and the efficiency of the whole system drops sharply. This phenomenon is called thrashing (jitter).

Strategies for addressing memory thrashing include:

  • If the cause is a poor page replacement policy, change the replacement algorithm;
  • If too many programs are running for all their frequently accessed pages to fit in memory at once, reduce the degree of multiprogramming;
  • Otherwise, there are two ways left: terminate the process or increase the physical memory capacity.


Origin blog.csdn.net/weixin_40178954/article/details/101395764