Frequently Asked Interview Questions

What is a wild pointer?

Reference answer:

A wild pointer is a pointer that points to an object that has already been deleted, or to a restricted or arbitrary area of memory (for example, a pointer that was never initialized).
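
A minimal sketch of both kinds of wild/dangling pointer (variable names are illustrative):

```cpp
#include <iostream>

int main() {
    int* p1;               // wild pointer: never initialized, points anywhere
    int* p2 = new int(42);
    delete p2;             // p2 now dangles: the object it pointed to is gone
    // *p1 = 1;            // undefined behavior: writing through a wild pointer
    // std::cout << *p2;   // undefined behavior: reading a deleted object
    p2 = nullptr;          // common fix: null out pointers after delete
    return 0;
}
```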

 

Talk about the four cast operators in C++

Reference answer:

C++ provides four cast operators: static_cast, dynamic_cast, const_cast, and reinterpret_cast.

1、const_cast

Used to convert a const variable to non-const (it adds or removes const/volatile qualifiers).

2、static_cast

Used for the usual implicit conversions, such as non-const to const and void* to a typed pointer. In a polymorphic hierarchy, static_cast can perform upcasts safely; downcasts compile but are unchecked, so if the object is not actually of the target type the result is undefined.

3、dynamic_cast

A dynamic (runtime-checked) type conversion. It works only on classes containing virtual functions and converts up and down a class hierarchy. It can only convert pointers or references. On a failed downcast, a pointer conversion returns NULL and a reference conversion throws an exception. It is worth learning the internal mechanism of this conversion.

Upcast: converting from a subclass to its base class.

Downcast: converting from a base class to a subclass.

Whether a downcast can be performed is determined at run time, by checking whether the dynamic type of the object is the same as (or derived from) the target type when the conversion statement executes.

4、reinterpret_cast

Can convert almost anything, for example an int to a pointer; this can easily cause problems, so use it as little as possible.

5、Why not use C-style casts?

C-style casts look powerful on the surface because they can convert almost anything, but the intent of the conversion is not clear enough, errors cannot be checked by the compiler, and they are error-prone.
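
A short sketch exercising all four operators in one place (types and names are illustrative):

```cpp
#include <cstdint>
#include <iostream>

struct Base { virtual ~Base() {} };
struct Derived : Base { void hello() { std::cout << "Derived\n"; } };

int main() {
    // static_cast: implicit-style conversions, checked at compile time only
    double d = 3.14;
    int i = static_cast<int>(d);

    // const_cast: remove constness (writing through pc would still be UB here)
    const int c = 10;
    int* pc = const_cast<int*>(&c);

    // dynamic_cast: runtime-checked downcast, needs a polymorphic base
    Base* b = new Derived;
    if (Derived* dp = dynamic_cast<Derived*>(b)) dp->hello();       // succeeds
    Base* b2 = new Base;
    std::cout << (dynamic_cast<Derived*>(b2) == nullptr) << '\n';   // 1: failed

    // reinterpret_cast: reinterpret the bit pattern; use sparingly
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(b);

    delete b; delete b2;
    (void)i; (void)pc; (void)addr;
    return 0;
}
```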

 

Please tell us about smart pointers in C++

Smart pointers are mainly used to manage heap-allocated memory: a smart pointer wraps an ordinary pointer in a stack-allocated class object. When the stack object's lifetime ends, its destructor releases the managed memory, preventing memory leaks. The most commonly used smart pointer type in C++11 is shared_ptr, which uses reference counting: it records how many smart pointers currently reference the memory resource. The reference count itself is allocated on the heap. The count is incremented by 1 when a new reference is added and decremented when a reference expires. Only when the reference count drops to 0 does the smart pointer automatically release the memory resource it references. Note that an ordinary pointer cannot be assigned directly to a shared_ptr during initialization, because shared_ptr is a class, not a pointer. An ordinary pointer can be passed in through the constructor or via the make_shared function, and the underlying ordinary pointer can be obtained with the get() member function.
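
A small usage sketch of the points above (explicit construction, make_shared, use_count, get):

```cpp
#include <iostream>
#include <memory>

int main() {
    // std::shared_ptr<int> p = new int(5);   // error: no implicit conversion
    std::shared_ptr<int> p(new int(5));       // OK: explicit constructor
    auto q = std::make_shared<int>(5);        // preferred: one allocation

    auto r = q;                               // copy: count goes to 2
    std::cout << q.use_count() << '\n';       // prints 2
    int* raw = q.get();                       // raw pointer, no ownership
    std::cout << *raw << '\n';                // prints 5
    return 0;
}   // counts drop to 0 here; both ints are freed automatically
```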

 

Can smart pointers still leak memory?

Yes. When two objects point at each other through shared_ptr member variables, a circular reference is formed: neither reference count can ever drop to 0, so the reference-counting mechanism fails and memory leaks.

 

How do smart pointers solve this memory leak?

To resolve memory leaks caused by circular references, C++11 introduced the weak pointer weak_ptr. Constructing a weak_ptr does not modify the reference count, so it does not take ownership of the managed object. In that respect it resembles an ordinary pointer, but unlike an ordinary pointer it refers to the reference-count control block shared with shared_ptr, so it can detect whether the managed object has already been released and thereby avoid illegal access.
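
A minimal sketch of the cycle and its fix: making one side a weak_ptr lets both destructors run (the class names A and B are illustrative):

```cpp
#include <iostream>
#include <memory>

struct B;                                    // forward declaration
struct A {
    std::shared_ptr<B> b;
    ~A() { std::cout << "~A\n"; }
};
struct B {
    std::weak_ptr<A> a;                      // weak_ptr breaks the cycle
    ~B() { std::cout << "~B\n"; }
};

int main() {
    auto a = std::make_shared<A>();
    auto b = std::make_shared<B>();
    a->b = b;                                // strong reference
    b->a = a;                                // weak reference: count unchanged
    if (auto sp = b->a.lock())               // lock() yields a shared_ptr
        std::cout << "A is alive\n";
    return 0;
}   // both destructors run; with a shared_ptr in B, neither would
```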

 

Please describe the smart pointers in C++

C++ has four smart pointers: auto_ptr, unique_ptr, shared_ptr, and weak_ptr. The last three are supported since C++11, and the first (auto_ptr) is deprecated as of C++11.

Why use a smart pointer:

The role of a smart pointer is to manage a raw pointer, because situations like this arise: memory requested in a function is never released at the end of the function, causing a memory leak. Using a smart pointer largely avoids this problem, because a smart pointer is a class: when an instance of the class goes out of scope, its destructor is called automatically, and the destructor releases the resource. So the principle behind smart pointers is that memory is freed automatically at the end of the scope, without having to release it manually.

 

2、unique_ptr (the replacement for auto_ptr)

unique_ptr implements strict exclusive ownership, guaranteeing that only one smart pointer can point to the object at any given time. It is especially useful for avoiding resource leaks, for example forgetting to call delete after creating an object with new because an exception occurred.
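
A small sketch of exclusive ownership and move transfer (make_unique is C++14; with C++11 alone, construct from new directly):

```cpp
#include <iostream>
#include <memory>
#include <utility>

int main() {
    auto p = std::make_unique<int>(7);   // C++11: std::unique_ptr<int> p(new int(7));
    // auto q = p;                       // error: unique_ptr cannot be copied
    auto q = std::move(p);               // ownership transfers; p becomes null
    std::cout << (p == nullptr) << ' ' << *q << '\n';   // prints: 1 7
    return 0;
}   // q's destructor deletes the int, even if an exception had been thrown
```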

3、shared_ptr

shared_ptr implements the concept of shared ownership. Multiple smart pointers can point to the same object, and the associated resource is released "when the last reference to it is destroyed". As the name "shared" suggests, the resource can be shared by multiple pointers, using a counting mechanism to track how many pointers share it. The number of owners can be inspected with the member function use_count(). Besides being constructed from a new expression, it can also be constructed from an auto_ptr, unique_ptr, or weak_ptr. When we call reset() (note that shared_ptr, unlike unique_ptr, has no release() member), the current pointer gives up its ownership of the resource and the count decreases by one; when the count reaches 0, the resource is released.

4、weak_ptr

weak_ptr is a smart pointer that does not control the lifetime of the object it points to; it points to an object managed by a shared_ptr, and the object's memory is managed by that strong reference. weak_ptr only provides a means of access to the managed object. weak_ptr is designed to assist shared_ptr: it can only be constructed from a shared_ptr or another weak_ptr, and its construction and destruction neither increase nor decrease the reference count. weak_ptr is used to solve the deadlock that arises when two shared_ptrs refer to each other: in that case the two reference counts can never drop to 0, and the resources are never released. weak_ptr is a weak reference to the object and does not increase its reference count; weak_ptr and shared_ptr can be converted into each other: a shared_ptr can be assigned directly to a weak_ptr, and a shared_ptr can be obtained from a weak_ptr by calling its lock() function.

 

Please describe memory allocation in C/C++

A 32-bit CPU can address a 4G linear space, and each process has its own independent 4G logical address space, of which 0~3G is user-mode space and 3~4G is kernel space. The same logical address in different processes is mapped to different physical addresses. The logical address space is divided as follows:

Each segment is as follows (3G of user space, 1G of kernel space):

Static area:

text segment (code segment): consists of a read-only storage area and a text area; the read-only area stores string constants, and the text area stores the program's machine code.

data segment: stores the program's initialized global and static variables.

bss segment: stores uninitialized global and static variables (both global and local statics), as well as global and static variables initialized to 0. Uninitialized globals and statics are zeroed in one pass before the main program runs; in other words, uninitialized global variables are initialized to 0 by the compiler/runtime.

Dynamic area:

heap: a process has no heap segment before it calls malloc; the heap is created only on the first malloc call, and it grows dynamically during execution (by moving the break pointer), from lower addresses to higher addresses. The heap is used for small allocations. In the kernel's mm_struct, the start address of the heap is identified by start_brk and the end address by brk.

memory mapping segment (mmap area): used for memory-mapped files such as dynamic link libraries, and for large allocations (malloc calls mmap for large requests).

stack: stores function parameters, return addresses, local variables, and return values, growing from higher addresses to lower addresses. A maximum stack size is set when the process is created; on Linux it can be specified with the ulimit command.
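
A small sketch of where typical variables land; exact behavior varies by compiler and libc (glibc's malloc, for example, switches from brk to mmap above a size threshold):

```cpp
#include <cstdlib>

int g_init = 1;            // data segment: initialized global
int g_uninit;              // bss segment: uninitialized global (zeroed)
static int s_zero = 0;     // bss segment: static initialized to 0
const char* msg = "hi";    // "hi" lives in the read-only text/rodata area

int main(int argc, char** argv) {              // argc, argv: on the stack
    int local = 5;                             // stack: local variable
    int* small = (int*)malloc(sizeof(int));    // heap: small allocation (brk)
    int* big   = (int*)malloc(1 << 20);        // large: libc may use mmap instead
    free(small);
    free(big);
    (void)local;
    return 0;
}
```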

 

Please describe the shared-memory-related API

Reference answer:

Linux allows different processes to access the same logical memory and provides a set of APIs for this, declared in the header file sys/shm.h.
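
A minimal single-process sketch of the System V calls declared there (shmget, shmat, shmdt, shmctl); the key 1234 is illustrative:

```cpp
#include <sys/shm.h>
#include <cstdio>
#include <cstring>

int main() {
    // Create (or open) a 4 KB segment identified by the key.
    int shmid = shmget((key_t)1234, 4096, 0666 | IPC_CREAT);
    if (shmid == -1) { perror("shmget"); return 1; }

    // Attach the segment into this process's address space.
    void* addr = shmat(shmid, nullptr, 0);
    if (addr == (void*)-1) { perror("shmat"); return 1; }

    std::strcpy((char*)addr, "hello");   // another process attaching the
                                         // same key would see this string
    shmdt(addr);                         // detach from our address space
    shmctl(shmid, IPC_RMID, nullptr);    // mark the segment for deletion
    return 0;
}
```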

 

Please talk about the Reactor model

Reference answer:

In the reactor model, the main thread is responsible only for monitoring whether an event occurs on a file descriptor; if one does, it immediately notifies a worker thread. The main thread does no other substantive work: reading and writing data, accepting new connections, and handling client requests are all done in worker threads. The model consists of the following components:

1) Handle: an operating-system handle, an abstraction of a resource at the OS level. It can be an open file, a connection (Socket), a timer, and so on. Since the Reactor pattern is generally used in network programming, Handle here usually refers to a Socket, i.e., a network connection.

2) Synchronous Event Demultiplexer: blocks waiting for a series of events to arrive on a set of Handles; when the blocking wait returns, the indicated event type can be executed on the returned Handles without blocking. This module is generally implemented with the operating system's select.

3) Initiation Dispatcher: the manager of Event Handlers, i.e., a container of EventHandlers used to register and remove them. In addition, as the entry point of the Reactor pattern, it calls the Synchronous Event Demultiplexer's select method to block waiting for events; when the wait returns, it dispatches each event to the corresponding Event Handler according to its Handle, i.e., it calls back the EventHandler's handle_event() method.

4) Event Handler: defines the event-handling method handle_event(), for the Initiation Dispatcher to call back.

5) Concrete Event Handler: implements the EventHandler interface with the concrete event-processing logic.
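
A toy single-threaded sketch of these components using select; the class and method names follow the pattern's terminology, and a real server would register sockets and hand events to worker threads:

```cpp
#include <sys/select.h>
#include <unistd.h>
#include <map>

// Event Handler: the callback interface
struct EventHandler {
    virtual void handle_event(int fd) = 0;
    virtual ~EventHandler() {}
};

// Concrete Event Handler: echoes whatever arrives on stdin
struct StdinEchoHandler : EventHandler {
    void handle_event(int fd) override {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) write(STDOUT_FILENO, buf, n);
    }
};

// Initiation Dispatcher: registry plus a select()-based event loop
struct Dispatcher {
    std::map<int, EventHandler*> handlers;   // Handle -> EventHandler
    void register_handler(int fd, EventHandler* h) { handlers[fd] = h; }
    void handle_events() {                   // one iteration of the loop
        fd_set rd; FD_ZERO(&rd);
        int maxfd = -1;
        for (auto& kv : handlers) {
            FD_SET(kv.first, &rd);
            if (kv.first > maxfd) maxfd = kv.first;
        }
        if (select(maxfd + 1, &rd, nullptr, nullptr, nullptr) > 0)
            for (auto& kv : handlers)        // demultiplex and dispatch
                if (FD_ISSET(kv.first, &rd)) kv.second->handle_event(kv.first);
    }
};

int main() {
    Dispatcher d;
    StdinEchoHandler h;
    d.register_handler(STDIN_FILENO, &h);    // Handle = fd 0
    for (;;) d.handle_events();
}
```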

 

How would you design for high concurrency with a single-threaded approach?

Reference answer:

In a single-threaded model, I/O multiplexing can be used to improve a single thread's ability to handle multiple requests, combined with an event-driven model that handles events through asynchronous callbacks.
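
A minimal sketch of the idea using Linux epoll, one thread multiplexing descriptors (only stdin is registered here for brevity):

```cpp
#include <sys/epoll.h>
#include <unistd.h>

int main() {
    int epfd = epoll_create1(0);                   // one epoll instance
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = STDIN_FILENO;
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);   // register stdin

    epoll_event ready[16];
    for (;;) {
        int n = epoll_wait(epfd, ready, 16, -1);   // block until events arrive
        for (int i = 0; i < n; ++i) {              // dispatch each ready fd
            char buf[256];
            ssize_t r = read(ready[i].data.fd, buf, sizeof(buf));
            if (r <= 0) { close(epfd); return 0; } // EOF: leave the loop
            write(STDOUT_FILENO, buf, r);
        }
    }
}
```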

 

Please explain in detail these C++11 features: variadic templates, rvalue references, and lambdas.
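
A compact sketch touching all three features (all names are illustrative):

```cpp
#include <iostream>
#include <string>
#include <utility>

// Variadic template: accepts any number of arguments of any types
template <typename... Args>
void print_all(Args&&... args) {
    // C++11 pack-expansion trick (C++17 would allow a fold expression)
    int dummy[] = { (std::cout << args << ' ', 0)... };
    (void)dummy;
    std::cout << '\n';
}

int main() {
    print_all(1, 2.5, "three");

    // Rvalue reference + std::move: steal the buffer instead of copying it
    std::string a = "a long string that would be expensive to copy";
    std::string b = std::move(a);        // a is left in a valid-but-empty state

    // Lambda: a closure capturing a local variable by value
    int factor = 3;
    auto times = [factor](int x) { return x * factor; };
    std::cout << b.size() << ' ' << times(7) << '\n';
    return 0;
}
```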

 

 

Talk about the concepts of process and thread, why threads exist when we already have processes, what the differences are, and how each is synchronized

A process is the encapsulation of a running program; it is the basic unit on which the system schedules and allocates resources, and it enables concurrency at the operating-system level.

A thread is a subtask of a process and the basic unit of CPU scheduling and dispatch; it serves the real-time responsiveness of programs and enables concurrency within a process. The thread is the smallest unit of execution and scheduling that the operating system recognizes. Each thread holds its own virtual processor: its own registers, instruction counter, and processor state. Each thread performs a different task, but all threads in a process share the same address space (that is, the same memory-mapped files, object code, dynamically allocated memory, and so on), open files, signal queues, and other kernel resources.

 

The differences:

1. A thread can belong to only one process, while a process can have multiple threads, but at least one thread. A thread depends on its process to exist.

2. A process has independent memory during execution, while multiple threads share their process's memory. (Resources are allocated to the process, and all threads within the same process share all of its resources: multiple threads share the code segment (code and constants), the data segment (global and static variables), and the extended segment (heap), but each thread has its own stack segment, also called the runtime stack, which stores its local and temporary variables.)

3. The process is the smallest unit of resource allocation; the thread is the smallest unit of CPU scheduling.

4. Overhead: when creating or destroying a process, the system must allocate or reclaim resources such as memory space and I/O devices, so the cost the operating system pays is significantly greater than when creating or destroying a thread. Similarly, switching between processes involves saving the entire CPU environment of the current process and setting up the CPU environment of the newly scheduled one, whereas switching threads only requires saving and setting a small number of registers and involves no memory-management operations. Clearly, the overhead of a process switch is much greater than that of a thread switch.

5. Communication: multiple threads in the same process share the same address space, so synchronization and communication between them are relatively easy. Processes must communicate through IPC, whereas threads can communicate by directly reading and writing process data (such as global variables), though synchronization and mutual-exclusion mechanisms are needed to ensure data consistency. In some systems, thread switching, synchronization, and communication require no intervention from the operating-system kernel.

6. Processes are simpler to program and debug and highly reliable, but creation and destruction are expensive; threads are the opposite: low overhead and fast switching, but programming and debugging are relatively complex.

7. Processes do not affect one another; with threads, one thread hanging can bring down the whole process.

8. Processes adapt to multi-core and multi-machine (distributed) environments; threads are suited to multi-core.

 

Inter-process communication methods:

Inter-process communication includes pipes, System IPC (message queues, semaphores, signals, shared memory, etc.), and sockets.

1. Pipes:

Pipes include anonymous pipes and named pipes: an anonymous pipe can be used for communication between parent and child processes (processes with a kinship relation), while a named pipe, in addition to the functions of an anonymous pipe, also allows communication between unrelated processes.

1.1 Ordinary (anonymous) pipe, PIPE:

1) It is half-duplex (data flows in only one direction) and has a fixed read end and write end.

2) It can only be used for communication between related processes (parent and child, or sibling processes).

3) It can be seen as a special kind of file and can be read and written with the ordinary read and write functions. But it is not an ordinary file: it does not belong to any file system and exists only in memory.

1.2 Named pipe, FIFO:

1) A FIFO can exchange data between unrelated processes.

2) A FIFO has a path name associated with it and exists in the file system as a special device file.
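
A minimal parent-to-child sketch with an anonymous pipe, showing the half-duplex read/write ends:

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }  // fds[0]=read, fds[1]=write

    pid_t pid = fork();
    if (pid == 0) {                      // child: reads from the pipe
        close(fds[1]);                   // close the unused write end
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child got: %s\n", buf);
        close(fds[0]);
        return 0;
    }
    close(fds[0]);                       // parent: writes into the pipe
    const char* msg = "hello from parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);                       // EOF for the reader
    wait(nullptr);
    return 0;
}
```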

 

2. System IPC:

2.1 Message queues

A message queue is a linked list of messages stored in the kernel. A message queue is identified by an identifier (the queue ID). (Message queues overcome the shortcomings of signals carrying little information, pipes carrying only plain byte streams, and limited buffer sizes.) A process with write permission can add new messages to the queue according to certain rules; a process with read permission on the queue can read messages from it.

Features:

1) A message queue is record-oriented: each message has a specific format and a specific priority.

2) A message queue is independent of the sending and receiving processes. When a process terminates, the message queue and its contents are not deleted.

3) A message queue supports random querying of messages: messages need not be read in FIFO order, they can also be read by message type.
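
A minimal single-process sketch of the System V message-queue calls (msgget, msgsnd, msgrcv, msgctl); the key 1234 and message type 2 are illustrative, and reading by type shows the non-FIFO access described above:

```cpp
#include <sys/msg.h>
#include <cstdio>
#include <cstring>

struct Msg {
    long mtype;        // required first field: the message type (> 0)
    char mtext[64];    // payload
};

int main() {
    int qid = msgget((key_t)1234, 0666 | IPC_CREAT);
    if (qid == -1) { perror("msgget"); return 1; }

    Msg out{2, {}};                                    // a type-2 message
    std::strcpy(out.mtext, "hello");
    msgsnd(qid, &out, sizeof(out.mtext), 0);

    Msg in{};
    msgrcv(qid, &in, sizeof(in.mtext), 2, 0);          // read by type, not FIFO
    std::printf("got type %ld: %s\n", in.mtype, in.mtext);

    msgctl(qid, IPC_RMID, nullptr);                    // remove the queue
    return 0;
}
```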

 

2.2 Semaphores

A semaphore differs from the IPC structures already introduced: it is a counter used to control multiple processes' access to shared resources. Semaphores are used for mutual exclusion and synchronization between processes, not for storing data communicated between processes.

Features:

1) Semaphores are used for inter-process synchronization; to transfer data between processes they must be combined with shared memory.

2) Semaphores are based on the operating system's PV operations, and semaphore operations in a program are atomic.

3) Each PV operation on a semaphore is not limited to adding 1 to or subtracting 1 from its value; any positive integer can be added or subtracted.

4) Semaphore sets (groups) are supported.
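
A minimal P/V sketch with the System V semaphore calls (semget, semctl, semop); the key 5678 is illustrative, and on Linux the semun union must be declared by the caller:

```cpp
#include <sys/sem.h>
#include <cstdio>

// On Linux the caller must define this union for semctl
union semun { int val; struct semid_ds* buf; unsigned short* array; };

int main() {
    int sid = semget((key_t)5678, 1, 0666 | IPC_CREAT);
    if (sid == -1) { perror("semget"); return 1; }

    semun arg; arg.val = 1;
    semctl(sid, 0, SETVAL, arg);          // initialize the counter to 1

    sembuf p{0, -1, 0};                   // P operation: wait / decrement
    sembuf v{0, +1, 0};                   // V operation: signal / increment

    semop(sid, &p, 1);                    // enter the critical section
    std::puts("in critical section");
    semop(sid, &v, 1);                    // leave the critical section

    semctl(sid, 0, IPC_RMID);             // remove the semaphore set
    return 0;
}
```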

 

2.3 Signals

A signal is a relatively sophisticated communication mechanism, used to notify a receiving process that some event has occurred.

 

2.4 Shared memory

Shared memory allows multiple processes to access the same memory region; different processes can immediately see updates made to the shared memory by another process. This approach must rely on some kind of synchronization, such as mutexes or semaphores.

Features:

1) Shared memory is the fastest form of IPC, because processes access the memory directly.

2) Because multiple processes can operate on it simultaneously, synchronization is required.

3) Shared memory and semaphores are commonly used together; the semaphore synchronizes access to the shared memory.

 

3. Sockets:

A socket is also an inter-process communication mechanism; unlike the other mechanisms, it can be used for communication between processes on different hosts.

 

Communication/synchronization between threads:

Critical section: serializes access by multiple threads to a public resource or a piece of code; it is fast and suitable for controlling data access.

Mutex (Synchronized/Lock): uses a mutual-exclusion mechanism; only the thread holding the mutex has access to the public resource. Because there is only one mutex object, the public resource is guaranteed not to be accessed by multiple threads at once.

Semaphore: designed for controlling a resource with a limited number of users; it allows multiple threads to access the same resource at the same time, but generally limits the maximum number of threads that may access the resource at any moment.

Event (signal), Wait/Notify: keeps multithreaded operations synchronized by way of notification, and also conveniently implements priority comparisons among multiple threads.
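
A small producer/consumer sketch of the mutex and event (wait/notify) mechanisms in standard C++; on Linux the same idea maps to pthread_mutex_t and pthread_cond_t:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;                      // shared resource guarded by m

void producer() {
    for (int i = 0; i < 3; ++i) {
        { std::lock_guard<std::mutex> lk(m); q.push(i); }
        cv.notify_one();                // wake a waiting consumer
    }
}

void consumer() {
    for (int i = 0; i < 3; ++i) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !q.empty(); });   // wait until data arrives
        std::cout << "got " << q.front() << '\n';
        q.pop();
    }
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
    return 0;
}
```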

 

Please talk about the Linux virtual address space

 

Virtual memory is used to prevent different processes running at the same time from contending for physical memory and trampling on one another.

Virtual memory technology makes every running process believe it exclusively possesses the system's 4G of memory. All processes share the same physical memory; each process maps only the parts of its virtual address space that it currently needs into physical memory. In fact, when a process is created and loaded, the kernel merely "creates" the process's virtual memory layout, specifically initializing the memory-related lists in the process control structure. It does not immediately copy the program's data and code (such as the .text and .data sections) to the corresponding virtual-memory locations in physical memory; it only establishes the mapping between virtual memory and the disk file (called a memory mapping). Only when the program runs to the corresponding instructions does a page fault copy the data in. Furthermore, while a process is running, memory allocated dynamically, for example with malloc, only allocates virtual memory: the corresponding page-table entries are set up, but no physical memory is assigned. Only when the process actually accesses this data does a page fault occur.

Demand-paging systems, demand-segmentation systems, and demand segmented-paging systems are all aimed at virtual memory: information is exchanged between memory and external storage on request.

 

Please talk about concurrency and parallelism

Concurrency: macroscopically, two programs appear to run at the same time, for example multitasking on a single-core CPU. But at the micro level the instructions of the two programs are interleaved: my instructions are interspersed among yours and yours among mine, and within a single cycle only one instruction runs. This kind of concurrency does not improve the computer's performance; it only improves efficiency (such as CPU utilization).

Parallelism: running simultaneously in the strict physical sense, for example on a multi-core CPU where two programs run on two cores without affecting each other; within a single cycle each program runs its own instruction, i.e., two instructions execute at once. Parallelism does indeed improve the computer's efficiency, which is why CPUs keep developing toward multi-core designs.

 

When writing a multithreaded program on a single-core machine, do you need to consider locking, and why?

Reference answer:

When writing multithreaded programs on a single-core machine, you still need thread locks. Thread locks are commonly used for thread synchronization and communication. Thread-synchronization problems still exist in multithreaded programs on a single-core machine: because the operating system schedules preemptively, it generally allocates a time slice to each thread, and when a thread's time slice is exhausted, the operating system suspends it and runs another thread. If the two threads share some data and no thread lock is used, the shared data may be modified conflictingly.

 

Talk about the ways threads synchronize, ideally naming the specific system calls

Semaphores (sem_wait/sem_post), mutexes (pthread_mutex_lock/pthread_mutex_unlock), and condition variables (pthread_cond_wait/pthread_cond_signal).

 

Should a game server open a thread or a process for each user, and why?

A game server should open a process for each user, because threads within the same process can affect one another: a crashed thread can affect the other threads and bring the whole process down. Therefore, to ensure that different users cannot affect one another, a separate process should be opened for each user.

 

How do processes communicate?

Inter-process communication includes pipes, System IPC (message queues, semaphores, signals, shared memory, etc.), and sockets.

 

Please talk about the conditions under which deadlock occurs and how to resolve a deadlock

Reference answer:

Deadlock is a phenomenon in which two or more processes, during execution, wait for each other because of competition for resources. The four necessary conditions for deadlock are as follows:

Mutual exclusion: a process does not allow other processes to access a resource allocated to it; if another process requests the resource, it can only wait until the process in possession finishes using the resource and releases it.

Hold and wait: a process has obtained some resources but makes requests for additional ones; those resources may be occupied by other processes, so the request blocks, yet the process does not release the resources it already holds.

No preemption: a resource a process has obtained cannot be taken away before the process finishes using it; it can only be released by the process itself after use.

Circular wait: when deadlock occurs, there must exist a circular chain of processes each waiting for a resource held by the next.

Methods of handling deadlock break one of the four conditions above, mainly:

Allocate all resources at once, so that no further requests are made, breaking the hold-and-wait condition.

Allow resources to be preempted: when a process's request for new resources is not satisfied, it releases the resources it already holds, breaking the no-preemption condition.

Ordered resource allocation: the system assigns a serial number to each class of resource, and every process requests resources in increasing order of serial number (releasing in the opposite order), breaking the circular-wait condition, as sketched below.
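
A sketch of breaking circular wait in standard C++: std::lock acquires both mutexes atomically (C++17's std::scoped_lock wraps the same idea), which is equivalent to agreeing on a global acquisition order:

```cpp
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m1, m2;

// Deadlock-prone version: thread A locks m1 then m2 while thread B locks
// m2 then m1. Acquiring both mutexes in one atomic step (std::lock), or
// always in the same agreed order, breaks the circular-wait condition.
void worker(const char* name) {
    std::lock(m1, m2);   // deadlock-free acquisition of both locks
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
    std::cout << name << " holds both locks\n";
}

int main() {
    std::thread a(worker, "A"), b(worker, "B");
    a.join(); b.join();
    return 0;
}
```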

 

Talk about the process state-transition diagram: active ready, suspended (static) ready, active blocked, suspended (static) blocked

 


Origin: blog.csdn.net/wwxy1995/article/details/94365998