Selected interview questions and reference answers for the latest ByteDance social recruitment

  1. How do C++ smart pointers solve the memory leak problem?
    1. shared_ptr (shared smart pointer)

std::shared_ptr uses reference counting, and every copy of shared_ptr points to the same memory. The memory will be released when the last shared_ptr is destroyed.

A shared_ptr can be initialized through the constructor, the std::make_shared helper function, or the reset method:

// Constructor initialization
std::shared_ptr<int> p(new int(1));
std::shared_ptr<int> p2 = p;

// An uninitialized smart pointer can be initialized with the reset method.
std::shared_ptr<int> ptr;
ptr.reset(new int(1));
if (ptr) { std::cout << "ptr is not null.\n"; }

You cannot directly assign a raw pointer to a smart pointer:

std::shared_ptr<int> p = new int(1); // Compile error: implicit conversion from a raw pointer is not allowed

Get raw pointer:

Return the original pointer through the get method

std::shared_ptr<int> ptr(new int(1));
int *p = ptr.get();

Pointer deleter:

Smart pointer initialization can specify a custom deleter

void DeleteIntPtr(int *p) {
    delete p;
}

std::shared_ptr<int> p(new int, DeleteIntPtr);

When the reference count of p reaches 0, the deleter is automatically called to release the object's memory. The deleter can also be a lambda expression, for example:

std::shared_ptr<int> p(new int, [](int *p) { delete p; });

Precautions:

(1). Don't initialize multiple shared_ptrs from the same raw pointer; each one gets its own reference count, so the object would be deleted more than once (see the sketch below).

(2). Don't create a shared_ptr inside a function's argument list; define and initialize it before the call.

(3). Don't return the raw this pointer as a shared_ptr (use std::enable_shared_from_this instead).

(4). Avoid circular references, which keep the reference count from ever reaching zero.
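A minimal sketch of pitfall (1), assuming a small standalone program: constructing two shared_ptr objects from the same raw pointer gives each its own control block and reference count, so the object ends up being deleted twice.

#include <memory>

int main() {
    int *raw = new int(1);
    std::shared_ptr<int> sp1(raw);
    // std::shared_ptr<int> sp2(raw);  // Wrong: creates a second, independent reference count;
    //                                 // both sp1 and sp2 would delete raw (double free)
    std::shared_ptr<int> sp2 = sp1;    // Correct: copy sp1 so both share one reference count
    return 0;
}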

2. unique_ptr (exclusive smart pointer)

unique_ptr is an exclusive smart pointer: it does not allow other smart pointers to share the pointer it manages, and one unique_ptr cannot be assigned to another by copy assignment.

A unique_ptr cannot be copied, but it can be returned from a function to another unique_ptr, and ownership can be transferred to another unique_ptr with std::move, after which the original unique_ptr no longer owns the pointer (see the sketch below).

If you want only one smart pointer to manage a resource, or need to manage an array, use unique_ptr; if you want multiple smart pointers to manage the same resource, use shared_ptr.
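A small sketch of the ownership transfer described above (a toy program, not from the original text):

#include <memory>
#include <utility>

int main() {
    std::unique_ptr<int> up1(new int(42));
    // std::unique_ptr<int> up2 = up1;          // Compile error: copying is not allowed
    std::unique_ptr<int> up2 = std::move(up1);  // Ownership moves to up2; up1 is now empty
    std::unique_ptr<int[]> arr(new int[10]);    // unique_ptr can also manage an array
    return 0;
}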

3. weak_ptr (weak reference smart pointer)

The weak reference smart pointer weak_ptr is used to observe a shared_ptr; it does not increase the reference count. It does not manage the pointer inside the shared_ptr; it mainly monitors the lifetime of the shared_ptr, acting more like an assistant to shared_ptr.

weak_ptr does not overload operator* or operator->, because it does not share the pointer and cannot operate on the resource. It is used to observe, through the shared_ptr, whether the managed resource still exists; its construction does not increase the reference count, and its destruction does not decrease it. It is purely a bystander watching whether the resource managed by the shared_ptr is still alive.

weak_ptr can also be used to return the this pointer (std::enable_shared_from_this is built on it) and to solve the problem of circular references, as sketched below.
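A minimal sketch of breaking a circular reference with weak_ptr (the class names A and B are made up for illustration):

#include <memory>

struct B;                    // forward declaration

struct A {
    std::shared_ptr<B> b;
};

struct B {
    std::weak_ptr<A> a;      // weak_ptr breaks the cycle; a shared_ptr member here would
                             // keep both reference counts above zero forever
};

int main() {
    auto pa = std::make_shared<A>();
    auto pb = std::make_shared<B>();
    pa->b = pb;
    pb->a = pa;
    return 0;                // both objects are destroyed correctly at scope exit
}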

  1. The difference between soft links and hard links in Linux.
    In principle, a hard link has the same inode number as the source file, and the two are hard links to each other. A soft link has a different inode number from the source file, and the blocks they point to are also different; the soft link's block stores the path name of the source file.

In fact, a hard link and the source file are the same file, while a soft link is a separate file, similar to a shortcut, that stores the location of the source file so it can point to it.

In terms of usage restrictions: you cannot create hard links to directories, you cannot create hard links across different file systems, and you cannot create hard links to non-existent files; you can create soft links to directories, you can create soft links across file systems, and you can create soft links to files that do not exist.

  1. What is TCP's congestion control mechanism? Please describe it briefly.
    We know that TCP samples the RTT and computes the RTO with a timer. However, if the delay on the network suddenly increases, TCP's only response is to retransmit data; but retransmission puts an even heavier burden on the network, which leads to greater delays and more packet loss. This becomes a vicious circle and finally forms a "network storm". TCP's congestion control mechanism is designed to deal with this situation.

First, we need to understand one concept: to regulate the amount of data the sender puts on the network, a "congestion window" (cwnd) is defined. When sending data, the size of the congestion window is compared with the receive window advertised in the receiver's ACK, and the smaller of the two is used as the upper limit on the amount of data that can be sent.

Congestion control mainly consists of four algorithms:

1. Slow start: a connection that has just joined the network should speed up little by little instead of filling the link the moment it starts.

After the connection is established, cwnd is initialized to 1, meaning one MSS-sized segment can be sent.

Each time an ACK is received: cwnd++ (linear growth per ACK)

Each time an RTT passes: cwnd = cwnd * 2 (exponential growth per round trip)

The threshold ssthresh (slow start threshold) is an upper limit. When cwnd >= ssthresh, it will enter the "congestion avoidance algorithm"

2. Congestion avoidance: When the congestion window cwnd reaches a threshold, the window size no longer increases exponentially, but increases linearly to avoid network congestion caused by excessive growth.

Whenever an ACK is received, cwnd = cwnd + 1/cwnd

Whenever an RTT has passed, cwnd = cwnd + 1

3. Congestion occurrence: when a lost packet has to be retransmitted, the network is considered congested. There are two cases:

Case 1: the RTO timer expires and the packet is retransmitted:

ssthresh = cwnd / 2

cwnd is reset to 1

enter the slow start process again

Case 2: fast retransmit is triggered when 3 duplicate ACKs are received, instead of waiting for the RTO timeout:

cwnd = cwnd / 2, ssthresh = cwnd

enter the fast recovery algorithm (Fast Recovery)

4. Fast recovery: at least 3 duplicate ACKs have been received, which indicates that the network is not that bad and can recover quickly.

cwnd = ssthresh + 3 * MSS (the 3 accounts for the 3 segments confirmed to have been received)

Retransmit the packet indicated by the duplicate ACKs

If another duplicate ACK is received, cwnd = cwnd + 1

If a new ACK is received, cwnd = ssthresh, and the connection enters the congestion avoidance algorithm again.
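A toy sketch of how cwnd evolves under these rules: a simplified per-RTT simulation in C++, not real TCP code, with a single hard-coded timeout event for illustration.

#include <iostream>

int main() {
    double cwnd = 1, ssthresh = 16;          // in units of MSS
    for (int rtt = 0; rtt < 20; ++rtt) {
        bool timeout = (rtt == 9);           // pretend the RTO expires at round 9
        if (timeout) {                       // congestion: halve the threshold, back to slow start
            ssthresh = cwnd / 2;
            cwnd = 1;
        } else if (cwnd < ssthresh) {        // slow start: double every RTT
            cwnd *= 2;
        } else {                             // congestion avoidance: +1 every RTT
            cwnd += 1;
        }
        std::cout << "rtt=" << rtt << " cwnd=" << cwnd
                  << " ssthresh=" << ssthresh << "\n";
    }
    return 0;
}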

  1. How to understand the three mechanisms of IO multiplexing Select, Poll, Epoll?
    1.Select

First analyze the select function

int select(int maxfdp1,
           fd_set *readset,
           fd_set *writeset,
           fd_set *exceptset,
           const struct timeval *timeout);

【Parameter Description】

int maxfdp1 specifies the number of file descriptors to be tested; its value is the largest descriptor to be tested plus 1.

fd_set *readset , fd_set *writeset , fd_set *exceptset

fd_set can be understood as a collection, which stores file descriptors, that is, file handles. The three parameters in the middle specify the set of file descriptors that we want the kernel to test for read, write, and exception conditions. If you are not interested in a certain condition, you can set it as a null pointer.

const struct timeval *timeout timeout tells the kernel how long it can take to wait for any one of the specified file descriptor set to be ready. The timeval structure is used to specify the number of seconds and microseconds of this period of time.

【return value】

The int return value is the number of ready descriptors; 0 means the call timed out, and -1 means an error occurred.

select operating mechanism

The select() mechanism provides the fd_set data structure, which is essentially an array of long integers. Each array element can be associated with an open file handle (a socket, another kind of file, a named pipe, a device handle, and so on); establishing that association is the programmer's job. When select() is called, the kernel modifies the contents of fd_set according to the IO state, thereby telling the process that called select() which sockets or files are readable.

From the process point of view, there is not much difference between using the select function for IO requests and the synchronous blocking model. There are even additional operations such as adding a monitoring socket and calling the select function, which is less efficient. However, the biggest advantage of using select is that users can process multiple socket IO requests simultaneously in one thread. The user can register multiple sockets, and then continuously call select to read the activated sockets, so that multiple IO requests can be processed simultaneously in the same thread. In the synchronous blocking model, this goal must be achieved through multithreading.

Problems with the select mechanism:

(1) Every call to select has to copy the fd_set collection from user mode to kernel mode; if the collection is large, this overhead is high.

(2) Every call to select also has to traverse the entire fd_set passed in inside the kernel; again, a large collection makes this expensive.

(3) To limit the cost of the data copy, the kernel restricts the size of the monitored fd_set; this is controlled by a macro and cannot be changed (limited to 1024).
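A minimal usage sketch of select (watching standard input for readability; assumes a POSIX environment, error handling abbreviated):

#include <cstdio>
#include <sys/select.h>
#include <unistd.h>

int main() {
    fd_set readset;
    struct timeval timeout = {5, 0};           // wait at most 5 seconds

    FD_ZERO(&readset);
    FD_SET(STDIN_FILENO, &readset);            // watch fd 0 for readability

    int ret = select(STDIN_FILENO + 1, &readset, nullptr, nullptr, &timeout);
    if (ret > 0 && FD_ISSET(STDIN_FILENO, &readset))
        printf("stdin is readable\n");
    else if (ret == 0)
        printf("timeout\n");
    else
        perror("select");
    return 0;
}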

2.Poll

The mechanism of poll is similar to select; in essence there is not much difference. Managing multiple descriptors is still done by polling and processing according to descriptor state, but poll has no limit on the maximum number of file descriptors. In other words, poll only solves problem (3) above; it does not address the performance overhead of problems (1) and (2).

The following is the function prototype of poll:

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

typedef struct pollfd {
    int   fd;       // the file descriptor to check
    short events;   // events of interest on fd
    short revents;  // events that actually occurred on fd
} pollfd_t;

Poll changes how the file descriptor set is described, using the pollfd structure instead of select's fd_set, so the set of file descriptors poll supports is much larger than select's limit of 1024.

【Parameter Description】

struct pollfd *fds: fds is an array of struct pollfd that holds the socket descriptors whose status needs to be checked; the fds array is not cleared after poll returns. Each pollfd structure represents one monitored file descriptor, and passing the fds array tells poll() to monitor multiple descriptors at once. The events field of the structure is the event mask to monitor on that descriptor, set by the user; the revents field is the result event mask for that descriptor, set by the kernel when the call returns.

nfds_t nfds records the total number of descriptors in the array fds

【return value】

The int return value is the number of descriptors in fds that are ready for read, write, or error; 0 means the call timed out, and -1 means an error occurred.
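A minimal usage sketch of poll for the same scenario (watching standard input; assumes a POSIX environment):

#include <cstdio>
#include <poll.h>
#include <unistd.h>

int main() {
    struct pollfd fds[1];
    fds[0].fd = STDIN_FILENO;      // the descriptor to check
    fds[0].events = POLLIN;        // interested in readability

    int ret = poll(fds, 1, 5000);  // timeout in milliseconds
    if (ret > 0 && (fds[0].revents & POLLIN))
        printf("stdin is readable\n");
    else if (ret == 0)
        printf("timeout\n");
    else
        perror("poll");
    return 0;
}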

3. Epoll

epoll was formally introduced in the Linux 2.6 kernel. It is an event-driven IO mechanism. Compared with select, epoll has no limit on the number of descriptors: it uses one file descriptor to manage many descriptors, and it stores the events the user cares about in an event table inside the kernel, so the copy between user space and kernel space only needs to happen once per descriptor.

The epoll related functions provided in Linux are as follows:

int epoll_create(int size);

int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

1). The epoll_create function creates an epoll handle; the parameter size indicates the number of descriptors the caller expects to monitor. On success it returns an epoll handle descriptor, and on failure it returns -1.

2). The epoll_ctl function registers the event type to be monitored. The four parameters are explained as follows:

epfd means epoll handle

op represents the type of fd operation, there are 3 types as follows

EPOLL_CTL_ADD Register new fd to epfd

EPOLL_CTL_MOD modify the listening event of the registered fd

EPOLL_CTL_DEL delete an fd from epfd

fd is the descriptor to be monitored

event indicates the event to be monitored

The epoll_event structure is defined as follows:

struct epoll_event {
    __uint32_t   events;  /* Epoll events */
    epoll_data_t data;    /* User data variable */
};

typedef union epoll_data {
    void       *ptr;
    int         fd;
    __uint32_t  u32;
    __uint64_t  u64;
} epoll_data_t;

3). The epoll_wait function waits for events to become ready. It returns the number of ready events on success, -1 on failure, and 0 if the call times out.

⑴epfd is the handle of epoll

⑵events represents the ready event collection obtained from the kernel

⑶maxevents tells the kernel the size of the events array

⑷timeout is the wait timeout, in milliseconds

Epoll is the Linux kernel's improvement of poll for handling large numbers of file descriptors; it is an enhanced version of the multiplexed IO interfaces select/poll under Linux. It can significantly improve a program's CPU utilization when only a small fraction of a large number of concurrent connections is active, because when fetching events it does not need to traverse the whole monitored descriptor set; it only traverses the descriptors that kernel IO events have asynchronously woken up and added to the ready queue.

In addition to the level-triggered (Level Triggered) mode for IO events that select/poll provide, epoll also provides an edge-triggered (Edge Triggered) mode, which makes it possible for user-space programs to cache IO state, reduce calls to epoll_wait/epoll_pwait, and improve application efficiency.

⑴Level trigger (LT): the default working mode. When epoll_wait detects that a descriptor event is ready and notifies the application, the application does not have to process the event immediately; the next time epoll_wait is called, the event will be reported again.

⑵Edge trigger (ET): when epoll_wait detects that a descriptor event is ready and notifies the application, the application must process the event immediately. If it does not, the event will not be reported again on the next epoll_wait call (until some operation makes the descriptor not ready again; in other words, edge trigger only notifies once, when the state changes from not ready to ready).

LT and ET originally describe pulse signals, and that analogy may explain them more vividly. Level and Edge refer to the trigger condition: Level means that as long as the signal stays at that level it keeps triggering, while Edge means it triggers only on a rising or falling edge. For example: 0->1 is an edge, while 1->1 is a level.

ET mode greatly reduces the number of triggers of epoll events, so the efficiency is higher than that in LT mode.
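A minimal usage sketch of the three epoll calls together (watching standard input; assumes a Linux environment, error handling abbreviated):

#include <cstdio>
#include <sys/epoll.h>
#include <unistd.h>

int main() {
    int epfd = epoll_create(1);                          // create the epoll handle

    struct epoll_event ev;
    ev.events = EPOLLIN;                                 // level-triggered read event; add EPOLLET for edge trigger
    ev.data.fd = STDIN_FILENO;
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);   // register stdin in the kernel event table

    struct epoll_event events[16];
    int n = epoll_wait(epfd, events, 16, 5000);          // wait up to 5000 ms
    for (int i = 0; i < n; ++i)
        if (events[i].data.fd == STDIN_FILENO)
            printf("stdin is readable\n");

    close(epfd);
    return 0;
}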

  1. Talk about Linux kernel scheduling.
    1.1 Scheduling policies in detail

The definition is located in linux/include/uapi/linux/sched.h

SCHED_NORMAL: ordinary time-sharing process, fair_sched_class scheduling class used

SCHED_FIFO: first-in-first-out real-time processes. When the scheduler assigns the CPU to such a process, it keeps the process descriptor at its current position in the run-queue list. Once a process with this policy gets the CPU it keeps running: if there is no runnable real-time process with a higher priority, it continues to use the CPU for as long as it wants, even if other real-time processes of the same priority are runnable. Uses the rt_sched_class scheduling class.

SCHED_RR: round-robin real-time processes with time slices. When the scheduler assigns the CPU to such a process, it puts the process descriptor at the end of the run-queue list. This policy guarantees a fair allocation of CPU time among all SCHED_RR real-time processes of the same priority. Uses the rt_sched_class scheduling class.

SCHED_BATCH: a differentiated version of SCHED_NORMAL. It uses a time-sharing strategy and allocates the CPU based on dynamic priority; when real-time processes exist, they are scheduled first. It is optimized for throughput: apart from preempting less often than regular tasks, it is treated like a regular process, letting tasks run longer and make better use of the cache. Suitable for batch processing; uses the fair_sched_class scheduling class.

SCHED_IDLE: the lowest priority, running when the system is idle, using the idle_sched_class scheduling class, used for process 0

SCHED_DEADLINE: a newly supported real-time scheduling policy, intended for bursty computation and tasks sensitive to delay and completion time. Based on EDF (earliest deadline first); uses the dl_sched_class scheduling class.
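For reference, a small sketch of how a program requests one of these policies through the standard sched_setscheduler call (a sketch only; on Linux, setting a real-time policy normally requires root or CAP_SYS_NICE):

#include <sched.h>
#include <cstdio>

int main() {
    struct sched_param param;
    param.sched_priority = 10;                        // real-time priority in the range 1..99

    // pid 0 means the calling process; ask for the SCHED_FIFO policy
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    return 0;
}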

1.2. Scheduling trigger
There are two main ways scheduling is triggered: one is the scheduler_tick function, which then calls task_tick in the scheduling class of the currently running process; the other is an explicit call to schedule. Either way, the __schedule function is eventually called, and it calls pick_next_task. It checks rq->nr_running == rq->cfs.h_nr_running: if all processes in the current run queue belong to the CFS scheduler, the CFS scheduling class is used directly (in the kernel code this check is wrapped in likely(), indicating that it holds most of the time). If not all tasks in the run queue belong to CFS, the next task to run is chosen by traversing the scheduling classes in priority order: stop_sched_class -> dl_sched_class -> rt_sched_class -> fair_sched_class -> idle_sched_class. Then the task switch is performed.

Only processes in the TASK_RUNNING state will be selected by the process scheduler, and other states will not enter the scheduler. The timing of system scheduling is as follows:

- When cond_resched() is called

- When schedule() is called explicitly

- When returning from interrupt context

When kernel preemption is enabled, there are the following additional scheduling points:

- When preempt_enable() is called in a system call or interrupt context (with nested calls, scheduling only happens at the outermost enable)

- In interrupt context, when returning from the interrupt handler to a preemptible context

2. CFS scheduling

This part of the code is located in linux/kernel/sched/fair.c

It defines const struct sched_class fair_sched_class, which is the scheduling-class object of CFS and basically contains the entire implementation of CFS scheduling.

CFS implements three scheduling strategies:

1> SCHED_NORMAL This scheduling strategy is used by regular tasks

2> SCHED_BATCH This policy does not preempt as frequently as regular tasks, at the cost of some interactivity, thus allowing tasks to run longer and make better use of the cache; it is suitable for batch processing

3> SCHED_IDLE This is even weaker than a nice value of 19, but it is not a true idle-timer scheduler, in order to avoid priority-inversion problems that could deadlock the scheduler

CFS scheduling class:

- enqueue_task(...): when a task enters the runnable state, this callback puts the task's scheduling entity into the red-black tree and increments the nr_running variable

- dequeue_task(...): when a task is no longer runnable, this callback removes the task's scheduling entity from the red-black tree and decrements the nr_running variable

- yield_task(...): unless the compat_yield sysctl is enabled, this callback is essentially a dequeue followed by an enqueue; in that case it puts the task's scheduling entity at the rightmost end of the red-black tree

- check_preempt_curr(...): this callback checks whether a task that has entered the runnable state should preempt the currently running task

- pick_next_task(...): this callback selects the next most suitable task to run

- set_curr_task(...): this callback is called when a task changes its scheduling class or its task group

- task_tick(...): this callback is mostly called from the timer tick; it may cause a process switch and drives preemption at run time

2.1, CFS scheduling

On a tick interrupt, the scheduling information is updated first, and then the current process's position in the red-black tree is adjusted. After the adjustment, if the current process is no longer the leftmost leaf, it is marked need_resched, and when the interrupt returns, schedule() is called to perform the switch; otherwise the current process keeps the CPU. From this we can see that CFS abandons the traditional time-slice concept: the tick interrupt only needs to update the red-black tree.

The red-black tree key is vruntime, which is updated by calling the update_curr function. vruntime is a 64-bit variable that increases monotonically. __enqueue_entity inserts an entity into the red-black tree with vruntime as its key, and __pick_first_entity takes the leftmost entity, the one with the smallest vruntime.

  1. How is the memory occupied by a struct calculated?
    1. Each member is aligned according to the smaller of its own type size and the specified alignment parameter n

2. The chosen alignment parameter must evenly divide the member's starting address (offset)

3. Both a member's offset address and the space it occupies need to satisfy alignment

4. The alignment parameter of the structure as a whole is the largest alignment parameter used by any of its members

5. The total length of the structure must be an integer multiple of the structure's alignment parameter

#include <stdio.h>

struct test
{
    char  a;
    int   b;
    float c;
};

int main(void)
{
    printf("char=%zu\n", sizeof(char));
    printf("int=%zu\n", sizeof(int));
    printf("float=%zu\n", sizeof(float));
    printf("struct test=%zu\n", sizeof(struct test));
    return 0;
}

The execution result is 1, 4, 4, 12

The calculation process that takes up memory space:

The alignment parameter is 4. Assuming the starting address of the structure is 0x0

a is of type char, so it occupies 1 byte, which is smaller than the alignment parameter 4; therefore 1 is chosen as its alignment number. Address 0x0 is divisible by 1, so 0x0 is the starting address of a, and it occupies 1 byte;

b is of type int and occupies 4 bytes, the same as the alignment parameter, so 4 is its alignment number. 0x1 is not divisible by 4, so it cannot be the starting address of b; 0x4 is chosen as the starting address of b, leaving 3 bytes of padding in between;

c is of type float and occupies 4 bytes, so 4 is its alignment number. 0x8 is divisible by 4, so the starting address of c is 0x8.

Therefore, the memory size occupied by the entire structure is 12 bytes.
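To check the layout directly, a small sketch using offsetof on the same struct (expected offsets on a typical platform: a=0, b=4, c=8, size 12):

#include <cstdio>
#include <cstddef>

struct test { char a; int b; float c; };

int main() {
    printf("offsetof a = %zu\n", offsetof(test, a));
    printf("offsetof b = %zu\n", offsetof(test, b));
    printf("offsetof c = %zu\n", offsetof(test, c));
    printf("sizeof     = %zu\n", sizeof(test));
    return 0;
}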

  1. Why does MySQL use a B+ tree as its index?
    Features of the B-tree:

An M-order B-tree has the following characteristics (M=3 in the example figure, omitted here; the keywords below can be understood as real data, not just indexes):

Define that any non-leaf node has at most M children, and M>2;

The number of children of the root node is in [2, M];

The number of children of non-leaf nodes other than the root is in [⌈M/2⌉, M] (for M=3: [2, 3]);

The number of keywords in a non-leaf node = the number of children - 1 (for M=3: 2 keywords);

All leaf nodes are on the same level;

The k keywords divide a node into k+1 segments, which point to its k+1 children respectively, and the ordering of a search tree is satisfied (for M=3: k=2).

Regarding some characteristics of the b-tree, pay attention to distinguish it from the following b+ tree:

The key set is distributed in the whole tree;

Any one keyword appears and only appears in one node;

The search may end at non-leaf nodes;

Its search performance is equivalent to a binary search in the complete set of keywords;

The b+ tree is a variant of the b tree with better query performance. Features of m-order b+ tree:

A non-leaf node with n subtrees has n keywords (a B-tree has n-1). These keywords do not store data and are used only for indexing; all data is stored in the leaf nodes (in a B-tree, every keyword carries data).

All leaf nodes contain information about all keywords and pointers to records containing these keywords, and the leaf nodes themselves are linked in order of the size of the keywords from small to large.

All non-leaf nodes can be regarded as index parts, and the nodes only contain the largest (or smallest) key in its subtree.

Usually there are two head pointers on the b+ tree, one to the root node and one to the leaf node with the smallest key.

The same number will appear repeatedly in different nodes, and the largest element of the root node is the largest element of the b+ tree.

The reasons for choosing B+ tree as the index structure of the database are:

The intermediate nodes of the B+ tree do not store data and are pure indexes, but the intermediate nodes of the B tree store data and indexes. Relatively speaking, the B+ tree disk page can hold more node elements and is more "short and fat";

A B+ tree query always goes down to a leaf node, whereas a B-tree stops as soon as a key matches, no matter where that node is, so B+ tree lookups have more stable (though not necessarily faster) cost;

For range queries, a B+ tree only needs to walk the linked list of leaf nodes, while a B-tree has to do repeated in-order traversal; range queries are very common in practice.

When inserting and deleting nodes, the B+ tree is more efficient, because the leaf nodes contain all the keywords and are organized as an ordered linked list.

  1. Please explain why Weibo crashed when Lu Han announced his relationship, and how to solve it.
    Lu Han is, first of all, a celebrity who draws huge traffic. He has an enormous number of fans, so when he announced his relationship the instantaneous traffic was massive. But note one thing: this surge was not just an increase in reads. If it were only reads, Weibo would simply have put Lu Han's post into the Redis cache, and with Weibo's technology the site would not have gone down.

The reason Weibo actually went down is that during this time window the volume of reposts and comments was also huge, not just the read volume.

In addition, celebrity posts trigger message pushes: the hot data is delivered immediately, and anyone with an Internet connection receives the push.

The final summary is as follows:

  1. Get Weibo through pull or push

  2. The frequency of posting Weibo is much less than reading Weibo

  3. Posts by high-traffic celebrities are treated differently from ordinary posts; for example, this factor should also be considered when sharding

9. Find the longest path length of any binary tree
Example: the figures are omitted. The longest path of the tree in Figure 1 has length 4 and passes through the root node (vertex 1); the longest path of the tree in Figure 2 has length 7 and does not pass through the root (it passes through vertex 3).

Ideas:

The longest path lies between some pair of nodes in the tree. The method: for each node, find the heights of its left subtree and right subtree; their sum is the longest path passing through that node. Compare the longest paths of all nodes; the maximum is the result.

Implementation:

Define a static variable MaxLength to record the maximum length found so far, and visit every node with a pre-order traversal. During the traversal, compare the longest path through the current node with MaxLength. To get the longest path through a node, first find the heights of its left subtree and right subtree (the number of nodes on the longest downward path), then add the two together.

static Integer MaxLength = 0; // Record the longest path

// Traverse the entire tree to get the longest path
public void getLength(TreeNode t) {
    if (t != null) {
        MaxLength = Math.max(LengthTree(t), MaxLength);
        getLength(t.lchild);
        getLength(t.rchild);
    }
}

// Get the longest path through the current node
public int LengthTree(TreeNode t) {
    if (t == null)
        return 0;
    int left = heighTree(t.lchild);
    int right = heighTree(t.rchild);
    int CurMax = left + right;
    return CurMax;
}

// Find the maximum height of the binary tree
public int heighTree(TreeNode t) {
    if (t == null)
        return 0;
    else
        return Math.max(heighTree(t.lchild), heighTree(t.rchild)) + 1;
}

  1. Have you looked at the network IO in Redis? Is it single-threaded or multi-threaded, and why use a single thread?
    Redis uses IO multiplexing technology to guarantee high system throughput when there are many connections.

Here "multiplexing" refers to multiple socket connections, and "reuse" refers to reusing a single thread. There are three main multiplexing techniques: select, poll, and epoll; epoll is the latest and the best of them.

IO multiplexing allows a single thread to handle multiple connection requests efficiently (minimizing the time spent on network IO), and Redis operates on data in memory, which is very fast (in-memory operations are not the performance bottleneck here). These two points together are what give Redis its high throughput.
Because Redis operates on data in memory, the CPU is not its bottleneck; the bottleneck is most likely the machine's memory size or network bandwidth. Since a single thread is easy to implement and the CPU will not become a bottleneck, the single-threaded design was adopted.

Origin: blog.csdn.net/qq_40989769/article/details/105435840