I was stunned... Meituan asked 44 questions in one round, and on the other side of them was a 600K+ RMB offer

A few words up front

In the reader community (50+ groups) of Nien, a 40-year-old architect, there are often members preparing to interview at big companies such as Meituan, JD.com, Alibaba, Baidu, and Toutiao.

Below are the questions from a reader who recently passed a Meituan technical interview. After several rounds of technical grilling and "soul-searching" questions, he finally landed the offer.

Judging from these questions, Meituan's interviews focus on low-level knowledge and underlying principles. Let's take a look.

We have now added the real interview questions and reference answers to our collection. Let's see what you need to master to receive an offer from Meituan.

Of course, these questions are also a useful reference for intermediate and senior developers.

The questions and reference answers here have been included in V84 of our "Nien Java Interview Collection" for future readers, to help everyone improve their "3-high" (high-concurrency, high-performance, high-availability) architecture, design, and development skills.

For PDFs of "Nien Architecture Notes", "Nien High Concurrency Trilogy" and "Nien Java Interview Collection", please go to the official account [Technical Freedom Circle].

Article directory

Meituan's 44 questions

1. Talk about the Java memory area and memory model

Java memory regions and memory model are different things.

Memory areas : the regions into which the JVM divides memory to store data at runtime; the emphasis is on how memory space is partitioned.

Memory Model (Java Memory Model, JMM for short) : defines the abstract relationship between threads and main memory, that is, JMM defines how the JVM works with computer memory (RAM).

1. Memory area

The following figure is the JVM runtime data area distribution before JDK 1.8:

The following figure is the JVM runtime data area distribution after JDK 1.8:

Comparing the JVM runtime data area layouts before and after JDK 1.8, the difference is that JDK 1.8 replaces the method area with a metaspace. The Metaspace section below explains why the method area was replaced.

Below, we introduce what each area does, following the JVM memory layout of JDK 1.8 and later.

native method stack

Native Method Stacks : serve the native methods used by the virtual machine. They can be thought of as direct calls into local C/C++ libraries through JNI (Java Native Interface), which are not controlled by the JVM.

A native method we often use is the one that gets the current time in milliseconds; native methods are marked with the native keyword.

package java.lang;

public final class System {

    // Declared with the native keyword: the implementation lives in the
    // JVM's C/C++ code and is reached through JNI.
    public static native long currentTimeMillis();
}

In essence, native methods solve problems that Java itself cannot handle but C/C++ can; through JNI, they extend Java's reach and integrate different programming languages.
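To make this concrete, here is a minimal sketch (class name is ours) showing that calling a native method looks identical to calling any other static method:

```java
public class NativeCallDemo {
    public static void main(String[] args) {
        // System.currentTimeMillis() is declared native; the JVM dispatches
        // the call into its C/C++ implementation via JNI.
        long now = System.currentTimeMillis();
        System.out.println("millis since epoch: " + now);
    }
}
```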

program counter

Program Counter Register : a small memory area whose role can be seen as the line-number indicator of the bytecode being executed by the current thread. Because the JVM executes threads concurrently, each thread is assigned its own program counter, with the same lifecycle as the thread. When threads are switched, the program counter records where the current thread's execution has reached, so that when the thread is scheduled again, it can resume from exactly that point.

If the thread is executing a Java method, this counter records the address of the virtual machine bytecode instruction being executed; if the thread is executing a native method, the counter's value is empty (undefined).

This is the only memory area for which the Java Virtual Machine Specification does not specify any OutOfMemoryError condition.

Java virtual machine stack

Java Virtual Machine Stacks : like the program counter, the Java virtual machine stack is thread-private, and its lifecycle is the same as the thread's.

The virtual machine stack describes the memory model of Java method execution: each time a method is executed, a stack frame (Stack Frame) is created to store the local variable table, operand stack, dynamic linkage, method exit, and other information. Each method's lifetime, from invocation to completion, corresponds to one stack frame being pushed onto and later popped off the virtual machine stack.

The stack is an ordered FILO (First In, Last Out) list. In an active thread, only the frame at the top of the stack is valid, called the current stack frame ; the method being executed is called the current method. The stack frame is the basic structure for method execution: when the execution engine runs, all instructions operate only on the current stack frame.
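The frame-per-call behavior can be observed directly: every recursive call pushes a new stack frame, and exhausting the thread stack raises StackOverflowError. A minimal sketch (class and counter names are ours):

```java
public class StackDepthDemo {
    static int depth = 0;

    // Each recursive call pushes a new stack frame onto the thread's
    // virtual machine stack; eventually the stack is exhausted.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("stack overflowed after " + depth + " frames");
        }
    }
}
```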

local variable table

The local variable table stores method parameters and local variables . Local variables have no preparation phase and must be explicitly initialized. Class-level (static) variables, by contrast, live on the heap and are assigned in two stages: during the preparation phase of class loading they receive system default values, and during the initialization phase they receive the initial values defined in code.
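The contrast can be seen in a small sketch (names are ours): a static field receives a system default value during class loading, while a local variable must be assigned explicitly before use, or the code will not compile:

```java
public class DefaultValueDemo {
    // A static (class) variable gets a system default value (0) during the
    // preparation phase of class loading, before any code runs.
    static int classVar;

    public static void main(String[] args) {
        System.out.println(classVar);   // prints 0: default-initialized

        int localVar = 1;               // a local variable has no preparation
                                        // phase; reading it before assignment
                                        // is a compile-time error
        System.out.println(localVar);
    }
}
```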

operand stack

The operand stack is a LIFO (last in, first out) stack whose initial state is empty. When a method starts executing, its operand stack is empty; during execution, the various bytecode instructions write content to and extract content from the operand stack through push and pop operations.
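A tiny example (names are ours) makes the push/pop traffic visible; the comments sketch roughly what `javap -c` shows for the method:

```java
public class OperandStackDemo {
    // javap -c OperandStackDemo shows roughly, for add(int, int):
    //   iload_0      // push a onto the operand stack
    //   iload_1      // push b onto the operand stack
    //   iadd         // pop both operands, push their sum
    //   ireturn      // pop the sum and return it
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // 5
    }
}
```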

dynamic link

Each stack frame contains a reference to the current method in the runtime constant pool, in order to support dynamic linking during method invocation. If the current method needs to call another method, the corresponding symbolic reference is found in the runtime constant pool, converted into a direct reference, and the target method is then invoked directly.

Not all method calls need dynamic linking. Some symbolic references are converted into direct references during the resolution phase of class loading; this is called static resolution, meaning the version of the call can be determined at compile time. It covers: calls to static methods, instance private constructors, private methods, and superclass methods.

method return address

There are two exit situations when the method is executed:

  1. Normal exit: the method executes one of the normal return bytecode instructions, such as RETURN, IRETURN, ARETURN, etc.;
  2. Abnormal exit.

Regardless of how the method exits, control returns to the point where the method was called. Exiting a method is equivalent to popping the current stack frame.

heap

99% of the GC tuning / JVM tuning we often talk about means tuning the heap! The Java virtual machine stacks, native method stacks, program counters, etc. generally do not generate garbage.

Heap : The largest piece of memory managed by the Java virtual machine. The Java heap is a memory area shared by all threads and created when the virtual machine starts. The sole purpose of this memory area is to store object instances, and almost all object instances allocate memory here.

The heap is the main area managed by the garbage collector, so it is often called "GC heap" (Garbage Collected Heap).
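Since GC tuning means tuning the heap, it helps to know how to read the heap's current size from within a program. A minimal sketch using the standard Runtime API (class name is ours):

```java
public class HeapInfoDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // All three figures describe the heap that the garbage collector
        // manages; max is the -Xmx ceiling, total is what is currently
        // committed, free is the unused portion of total.
        System.out.println("max heap:   " + rt.maxMemory());
        System.out.println("total heap: " + rt.totalMemory());
        System.out.println("free heap:  " + rt.freeMemory());
    }
}
```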

Metaspace

JDK 1.8 replaced the method area with the metaspace. Class metadata is stored in the metaspace, which uses not heap memory but a native memory region separate from the heap . In theory, therefore, the metaspace can grow as large as the memory the system can provide.
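That the metaspace is native (non-heap) memory can be checked with the standard management API; note that the pool name "Metaspace" is a HotSpot detail, not something guaranteed by the specification:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceDemo {
    public static void main(String[] args) {
        // On a HotSpot JVM (8+) one of the memory pools is named "Metaspace"
        // and reports type NON_HEAP, because it lives in native memory.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " -> " + pool.getType());
        }
    }
}
```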

method area

Method Area : like the Java heap, a memory area shared by all threads . It stores class information loaded by the virtual machine, constants, static variables, code compiled by the just-in-time compiler, and other data.

Although the Java virtual machine specification describes the method area as a logical part of the heap, it has an alias called Non-Heap (non-heap), which should be distinguished from the Java heap.

Changes in Method Area and Metaspace

The figure below shows the general transition process of the method area from JDK 1.6, JDK 1.7 to JDK 1.8:

In JDK 1.8, the HotSpot JVM removed the permanent generation (PermGen) and began using the metaspace (Metaspace). The main reasons for replacing the permanent-generation implementation with the metaspace are as follows:

  1. It avoids OOM exceptions: strings stored in the permanent generation were prone to performance problems and memory overflow;
  2. It is difficult to choose a size for the permanent generation: too small and it easily overflows; too large and the old generation easily overflows;
  3. Tuning the permanent generation is very difficult;
  4. It helps merge HotSpot and JRockit into one.

2. Memory model

The memory model exists to guarantee the correctness of shared memory (visibility, ordering, and atomicity). It defines a specification for the read and write behavior of multi-threaded programs in a shared-memory system.

The memory model solves concurrency problems in two main ways: limiting processor optimization and using memory barriers .

The Java Memory Model (JMM) controls communication between Java threads, determining when one thread's writes to shared variables are visible to another thread.

Computer cache and cache coherency

Computers insert cache memory between the fast CPU and relatively slow storage devices as a buffer between the processor and main memory. While a program runs, the data it needs is copied from main memory (the machine's physical memory) into the CPU's cache; the CPU then reads from and writes to its cache directly during computation, and when the operation completes, the cached data is flushed back to main memory.

In a multi-processor system (or a single-processor multi-core system), each processor core has its own cache, and they share the same main memory (Main Memory). When the CPU wants to read a piece of data, it first looks in the first-level cache; if not found, it looks in the second-level cache; if still not found, it looks in the third-level cache (not all CPUs have an L3 cache) or in memory.

In a multi-core CPU, each core has its own cache, and the cached contents for the same piece of data may become inconsistent. Therefore, each processor must follow certain protocols when accessing the cache, reading and writing according to the protocol to maintain cache coherency.

JVM main memory and working memory

The Java memory model stipulates that all variables are stored in main memory, and that each thread has its own working memory. All of a thread's operations on variables must be performed in its working memory; it cannot read or write variables in main memory directly.

The working memory here is an abstract JMM concept; it holds the thread's copies of the shared variables it reads and writes.
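A minimal sketch (names are ours) of the working-memory copy in action: the reader thread spins on its copy of `stop`, and declaring the field volatile forces the write to propagate through main memory so the loop terminates:

```java
public class VisibilityDemo {
    // Without volatile, the reader thread could keep using the stale copy
    // of `stop` in its working memory and spin forever.
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) { /* spin until the write becomes visible */ }
            System.out.println("stop observed");
        });
        reader.start();
        Thread.sleep(100);
        stop = true;            // synchronized back to main memory immediately
        reader.join();
    }
}
```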

reordering

In order to improve performance when executing a program, compilers and processors often reorder instructions. There are three types of reordering:

  1. Compiler optimized reordering : The compiler can rearrange the execution order of statements without changing the semantics of single-threaded programs.
  2. Reordering of instruction-level parallelism : Modern processors use instruction-level parallelism (Instruction-Level Parallelism, ILP) to overlap and execute multiple instructions. If there is no data dependency, the processor can change the execution order of the corresponding machine instructions of the statement.
  3. Reordering of the memory system : Due to the processor's use of caches and read/write buffers, it is possible for load and store operations to appear to be performed out of order.
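The classic demonstration of reordering is the following sketch (names are ours). Under sequential consistency the outcome a=0, b=0 is impossible, but because each thread's two statements are independent of each other, the compiler or processor may reorder them, and a real run can occasionally produce it:

```java
public class ReorderDemo {
    static int x, y, a, b;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; a = y; });
        Thread t2 = new Thread(() -> { y = 1; b = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Under sequential consistency (a=0, b=0) cannot happen, but with
        // compiler/processor reordering of the independent statements in
        // each thread, it occasionally can.
        System.out.println("a=" + a + ", b=" + b);
    }
}
```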

From Java source code to the instruction sequence that is finally executed, the following three kinds of reordering are applied in turn:

JMM is a language-level memory model that ensures consistent memory visibility guarantees for programmers by prohibiting specific types of compiler reordering and processor reordering on different compilers and processor platforms.

The Java compiler prevents processor reordering by inserting memory barrier instructions at appropriate positions in the generated instruction sequence (instructions after a memory barrier cannot be reordered to a position before it).

happens-before

The Java memory model introduces the concept of happens-before to explain memory visibility between operations. "Visibility" here means that when a thread modifies a variable's value, the new value is immediately known to other threads.

If the result of an operation execution needs to be visible to another operation, then there must be a happens-before relationship between the two operations. The two operations mentioned here can be within one thread or between different threads.
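Two concrete happens-before rules are Thread.start() and Thread.join(): everything done before start() is visible inside the new thread, and everything the thread did is visible after join() returns. A minimal sketch (names are ours):

```java
public class HappensBeforeDemo {
    static int data = 0;   // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        data = 42;                        // (1) happens-before start() below
        Thread t = new Thread(() ->
            System.out.println(data));    // guaranteed to observe 42
        t.start();                        // Thread.start() rule
        t.join();                         // thread termination happens-before
                                          // join() returning
    }
}
```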

Implementation of the Java Memory Model

Java provides a series of concurrency-related keywords and constructs, such as volatile, synchronized, final, and the JUC (java.util.concurrent) package. These are what the Java memory model exposes to programmers after encapsulating the underlying implementation.

atomicity

To ensure atomicity, the JVM provides two bytecode instructions, monitorenter and monitorexit; these correspond to Java's synchronized keyword.

We are all familiar with the synchronized keyword. Compile the following code into a class file, then use javap -v SyncViewByteCode.class to view the bytecode; you will find the monitorenter and monitorexit instructions.

public class SyncViewByteCode {

  // A synchronized method is compiled with the ACC_SYNCHRONIZED flag.
  public synchronized void buy() {
    System.out.println("buy porsche");
  }

  // A synchronized block compiles to explicit monitorenter/monitorexit
  // instructions, which is what the bytecode of test2() shows.
  public void test2() {
    synchronized (this) {
    }
  }
}

Part of the resulting bytecode is shown below:

 public com.dolphin.thread.locks.SyncViewByteCode();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 3: 0
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0       5     0  this   Lcom/dolphin/thread/locks/SyncViewByteCode;

  public void test2();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=2, locals=3, args_size=1
         0: aload_0
         1: dup
         2: astore_1
         3: monitorenter
         4: aload_1
         5: monitorexit
         6: goto          14
         9: astore_2
        10: aload_1
        11: monitorexit
        12: aload_2
        13: athrow
        14: return
      Exception table:

visibility

The Java memory model implements visibility by relying on main memory as the transmission medium: after a variable is modified, the new value is synchronized back to main memory, and before a variable is read, its value is refreshed from main memory.

Java's volatile keyword provides exactly this function: a variable it modifies is synchronized to main memory immediately after being modified, and is refreshed from main memory before each use. Therefore, volatile can be used to ensure the visibility of variables in multi-threaded operations.

Besides volatile, the synchronized and final keywords in Java can also achieve visibility, just through different mechanisms.

ordering

In Java, synchronized and volatile can both be used to guarantee the ordering of operations between threads, but their implementations differ:

  • volatile: prohibits instruction reordering around the variable.
  • synchronized: guarantees that only one thread at a time may execute the guarded critical section.
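The two keywords are often combined. A classic sketch is the double-checked-locking singleton, where synchronized provides mutual exclusion and volatile forbids the reordering inside `new Singleton()` that could otherwise let another thread see a half-constructed object:

```java
public class Singleton {
    // volatile forbids reordering of the construction steps: without it,
    // another thread could observe a non-null reference to an object whose
    // constructor has not yet finished running.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without lock
            synchronized (Singleton.class) {     // mutual exclusion
                if (instance == null) {          // second check, under lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```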

2. What is a distributed system?

A distributed system is a network system composed of multiple computers that work together through message passing or shared storage to achieve a common goal. The design goal of a distributed system is to distribute computation and data across multiple nodes to provide higher performance, scalability, and reliability.

In a distributed system, each node can perform tasks independently, and communicate and coordinate with each other through communication protocols. These nodes can be computers that are physically distributed in different geographical locations, or virtual machines, containers, or cloud service instances.

Characteristics of a distributed system include:

  1. Parallel processing : Distributed systems can process multiple tasks at the same time, and improve the processing capacity of the system by assigning work to different nodes for parallel execution.
  2. Scalability : The distributed system can increase or decrease nodes according to demand to adapt to changes of different scales and loads. By adding more nodes, the system can handle more requests and provide higher performance.
  3. Fault tolerance : Distributed systems can improve system reliability and fault tolerance through redundancy and backup mechanisms. When a node fails, the system can continue to run, and other nodes can take over the work of the failed node.
  4. Data sharing and coordination : Distributed systems realize data sharing and coordination between nodes through shared storage or message passing. This allows different nodes to share data, state, and resources, and collaborate to complete complex tasks.

The design and management of distributed systems need to take into account the challenges of network communication, consistency, concurrency control, fault handling, etc. At the same time, distributed systems also provide higher flexibility and reliability, and are widely used in various fields, such as cloud computing, big data processing, and the Internet of Things.

3. What aspects would you consider in a distributed system?

When designing and managing a distributed system, the following aspects need to be considered:

  1. Reliability and Fault Tolerance : Distributed systems should be fault tolerant and able to continue normal operation in the event of node failure or network outage, etc. This can be achieved through redundant backup, failure detection and automatic recovery mechanisms.
  2. Scalability : Distributed systems should have good scalability, and can increase or decrease nodes according to demand to adapt to changes in different scales and loads. This can be achieved through horizontal scaling and vertical scaling.
  3. Data consistency : In a distributed system, since data is distributed on multiple nodes, it is necessary to ensure data consistency. This can be achieved through consensus protocols (such as Paxos, Raft) and replica synchronization mechanisms.
  4. Communication and protocol : Nodes in a distributed system need to communicate and coordinate work, so it is necessary to choose a suitable communication protocol and message delivery mechanism. Common communication protocols include TCP/IP, HTTP, RPC, etc.
  5. Load balancing : In order to improve the performance and availability of the system, it is necessary to distribute the load evenly among the nodes. Load balancing can be achieved through request scheduling algorithms and dynamic load balancing strategies.
  6. Security : Distributed systems need to protect the confidentiality, integrity, and availability of data, so security measures such as authentication, data encryption, access control, etc. need to be considered.
  7. Monitoring and diagnosis : The distributed system should have monitoring and diagnosis functions, which can monitor the operating status and performance indicators of the system in real time, and find and solve problems in time.
  8. Deployment and management : The deployment and management of distributed systems need to consider issues such as node configuration, software installation and update, and version control. Automated deployment and management tools can improve efficiency and reliability.
  9. Data backup and recovery : In order to cope with node failure or data loss, data backup and recovery are required. Common backup strategies include cold backup, hot backup, and incremental backup.
  10. Performance optimization : The performance optimization of distributed systems involves various aspects, including algorithm optimization, data structure design, concurrency control, caching strategy, etc.

In summary, designing and managing distributed systems requires a comprehensive consideration of reliability, scalability, data consistency, communication and protocols, load balancing, security, monitoring and diagnostics, deployment and management, data backup and recovery, and performance optimization and other issues.

4. Why is it said that the TCP/IP protocol is unreliable?

The TCP/IP protocol is considered a reliable protocol because it provides many mechanisms to ensure reliable transmission of data.

However, people sometimes say the TCP/IP protocol is unreliable because, under certain circumstances, it may not meet users' requirements or may exhibit problems.

Here are some situations that can cause the TCP/IP protocol to be called unreliable:

  1. Packet loss : During network transmission, data packets may be lost due to network congestion, equipment failure, or other reasons. Although the TCP protocol has a retransmission mechanism, in some cases, lost data packets may not be retransmitted in time, resulting in unreliability of data transmission.
  2. Latency : The TCP/IP protocol uses congestion control mechanisms to ensure network stability and fairness. This means that data transfers may be delayed during times of network congestion. This delay may be considered unreliable for certain real-time applications, such as online gaming or video calls.
  3. Ordering problems : TCP delivers data to the application in order, but the underlying packets may arrive out of order — for example, when packets travel different paths through the network. TCP must then buffer, reassemble, and re-sequence them, which adds delay and can affect the perceived reliability of the data.

It should be noted that although the TCP/IP protocol may be problematic in some cases, it is still one of the most commonly used protocols on the Internet and is widely used in various applications and services. In addition, the TCP/IP protocol can also be configured and optimized to improve its reliability and performance.

5. What are the seven layers of the OSI model? Which four-layer model is TCP/IP?

1. What is OSI?

The OSI model (Open System Interconnection model) is a conceptual model proposed by the International Organization for Standardization, which attempts to provide a standard framework for interconnecting various computers and networks worldwide.
It divides the computer network architecture into seven layers, and each layer can provide a well-abstracted interface.

It can be divided into seven layers from top to bottom: each layer completes specific functions, provides services for the upper layer, and uses the services provided by the lower layer.

  • Physical layer :
    Responsible for encoding information into electrical pulses or other signals for transmission on the wire; e.g., RJ45 interfaces; converts data into 0s and 1s.
  • Data Link Layer :
    Provides data transmission over physical network links. Different data-link protocols define different network and protocol characteristics, including physical addressing, network topology, error checking, data-frame sequencing, and flow control. It can be simply understood as deciding how the 0s and 1s are grouped into frames, i.e., the form of the network packet.
  • Network layer :
    Responsible for establishing the path between source and destination; this is where a computer's location on the network is determined. How? IPv4, IPv6!
  • Transport layer :
    Provides reliable end-to-end network data-stream services to the upper layers. It can be understood as: each application registers a port number, and this layer handles communication between ports; TCP and UDP are the commonly used protocols here.
  • Session layer :
    Establishes, manages, and terminates communication sessions between presentation-layer entities.
  • Presentation layer :
    Encodes and converts application-layer data so that information sent by one system's application layer can be recognized by another system's application layer. It can be understood as solving communication between different systems; e.g., QQ under Linux and QQ under Windows can communicate.
  • Application layer :
    OSI application-layer protocols include the file transfer, access, and management protocol (FTAM), the virtual terminal protocol (VTP), and the common management information protocol (CMIP); this layer specifies the data-transfer protocols.

2. What is the TCP/IP four-layer model?

Application layer (Application): provides users with the various services they need.

Transport layer (Transport): provides end-to-end communication for application-layer entities, ensuring in-order delivery of data packets and data integrity.

Internet layer (Internet): mainly solves host-to-host communication.

Network interface layer (Network Access): responsible for the exchange of data between the host and the network.

3. The relationship between the OSI seven-layer network model and the TCP/IP four-layer network model:

OSI introduced the concepts of services, interfaces, protocols, and layering, and TCP/IP borrowed these concepts from OSI when the TCP/IP model was established.

OSI came model-first: the model and standards were defined before practice. TCP/IP is the opposite: the protocols and applications came first, and the model was proposed afterwards, with the OSI model as a reference.

OSI is a theoretical model, while TCP/IP has been widely used and has become the de facto standard for network interconnection.

OSI seven-layer model             | TCP/IP four-layer model   | Corresponding network protocols
Application layer (Application)   | Application layer         | HTTP, TFTP, FTP, NFS, WAIS, SMTP
Presentation layer (Presentation) | Application layer         | Telnet, Rlogin, SNMP, Gopher
Session layer (Session)           | Application layer         | SMTP, DNS
Transport layer (Transport)       | Transport layer           | TCP, UDP
Network layer (Network)           | Internet layer            | IP, ICMP, ARP, RARP, AKP, UUCP
Data Link Layer (Data Link)       | Network interface layer   | FDDI, Ethernet, Arpanet, PDN, SLIP, PPP
Physical layer (Physical)         | Network interface layer   | IEEE 802.1A, IEEE 802.2 to IEEE 802.11

4. The difference between OSI seven layers and TCP/IP

  • TCP/IP is a protocol suite; OSI (Open Systems Interconnection) is a model, and TCP/IP was developed before OSI.
  • TCP/IP is built from interacting layered modules, each providing a specific function; OSI specifies which functions belong to which layer.
  • TCP/IP has a four-layer structure, while OSI has seven layers; the top three layers of OSI are covered by the application layer in TCP/IP.

The 40-year-old architect Nien reminded :

TCP/IP is not only an absolute focus of interviews, but also one of their absolute difficulties. It is recommended that you master it in depth and in detail, with a systematic and comprehensive understanding of TCP/IP.

6. Talk about the three-way handshake and four-way wave process of the TCP protocol

The essence of TCP's three-way handshake and four-way wave is the connection and disconnection of TCP communication.

Three-way handshake: establishes a virtual connection so that the two ends can track and negotiate the amount of data sent each time, keep the sending and receiving of data segments synchronized, and acknowledge data according to the amount received.

Four-way wave: terminates the TCP connection. When a TCP connection is torn down, the client and server exchange a total of 4 packets to confirm the disconnection.

Sequence diagram of TCP three-way handshake and four-way wave

1. Three-way handshake

The TCP protocol is located at the transport layer, and its role is to provide reliable byte stream services. In order to accurately deliver data to the destination, the TCP protocol adopts a three-way handshake strategy.

Three-way handshake principle:

The first handshake : the client sends a packet with the SYN (synchronize) flag to the server;

The second handshake : After the server receives it successfully, it returns a packet with the SYN/ACK flag to deliver confirmation information, indicating that I have received it;

The third handshake : the client sends back a data packet with the ACK flag, indicating that I know, and the handshake ends.

Among them: a SYN flag set to 1 indicates a request to establish the TCP connection, and the ACK flag carries the acknowledgment field.

The three-way handshake can be understood through the following interesting diagram:

Detailed description of the three-way handshake process:

  1. The client sends a connection-request segment containing a randomly generated sequence number seq, with the SYN field set to 1 to indicate that it wants to establish a TCP connection. (SYN=1, seq=x, where x is randomly generated);
  2. The server replies to the client's request with its own randomly generated sequence number, sets SYN to 1, and fills the ACK field with the client's seq plus 1, so that when the client receives the reply it knows its connection request has been acknowledged. (SYN=1, ACK=x+1, seq=y, where y is randomly generated.) Adding 1 to produce the ack can be understood as confirming exactly whom the connection is being established with;
  3. After receiving the server's SYN/ACK, the client increments its sequence number and replies with a final acknowledgment, setting ACK to the server's seq plus 1. (ACK=y+1, seq=x+1; the SYN flag is no longer set in this segment.)
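In application code the handshake is invisible: by the time `new Socket(...)` returns (and `accept()` hands back a socket), the kernel has already completed SYN → SYN/ACK → ACK on our behalf. A minimal loopback sketch (names are ours):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static void main(String[] args) throws IOException {
        // Port 0 asks the OS for any free port; the kernel performs the
        // three-way handshake during connect()/accept().
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();
            System.out.println("connected: " + client.isConnected());
            client.close();
            accepted.close();
        }
    }
}
```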

2. Four-way wave

Since TCP connections are full-duplex, each direction must be closed separately. The principle is that when one side has finished sending data, it sends a FIN to terminate the connection in that direction. Receiving a FIN only means there will be no more data flowing in that direction; a TCP connection can still send data after receiving a FIN. The side that closes first performs an active close, while the other side performs a passive close.

The principle of four waves:

The first wave : the client sends a FIN to close the data transmission from the client to the server, and the client enters the FIN_WAIT_1 state;

The second wave : After receiving the FIN, the server sends an ACK to the client, confirming that the sequence number is the received sequence number + 1 (same as SYN, one FIN occupies one sequence number), and the server enters the CLOSE_WAIT state;

The third wave : the server sends a FIN to close the data transmission from the server to the client, and the server enters the LAST_ACK state;

The fourth wave : after receiving the FIN, the client enters the TIME_WAIT state and then sends an ACK to the server, with the acknowledgment number set to the received sequence number + 1; the server then enters the CLOSED state, completing the four-way wave.

Among them: the FIN flag is set to 1, which means disconnecting the TCP connection.

The Four Waves can be understood with the following fun illustration:

The four-way wave process in detail:

  1. The client sends a packet requesting to close the TCP connection. It contains a randomly generated seq sequence number, and the FIN flag is set to 1 to indicate that the connection should be closed. (FIN=1, seq=x, where x is randomly generated by the client);
  2. The server acknowledges the client's close request with an ACK whose value is the client's sequence number plus 1, so that when the client receives it, it knows its close request has been received. (ACK=x+1, seq=y, where y is generated by the server);
  3. The server does not close the connection immediately after this acknowledgment. It first makes sure that all remaining data it owes the client has been transmitted. Only once that is confirmed does it send its own segment with the FIN flag set to 1 and a new sequence number. (FIN=1, ACK=x+1, seq=z, where z is generated by the server);
  4. After receiving the server's close request, the client replies with a final ACK whose value is the server's sequence number plus 1, completing the verification of the server's close request. (ACK=z+1, seq=h, where h is generated by the client)
    At this point, the four-wave process of closing a TCP connection is complete.
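The half-close semantics behind the four waves can be observed directly with sockets. Below is a minimal Java sketch (the class and method names are my own, not from the source): shutdownOutput() sends a FIN for the client-to-server direction, yet the client can still receive the server's reply on the other direction.

```java
import java.io.*;
import java.net.*;

// Demonstrates TCP half-close: after our FIN, the reverse direction still works.
public class HalfCloseDemo {
    public static String run() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    InputStream in = s.getInputStream();
                    while (in.read() != -1) { }                     // drain until the client's FIN
                    s.getOutputStream().write("bye\n".getBytes());  // our direction is still open
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });
            peer.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.getOutputStream().write("hi".getBytes());
                client.shutdownOutput();                            // sends FIN: client->server closed
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                String reply = r.readLine();                        // still readable after our FIN
                peer.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("received after sending FIN: " + run());
    }
}
```

Running it shows the server's reply arriving after the client's FIN, illustrating that receiving a FIN only ends one direction of the stream.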

3. TCP state glossary

LISTEN: waiting for a connection request from any remote TCP and port.
 
SYN_SENT: waiting for a matching connection request after having sent a connection request.
 
SYN_RECEIVED: waiting for a confirming acknowledgment after having both received and sent a connection request.
 
ESTABLISHED: an open connection; received data can be delivered to the user. This is the normal state for the data-transfer phase of a connection.
 
FIN_WAIT_1: waiting for a connection termination request from the remote TCP, or for an acknowledgment of the termination request previously sent.
 
FIN_WAIT_2: waiting for a connection termination request from the remote TCP.
 
CLOSE_WAIT: waiting for a connection termination request from the local user.
 
CLOSING: waiting for an acknowledgment of the connection termination request from the remote TCP.
 
LAST_ACK: waiting for an acknowledgment of the connection termination request previously sent to the remote TCP (which includes an acknowledgment of its own termination request).
 
TIME_WAIT: waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request.
TIME_WAIT exists for two reasons:
          1. to terminate the full-duplex TCP connection reliably;
          2. to let old duplicate segments expire in the network.
 
CLOSED: no connection at all (a fictional state introduced for convenience of description; it does not actually exist).

7. Why does TCP establish a connection protocol with a three-way handshake, but close the connection with a four-way handshake? Why can't I connect with two handshakes?

TCP uses a three-way handshake to establish a connection, and uses a four-way handshake to close the connection, mainly to ensure the state synchronization and reliability of the communicating parties. Here is a detailed explanation:

1. Why is a three-way handshake required to establish a connection?

  • The first handshake : the client sends a packet with the SYN (synchronous) flag to the server, requests to establish a connection, and enters the SYN_SENT state.
  • The second handshake : After receiving the client's request, the server replies with a data packet with a SYN/ACK (synchronization/confirmation) flag, indicating that it agrees to establish a connection, and enters the SYN_RCVD state.
  • The third handshake : After receiving the reply from the server, the client sends a data packet with an ACK (confirmation) flag, indicating that the connection is successfully established, and the two parties can start communication, and both the client and the server enter the ESTABLISHED state.

The purpose of the three-way handshake is to ensure that both parties have confirmed each other's ability to send and receive before data flows. With only two handshakes, the following scenarios become possible:

  • A stale connection request: a SYN from the client is delayed in the network, the client times out and retries successfully, and long after that connection has finished, the old SYN finally reaches the server. With two handshakes, the server would treat this as a new connection and allocate resources for it, while the client ignores it and never sends data.
  • A lost confirmation: the server sends its SYN/ACK, but it is lost in transit. Without the third ACK, the server has no way to know whether the client can receive anything it sends.

In either case, the two sides disagree about whether a connection exists, so a reliable connection cannot be established. The third handshake ensures both parties have confirmed the connection.

2. Why does closing a connection require four waves?

  • The first wave : when one party decides to close the connection, it sends a packet with the FIN (finish) flag, indicating that it will send no more data (though it can still receive), and enters the FIN_WAIT_1 state.
  • The second wave : after the other party receives the FIN, it sends a packet with the ACK flag as confirmation, indicating that it has received the close request, and enters the CLOSE_WAIT state.
  • The third wave : once it has finished sending its own remaining data, the other party sends its own packet with the FIN flag, agreeing to close the connection, and enters the LAST_ACK state.
  • The fourth wave : after receiving this FIN, the party that initiated the close sends a final packet with the ACK flag, indicating that the connection is closed, and enters the TIME_WAIT state.

The purpose of the four waves is to ensure that both directions of data transfer are completed and acknowledged, avoiding data loss or confusion. When closing a connection, both parties must exchange confirmations so that each knows the other has finished sending and the connection can be closed.

If the close were compressed into fewer exchanges, the following could happen:

  • One party sends a close request that the other never receives, leaving the connection in a half-closed state indefinitely.
  • One party closes the connection immediately upon the other's close request, without waiting for its own remaining data to finish sending, causing data loss.

The four waves ensure that both parties complete their transmission and acknowledgment, so the connection can be closed safely.

Note that after the last (fourth) wave, the closing party waits in TIME_WAIT for a while to guarantee a reliable close and to let delayed packets from the old connection expire, preventing problems when the same connection (port pair) is reused.

Nien, a 40-year-old architect, reminded : TCP/IP is not only the absolute focus of the interview, but also the absolute difficulty of the interview.

It is recommended that you have an in-depth and detailed grasp. For specific content, please refer to the PDF of "Nin Java Interview Collection - Topic 10: TCP/IP Protocol". This topic has a systematic, systematic and comprehensive introduction to TCP/IP .

8. In HTTP headers, what do the Expires and Cache-Control fields mean? Talk about HTTP status codes

In HTTP headers, the Expires and Cache-Control fields are used to control caching behavior.

The Expires field specifies an absolute expiration time, after which the cached copy is considered stale. When the server returns a response, it includes an Expires field in the response header to tell the client how long the cache is valid. The client caches the response and uses the cached copy until that time. However, Expires has drawbacks: for example, if the server's and client's clocks are out of sync, the cache may expire at the wrong time.

To solve the problems with Expires, the Cache-Control field was introduced. Cache-Control provides a more flexible and reliable cache-control mechanism. It can contain multiple directives separated by commas, each with a specific meaning and parameters. Common directives include:

  • max-age=<seconds>: Specifies the maximum validity period of the cache, in seconds.
  • no-cache: Indicates that the cached copy needs to be revalidated and cannot be used directly.
  • no-store: Indicates that no copy is cached, and resources need to be reacquired for each request.
  • public: Indicates that the response can be stored by any cache.
  • private: Indicates that the response can only be cached by a single user, usually for private data.
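As a sketch of how a client might act on these directives, the hypothetical helper below extracts max-age from a Cache-Control header value (the class and method names are illustrative, not a standard API):

```java
// Minimal sketch: parse the max-age directive from a Cache-Control header
// value to decide how long a cached copy stays fresh.
public class CacheControlDemo {
    /** Returns the max-age value in seconds, or -1 if the directive is absent. */
    public static long maxAgeSeconds(String cacheControl) {
        for (String directive : cacheControl.split(",")) {
            String d = directive.trim();
            if (d.startsWith("max-age=")) {
                return Long.parseLong(d.substring("max-age=".length()));
            }
        }
        return -1; // no max-age directive present
    }

    public static void main(String[] args) {
        System.out.println("fresh for " + maxAgeSeconds("public, max-age=3600") + "s"); // 3600
    }
}
```

Unlike Expires, this freshness window is relative to when the response was received, so it is immune to clock skew between client and server.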

The HTTP status code is used to indicate the response status of the server to the request. Common HTTP status codes include:

  • 1xx: Informational status code, indicating that the request has been received and processing continues.
  • 2xx: Success status code, indicating that the request has been successfully received, understood, and processed.
  • 3xx: Redirection status code, indicating that further action is required to complete the request.
  • 4xx: Client error status code, indicating that the request contains errors or cannot be completed.
  • 5xx: Server error status code, indicating that an error occurred when the server processed the request.

Some common status codes include:

  • 200 OK: The request was successful, and the server successfully processed the request.
  • 301 Moved Permanently: Permanently redirected, the requested resource has been permanently moved to a new location.
  • 400 Bad Request: The client request has a syntax error that the server cannot understand.
  • 404 Not Found: The requested resource does not exist.
  • 500 Internal Server Error: The server has an internal error and cannot complete the request.

Status codes provide a standardized way to enable clients and servers to accurately understand the processing results of requests and take appropriate actions.

9. Tell me why Redis is fast

The reason why Redis is fast is mainly due to the following reasons:

  1. In-memory storage : Redis stores data in memory, not on disk, which makes it fast to read and write data. Compared with traditional disk storage databases, such as MySQL, Redis can provide lower access latency.
  2. Single-threaded model : Redis uses a single-threaded model to process client requests. Although this may sound like a performance bottleneck, in fact, this design enables Redis to avoid the overhead of lock competition and context switching between multiple threads. In addition, the single-threaded model also simplifies the implementation and maintenance of Redis.
  3. Efficient data structure : Redis supports a variety of data structures, such as strings, hash tables, lists, sets, and ordered sets. These data structures are highly optimized when storing and manipulating data, enabling Redis to efficiently perform various operations, such as reading, writing, updating, and deleting.
  4. Asynchronous operation : Redis supports asynchronous operation, that is, the client can hand over some time-consuming operations to the Redis background thread for processing without waiting for the operation to complete. This allows Redis to better handle concurrent requests and high load situations.
  5. Network model : Redis uses an event-driven network model to process network requests through non-blocking I/O and event notification mechanisms. This model enables Redis to efficiently handle a large number of concurrent connections and requests.

In general, Redis achieves high performance and low latency through various technical means such as memory storage, single-threaded model, efficient data structure, asynchronous operation and optimized network model. This makes Redis a fast and reliable key-value store and cache database.

Nien, a 40-year-old architect, reminded : Redis is not only the absolute focus of the interview, but also the absolute difficulty of the interview.

It is recommended that you have an in-depth and detailed grasp. For specific content, please refer to the "Nin Java Interview Collection - Topic 14: Redis Interview Questions" PDF. This topic has a systematic, systematic, and comprehensive introduction to Redis.

If you want to write Redis high concurrency practice into your resume, you can ask Nien for guidance.

10. What persistence methods does Redis have?

Redis provides two main persistence methods: RDB (Redis Database) and AOF (Append-Only File).

1. RDB persistence:

RDB persistence is to save Redis data to disk in the form of binary files. It does so by periodically performing snapshot operations, which can be triggered manually or automatically based on configured rules. In the process of RDB persistence, Redis will save the data snapshot in the current memory to an RDB file, and then write the file to disk. When Redis restarts, the data can be recovered by loading the RDB file.

The advantage of RDB persistence is that it is fast and compact: the snapshot is a single compact binary file (written by a forked child process, so the main process keeps serving requests), and recovery is a single bulk load. It is suitable for data backup and disaster recovery.

2. AOF persistence:

AOF persistence is achieved by appending Redis write commands to a log file. Redis will append each write command to the end of the AOF file to record data changes. When Redis restarts, the commands in the AOF file will be re-executed to restore the data.

The advantage of AOF persistence is better data persistence, because it records every write operation, which can ensure data integrity. Also, AOF files are saved in text format, which is easy to read and understand.

AOF persistence has two complementary aspects: normal appending and rewriting. Normally, Redis appends each write command to the end of the AOF file; during a rewrite (BGREWRITEAOF), Redis generates a new, compact AOF file from the data currently in memory to replace the old one. Rewriting shrinks the AOF file and speeds up recovery.

In general, RDB persistence is suitable for fast backup and recovery of data, while AOF persistence is suitable for scenarios with high data persistence and integrity requirements. You can choose the appropriate persistence method according to actual needs, or use both methods at the same time to provide better data protection and recovery capabilities.
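For reference, both mechanisms are enabled in redis.conf. The directives below are standard Redis configuration options; the values are only illustrative:

```
# RDB: snapshot if >=1 key changed in 900s, >=10 in 300s, >=10000 in 60s
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb

# AOF: log every write command, fsync the log once per second
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
```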

11. What should I do if Redis hangs up?

When Redis hangs, you can take the following measures to solve the problem:

  1. Check the Redis process : First, make sure that the Redis process is indeed down. You can check the status of the Redis process using the command line or an administrative tool.
  2. Check the log file : If Redis hangs, you can check the Redis log file, usually the redis-server.log file, to get more error messages and exceptions. Log files can help you understand why Redis hangs.
  3. Restart Redis : If Redis hangs, you can try to restart Redis. Use the command line or management tools to start the Redis server process.
  4. Restoring data : If Redis hangs and causes data loss, you can try to restore the data from backup. If you use Redis's persistence mechanism (such as RDB or AOF), you can restore the backup file to the Redis server.
  5. Check server resources : If Redis hangs frequently, you need to check the resource usage of the server, including memory, CPU, disk, etc. Make sure the server has enough resources to support the normal operation of Redis.
  6. Optimize configuration : According to your needs and server resource conditions, you can consider optimizing the Redis configuration file. For example, increase the maximum memory limit, adjust the persistence mechanism, adjust network parameters, etc.
  7. Monitoring and early warning : In order to prevent Redis from hanging, you can use monitoring tools to monitor the running status of Redis in real time, and set up an early warning mechanism to detect and solve potential problems in time.
  8. Seek professional support : If you cannot solve the problem that Redis hangs, you can seek professional technical support or consultation, and they can help you analyze and solve the problem.

Please note that the above steps only provide a general solution, and the specific operation steps may be different according to your environment and situation. When dealing with the problem of Redis hanging, it is recommended to refer to the official Redis documentation and related technical resources for more accurate and detailed guidance.

12. In the case of multi-threading, how to ensure thread safety?

1. Thread safety level

The issue of "thread safety" has come up in earlier posts. We often say that a certain class is thread-safe or not thread-safe, but thread safety is not a black-and-white, single-choice question. Ordered from strongest to weakest guarantees, the data shared by operations in the Java language can be divided into five categories: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-opposed.

1. Immutable

In the Java language, immutable objects are always thread-safe; neither the object's methods nor their callers need to take any thread-safety measures. For example, a field declared final cannot be modified after initialization, which gives the highest reliability.

2. Absolute thread safety

Absolute thread safety fully satisfies Brian Goetz's definition of thread safety. This definition is actually very strict: a class usually pays a high price to achieve "callers need no additional synchronization measures, regardless of the runtime environment."

3. Relatively thread safe

Relative thread safety is what we usually mean when we say a class is "thread safe".
It guarantees that individual operations on the object are thread-safe, so no extra safeguards are needed for a single call; but for a sequence of calls in a particular order, additional synchronization on the calling side may be needed to guarantee correctness.
In the Java language, most thread-safe classes are relatively thread-safe, such as Vector, Hashtable, and the wrappers returned by Collections.synchronizedCollection().

4. Thread compatibility

Thread compatibility is what we usually mean by a class that is not thread-safe.
It means the object itself is not thread-safe, but it can still be used safely in a concurrent environment if the calling side applies synchronization correctly. Most classes in the Java API are thread-compatible, such as ArrayList and HashMap, the counterparts of Vector and Hashtable above.

5. Thread opposition

Thread opposition refers to code that cannot be used concurrently in a multi-threaded environment, regardless of whether the caller takes synchronization measures. Since the Java language is inherently multi-threaded, such code rarely appears.
A classic example is the suspend() and resume() methods of the Thread class. If two threads hold the same Thread object, one trying to suspend the target thread and the other trying to resume it, then no matter whether the calls are synchronized, the target thread risks deadlock. For this reason, both methods have been deprecated.

2. How to implement thread safety

Ensuring thread safety is classified according to whether synchronization means are required, and can be divided into synchronization schemes and no-synchronization schemes.

1. Mutex synchronization

Mutex synchronization is the most common means of ensuring concurrency correctness. Synchronization means that when multiple threads access shared data concurrently, it is guaranteed that the shared data is only used by one thread at the same time (at the same time, only one thread is operating the shared data). Mutual exclusion is a means to achieve synchronization, and critical sections, mutexes, and semaphores are the main ways to implement mutual exclusion. Therefore, in these four words, mutual exclusion is the cause, and synchronization is the effect; mutual exclusion is the method, and synchronization is the purpose.
In java, the most basic mutual exclusion synchronization method is the synchronized keyword. After the synchronized keyword is compiled, two bytecodes, monitorenter and monitorexit, will be formed before and after the synchronization block. These two bytecode instructions need A parameter of type reference to specify the object to lock and unlock.
In addition, ReentrantLock also achieves synchronization through mutual exclusion. In basic usage, ReentrantLock is very similar to synchronized, they both have the same thread reentrant feature.
The main problem of mutual exclusion synchronization is the performance problem caused by thread blocking and waking up, so this kind of synchronization is also called blocking synchronization. From the way of dealing with the problem, mutual exclusion synchronization is a pessimistic concurrency strategy. It is always believed that as long as the correct synchronization measures (such as locking) are not taken, then there will definitely be problems, no matter whether the shared data is really shared or not. If there is competition, it must be locked.
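A minimal sketch of the two primitives just described (class and method names are mine): a synchronized block, which the compiler brackets with monitorenter/monitorexit on the chosen object, and an explicit ReentrantLock released in a finally block.

```java
import java.util.concurrent.locks.ReentrantLock;

// Two equivalent ways to guard a shared counter with mutual exclusion.
public class MutexDemo {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void incWithSynchronized() {
        synchronized (monitor) {   // monitorenter on 'monitor' ... monitorexit
            count++;
        }
    }

    public void incWithLock() {
        lock.lock();               // blocking acquire; reentrant like synchronized
        try {
            count++;
        } finally {
            lock.unlock();         // always release in finally
        }
    }

    public int count() { return count; }

    public static void main(String[] args) throws InterruptedException {
        MutexDemo d = new MutexDemo();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) d.incWithSynchronized(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println("count = " + d.count());   // no lost updates under the lock
    }
}
```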

2. Non-blocking synchronization

With the development of hardware instruction sets, an optimistic concurrency strategy based on conflict detection has emerged. In plain terms: perform the operation first; if no other thread contends for the shared data, the operation succeeds; if there is contention and a conflict arises, apply a compensating measure (the most common one is simply to retry until it succeeds). Many implementations of this optimistic strategy do not need to suspend threads, so this kind of synchronization is called non-blocking synchronization.

Non-blocking synchronization is typically implemented with CAS (compare-and-swap). A CAS instruction takes three operands: the memory address (for a Java variable, its location in memory, denoted V), the expected old value (denoted A), and the new value (denoted B). When the CAS instruction executes, the processor updates the value at V to B if and only if the current value at V equals A; otherwise it performs no update. In either case it returns the old value at V, and the entire sequence is a single atomic operation.

Disadvantages of CAS:

  1. The ABA problem: CAS checks whether the value has changed and updates it only if it has not. But if a value starts as A, changes to B, and then changes back to A, the CAS check will conclude that it never changed, when in fact it did.
  2. The solution to the ABA problem is a version number: attach a version to the variable and increment it on every update, so ABA becomes 1A-2B-3A. The JDK's atomic package provides the AtomicStampedReference class for this. Its compareAndSet method first checks whether the current reference equals the expected reference and the current stamp equals the expected stamp; only if both match does it atomically set the reference and stamp to the given new values.
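The AtomicStampedReference fix can be sketched as follows (class and method names are illustrative): after another thread performs A -> B -> A, a CAS that remembers only the value would succeed, but the stamped CAS fails because the version has moved on.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// The stamp (version number) changes on every update, so A -> B -> A
// is no longer indistinguishable from an untouched A.
public class AbaDemo {
    public static boolean staleCasSucceeds() {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int observed = ref.getStamp();                       // this thread saw version 0
        // Meanwhile, another thread performs A -> B -> A, bumping the stamp twice.
        ref.compareAndSet("A", "B", observed, observed + 1);
        ref.compareAndSet("B", "A", observed + 1, observed + 2);
        // A plain value-only CAS would succeed here; the stamped CAS fails
        // because the current stamp is 2, not the 0 we observed.
        return ref.compareAndSet("A", "C", observed, observed + 1);
    }

    public static void main(String[] args) {
        System.out.println("stale CAS succeeded? " + staleCasSucceeds()); // false
    }
}
```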
3. Schemes requiring no synchronization

To ensure thread safety, it is not necessary to synchronize, and there is no causal relationship between the two. Synchronization is only a means to ensure the correctness of shared data contention. If a method does not involve shared data, it naturally does not need any synchronization operations to ensure correctness, so some code is inherently thread-safe.

1) Reentrant code

Reentrant code (also called pure code) can be interrupted at any point of its execution to run another piece of code, and when control returns, the original program proceeds without error. All reentrant code is thread-safe, but not all thread-safe code is reentrant.
Reentrant code is characterized by not depending on data stored on the heap or on shared system resources: all the state it uses is passed in as parameters, and it calls no non-reentrant methods.
(By analogy: synchronized supports lock reentrancy, meaning a thread that holds an object's lock can acquire that same lock again when it requests it.)

2) Thread local storage

If the data needed by a piece of code must be shared with other code, consider whether all the code sharing the data can be guaranteed to execute in the same thread. If so, we can limit the visibility of the shared data to that one thread, and no synchronization is needed to prevent data races between threads.
Applications with this property are common. Most architectures built on consumption queues (such as the producer-consumer pattern) try to complete the consumption of each item within a single thread. The most important example is the "one request per server thread" (Thread-per-Request) model of classic Web interaction; its wide adoption lets many Web server applications use thread-local storage to solve thread-safety issues.
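A minimal sketch of thread-local storage using Java's ThreadLocal (the names are mine): each thread calling handleRequest gets its own private buffer, so no synchronization is needed even though the field looks shared.

```java
// Each thread sees its own StringBuilder; there is no cross-thread sharing.
public class ThreadLocalDemo {
    private static final ThreadLocal<StringBuilder> BUF =
            ThreadLocal.withInitial(StringBuilder::new);

    public static String handleRequest(String name) {
        StringBuilder b = BUF.get();   // this thread's private buffer
        b.setLength(0);                // safe to reuse: no other thread touches it
        b.append("handled by ").append(name);
        return b.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () ->
                System.out.println(handleRequest(Thread.currentThread().getName()));
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

In a Thread-per-Request server, this is how per-request state (user context, date formatters, buffers) is kept race-free without locks.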

13. Have you ever used volatile? How does it ensure visibility, and what is the principle?

In Java, the volatile keyword is used on variables to ensure the visibility and ordering of reads and writes to them.

The principle of the volatile keyword is to forbid threads from working only with a cached copy of the variable, forcing reads and writes to go through main memory. When a thread modifies a volatile variable, it immediately flushes the new value to main memory instead of only updating its local working copy. When other threads read the variable, they fetch the latest value from main memory instead of using a stale cached one.

This mechanism guarantees the visibility of a volatile variable: one thread's modification of the variable is visible to other threads, which will see the latest value on their next read.

In addition, the volatile keyword constrains ordering. Roughly speaking, operations that appear before a volatile write in program order complete before the write, and operations after the write begin after it. This ensures that operations around a volatile variable are performed in the expected order.

Note that volatile only guarantees visibility and ordering for a single variable; it cannot make compound operations atomic. If you need atomic read-modify-write operations, consider the synchronized keyword or the atomic classes in the java.util.concurrent package.
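The single-variable caveat is easy to demonstrate. The sketch below (names are mine) increments both a volatile int and an AtomicInteger from two threads: the volatile counter routinely loses updates, because ++ is a read-modify-write, while the atomic counter always reaches the full total.

```java
import java.util.concurrent.atomic.AtomicInteger;

// volatile gives visibility, but counter++ is still three steps
// (read, add, write) and can interleave; AtomicInteger uses a CAS loop.
public class VolatileNotAtomicDemo {
    static volatile int plainCounter = 0;
    static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void run() throws InterruptedException {
        plainCounter = 0;
        atomicCounter.set(0);
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCounter++;                    // read-modify-write: NOT atomic
                atomicCounter.incrementAndGet();   // atomic CAS loop
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start(); t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run();
        System.out.println("volatile counter: " + plainCounter);        // often < 200000
        System.out.println("atomic counter:   " + atomicCounter.get()); // always 200000
    }
}
```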

14. Talk about your understanding of the role and principle of the volatile keyword

The two functions of the volatile keyword :

1. Ensure the visibility of variables : When a variable modified by the volatile keyword is modified by a thread, other threads can immediately get the modified result. When a thread writes data to a variable modified by the volatile keyword, the virtual machine forces it to be flushed to main memory. When a thread uses a value modified by the volatile keyword, the virtual machine forces it to read from main memory.

2. Prevent instruction reordering : Instruction reordering is a technique compilers and processors use to optimize programs. It only guarantees that the result of single-threaded execution is correct; it does not guarantee that the execution order matches the code order. This poses no problem in a single thread, but it does with multiple threads. A classic example is the double-checked-locking singleton, where the instance field is declared volatile precisely to prevent instruction reordering.

1. Visibility

To put it simply : When a shared variable is modified by volatile, it will ensure that the modified value will be updated to the main memory immediately, and when other threads need to read it, it will go to the memory to read the new value.

MESI protocol : In the early CPUs, it was implemented by adding LOCK# locks to the bus , but this method was too expensive, so Intel developed the cache coherence protocol, which is the MESI protocol.

Cache coherence idea : When a CPU writes data and finds that the variable being operated on is shared, that is, a copy also exists in other CPUs' caches, it sends a signal notifying the other CPUs that their cache line for that memory address is invalid. When another thread later uses the variable and finds its cache line invalid, it re-reads the value from main memory.


Bus sniffing mechanism :

The sniffing mechanism is essentially a listener. Revisiting the earlier flow, now with the MESI cache-coherence protocol and bus sniffing in place:

(1) CPU1 reads data a=1, and there is a copy of data a in CPU1's cache, and the cache line is set to (E) state

(2) CPU2 also executes the read operation. Similarly, CPU2 also has a copy of the data a=1. At this time, the bus sniffs out that CPU1 also has the data, and both cache lines of CPU1 and CPU2 are set to (S) state

(3) CPU1 modifies the data to a=2 in its cache; CPU1's cache line is set to the (M) state, a notification goes out on the bus, and CPU2's cache line is set to the (I) state

(4) CPU2 reads a again. Although CPU2 hits the data a=1 in the cache, it finds that the status is (I), so it directly discards the data and goes to the main memory to obtain the latest data.

2. Prohibition of reordering

Volatile prohibits reordering by using memory barriers, which guarantees ordering.
A memory barrier is a CPU instruction used to impose ordering restrictions on memory operations.
When the Java compiler generates instructions, it inserts memory barriers at the appropriate positions to prohibit the processor from reordering across them.

(1) Around a volatile write , barriers are inserted before and after it, so that earlier ordinary writes cannot be reordered after the volatile write, and later reads cannot be reordered before it.

(2) After a volatile read , barriers are inserted so that subsequent read and write instructions cannot be reordered before the volatile read.

Summary

Volatile can be regarded as a lightweight synchronized. Although volatile cannot guarantee atomicity , if the operation itself is atomic under multi-threading (such as a simple assignment), then using volatile will perform better than synchronized.

Volatile is well suited to a status flag that, once modified, must be immediately visible to other threads. It can also modify a trigger variable : as soon as the variable is modified by any thread, some action is triggered.

The most suitable scenario for volatile is that a thread modifies a variable modified by volatile, and other threads obtain the value of this variable.
When multiple threads modify the value of a variable concurrently, synchronized must be used for mutual exclusion synchronization.

The performance of volatile :
If a variable is declared volatile, then every read and write of it must go through main memory rather than staying in registers or relying purely on the CPU cache, so performance is affected to some extent.
In other words, volatile variables forgo some of the efficiency of the CPU cache.

15. Let’s talk about sub-database and sub-table. Why does table splitting require stopping the service? What can be done without stopping the service?

1. Sub-database and sub-table

Sub-database and sub-table is a strategy of horizontal database splitting, which is used to solve the performance bottleneck problem of a single database when the data volume increases or the access pressure increases. It splits a database into multiple sub-databases (sub-databases), and further splits the tables in each sub-database into multiple sub-tables (sub-tables), so as to realize decentralized storage of data and distribution of query load.

2. Why does table splitting require stopping the service?

It is usually necessary to stop the service when performing table splitting operations. There are two main reasons:

  1. Data migration : Table splitting involves migrating the original table data to a new split table. This process requires copying data from the original table to the new table, and may require data conversion and reallocation. In this process, in order to ensure the consistency and integrity of the data, it is necessary to stop the write operation to the database to avoid data loss or inconsistency.
  2. Changes in database structure : Table splitting operations usually require modification of the database structure, including creating new split tables, adjusting indexes, and updating foreign key relationships. These operations may cause the metadata of the database to change, and during the change process, the database may not be able to process query requests normally, so the service needs to be stopped.

3. What to do without stopping the service

If you do not want to stop the service while performing table splitting operations, you can consider the following methods:

  1. Gradual migration : You can create a new sub-table first, and write new data into the new table while maintaining the read operation on the old table. Over time, the data in the new table will gradually increase, while the data in the old table will gradually decrease. When there is enough data in the new table, the data in the old table can be migrated to the new table, and finally the read operation on the old table will be stopped.
  2. Data replication : The data of the original table can be copied to the new table without stopping the server, and the data consistency of the two tables can be maintained through data synchronization. During replication, care needs to be taken to handle concurrent writes to avoid data conflicts.
  3. Offload processing : The offload processing of requests can be realized by introducing a middleware or proxy layer. During the table splitting process, some requests can be routed to the new table while other requests are still routed to the old table to achieve gradual table splitting transition.

It should be noted that table splitting without stopping the service may increase the complexity and risk of the system, and requires careful evaluation and testing. Before splitting the table, you should back up the data, make a detailed plan, and ensure that there are sufficient testing and rollback strategies.
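As a hedged sketch of the routing idea behind the middleware/proxy approach above (the ShardRouter class, the `user_N` table naming, and the modulo-4 split are all hypothetical), requests can be mapped deterministically to a sub-table by hashing the shard key:

```java
// Hypothetical sketch: route a user id to one of N sub-tables by hash modulo.
// Real sharding middleware (e.g. a proxy layer) applies the same idea per request.
public class ShardRouter {
    private final int tableCount;

    public ShardRouter(int tableCount) {
        this.tableCount = tableCount;
    }

    // Map a user id to a sub-table name, e.g. user_0 .. user_3 for tableCount = 4.
    public String tableFor(long userId) {
        int idx = (Long.hashCode(userId) & Integer.MAX_VALUE) % tableCount;
        return "user_" + idx;
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(4);
        System.out.println(router.tableFor(10L));  // user_2
        System.out.println(router.tableFor(11L));  // user_3
    }
}
```

Because the mapping is a pure function of the key, old and new code paths route the same id to the same table during a gradual migration; changing the table count, however, remaps most keys, which is why consistent hashing is often preferred in practice.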

16. In the Java virtual machine, what types of data types can be divided into?

In the Java virtual machine, data types can be divided into the following categories:

  1. Basic data types (Primitive Types) : Java provides 8 basic data types, namely boolean, byte, short, int, long, float, double, and char. These types directly store the value of the data, rather than a reference to an object.
  2. Reference data types (Reference Types) : Reference data types refer to all types except basic data types, including classes (Class), interfaces (Interface), arrays (Array), and enumerations (Enum). A reference data type stores a reference to an object rather than the value of the object itself.
  3. Array Types : Arrays are a special reference data type that can store multiple elements of the same type. Arrays can be one-dimensional, two-dimensional, or even multidimensional.
  4. Class Types : Class types refer to classes defined with the class keyword. A class is a template for an object, describing its properties and behavior.
  5. Interface Types : Interface types refer to interfaces defined with the interface keyword. An interface defines the specification of a set of methods that a class implementing the interface must implement.
  6. Enumeration Types (Enum Types) : Enumeration types refer to enumeration classes defined with the enum keyword. An enumeration class represents a set of named constants, which can have their own methods and properties.
  7. Annotation Types : Annotation types refer to annotations defined with the @interface keyword. Annotations are used to add additional metadata to program elements (classes, methods, variables, etc.).

These data types have corresponding memory representation and operation methods in the Java virtual machine, and developers can choose the appropriate data type to store and operate data according to their needs.
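The categories above can be seen side by side in a short sketch (all names here are illustrative, declared only for the demo):

```java
public class TypeDemo {
    enum Color { RED, GREEN }              // enumeration type
    @interface Marker { }                  // annotation type
    interface Greeter { String greet(); }  // interface type

    public static void main(String[] args) {
        int i = 42;                        // primitive type: holds the value itself
        int[] nums = {1, 2, 3};            // array type: a special reference type
        Greeter g = () -> "hi";            // a class (here a lambda) implementing an interface

        System.out.println(nums.getClass().isArray());  // true
        System.out.println(Color.RED instanceof Enum);  // true: enum constants are Enum objects
        System.out.println(g.greet());                  // hi
    }
}
```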

17. How to understand stack and heap? What is stored in the heap? What is stored in the stack?

In computer science, the stack (Stack) and the heap (Heap) are two important memory areas used to store data when the program is running. They have different characteristics and uses.

1. Stack (Stack):

  • The stack is a linear data structure that adopts the principle of last-in-first-out (LIFO), that is, the data entered last is accessed first.
  • The stack stores local variables, method parameters, method calls, and return status.
  • The size of each stack frame is determined at compile time, and stack memory is allocated and released automatically by the JVM. Stack operations are fast, and memory allocation and reclamation are very efficient.
  • The size and life cycle of the data in the stack are determined. When a method is executed, the data in the stack will be released immediately.

2. Heap (Heap):

  • The heap is a way of dynamically allocating memory for storing objects and data structures.
  • Objects, arrays, and other dynamically allocated data created with the new keyword are stored in the heap.
  • The size of the heap can be adjusted dynamically, and the garbage collector is responsible for managing the allocation and release of memory.
  • The size and life cycle of data in the heap are uncertain. The life cycle of an object is determined by the logic of the program. When there is no reference to an object, the object will be reclaimed by the garbage collector.

Summarize:

  • The stack is used to store method call and return information, as well as data such as local variables, and its size and life cycle are determined.
  • The heap is used to store dynamically allocated objects and data, with an indeterminate size and lifetime.

It should be noted that the value of the basic data type in Java can be directly stored on the stack, while the object of the reference data type is stored on the heap, and the reference of the object is stored in the stack.
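That last point can be made concrete with a tiny sketch (names invented for the demo): the primitive lives in the current stack frame, the array object lives on the heap, and the variable `arr` is just a reference on the stack pointing into the heap:

```java
public class StackHeapDemo {
    public static void main(String[] args) {
        int x = 42;                  // primitive value: stored directly in the stack frame
        int[] arr = new int[3];      // array object: allocated on the heap
        // 'arr' itself is a reference on the stack pointing at the heap object
        arr[0] = x;                  // the primitive value is copied into the heap object
        System.out.println(arr[0]);  // prints 42
    }
}
```

When main returns, the frame (and with it `x` and the reference `arr`) disappears immediately; the array object on the heap becomes unreachable and is left for the garbage collector.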

18. Why should the heap and the stack be distinguished? Isn't it possible to store data in the stack?

The purpose of distinguishing the heap from the stack is to better manage memory and support program execution.

First of all, the stack is used to store method call and return information, as well as data such as local variables. The stack is characterized by fast allocation and release speed, and operates according to the principle of "first in, last out". When a method is called, the stack allocates a memory space for the method, called a stack frame. The stack frame contains information such as method parameters, local variables, and return addresses. When the method finishes executing, its corresponding stack frame will be released for use by other methods. Because the stack management method is simple and efficient, it is suitable for storing small data and temporary variables.

Whereas the heap is used to store dynamically allocated objects and data. The heap is characterized by flexible allocation and release methods, and can dynamically adjust the size of the memory space. Objects allocated in the heap can be shared and accessed in different parts of the program. Because the heap management method is relatively complicated, it needs to consider the life cycle of objects, garbage collection and other issues, so it is suitable for storing large data and long-term objects.

In summary, the distinction between stack and heap is mainly to meet the storage requirements of different types of data and the needs of memory management. The stack is suitable for storing temporary data such as method calls and local variables, while the heap is suitable for storing dynamically allocated objects and long-lived data.

19. Why not put the basic types in the heap?

Placing primitive types on the heap incurs additional memory overhead and performance penalty. Here are a few reasons:

  1. Memory overhead : Placing primitive types on the heap causes each variable to require additional memory space to store object headers and other management information. In contrast, storing primitive types on the stack requires only a fixed-size memory allocation, with no additional overhead.
  2. Access speed : The stack access speed is faster than the heap. Because the data in the stack is operated according to the principle of "first in, last out", the data in the stack can be accessed and released through simple pointer operations. The data in the heap needs to be accessed by reference, requiring additional addressing and dereferencing operations, so the access speed is relatively slow.
  3. Management complexity : Placing primitive types on the heap adds complexity to memory management. Objects in the heap need to perform operations such as garbage collection and memory release, while basic types stored in the stack do not need to perform these operations, making memory management simpler and more efficient.
  4. Passing by value : Primitive types are passed by copying their value, which avoids sharing and side effects. For reference types, it is the reference (itself a value) that is copied, which enables sharing of the underlying object.

To sum up, placing basic types on the stack can reduce memory overhead, improve access speed, and simplify memory management, so basic types are usually stored on the stack. Storing reference types in the heap enables dynamic allocation and sharing of objects.

20. Is parameter passing in Java by value or by reference?

In Java, parameter passing is done by value passing (pass by value). This means that when the method is called, a copy of the value of the parameter is actually passed to the method, not the parameter itself.

When passing primitive types such as int, float, boolean, etc., you are actually passing a copy of the value to the method. Modifying the parameter inside the method will not affect the original value, because it is only an operation on the copy.

When passing a reference type (such as object, array, etc.), you are actually passing a copy of the reference to the method. A reference itself is an address value that points to where the actual object is stored on the heap. Modifying the reference inside the method will not affect the original reference, but the properties and state of the object can be accessed and modified through the reference.

It should be noted that although a copy of the reference is passed when passing a reference type, the copy still points to the same object. Therefore, modifications to the object inside the method will affect the state of the original object.

Also, strings in Java are immutable objects, and when you pass a string, you actually pass a copy of the string to the method. When a string is modified inside a method, a new string object is created without modifying the original string object.

To sum up, parameter passing in Java is by value, whether it is a primitive type or a reference type. For primitive types, a copy of the value is passed; for reference types, a copy of the reference is passed, but still pointing to the same object.
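The three cases above can be demonstrated in one short sketch (method names are invented for the demo): mutating a primitive copy changes nothing for the caller, mutating the object through a copied reference is visible, and rebinding the copied reference is not:

```java
public class PassByValueDemo {
    static void bump(int n) {
        n++;                       // modifies only the copy in this stack frame
    }

    static void mutate(int[] a) {
        a[0] = 99;                 // dereferences the copied reference: changes the shared heap object
    }

    static void reassign(int[] a) {
        a = new int[]{7};          // rebinds only the local copy of the reference
    }

    public static void main(String[] args) {
        int x = 1;
        bump(x);
        System.out.println(x);       // 1: the caller's value is unchanged

        int[] arr = {1};
        mutate(arr);
        System.out.println(arr[0]);  // 99: the object's state changed through the reference

        reassign(arr);
        System.out.println(arr[0]);  // still 99: the caller's reference was not rebound
    }
}
```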

21. Is there a concept of pointers in Java?

In Java, the concept of pointers exists under the hood, but direct access to and manipulation of pointers is not allowed. Instead, Java uses references to operate on objects.

In Java, a reference is a variable pointing to an object, which stores the address of the object in memory. Through references, we can access and manipulate objects indirectly. Unlike pointers, Java references do not allow pointer arithmetic and cannot directly access the memory address of an object.

Java's reference has the feature of automatic memory management, that is, the garbage collection mechanism. In Java, when an object is no longer referenced, the garbage collection mechanism will automatically reclaim the memory space occupied by the object, thus avoiding the problems of memory leaks and dangling pointers.

Therefore, although there is no concept of directly manipulating pointers in Java, Java realizes indirect manipulation of objects through reference, and provides an automatic memory management mechanism, making programming more convenient and safe for developers.

22. In Java, what parameters are used to set the size of the stack?

In Java, the size of the stack can be set through virtual machine parameters. Specifically, the following parameters can be used to adjust the size of the stack:

-Xss: This parameter is used to set the size of the thread stack.

For example, -Xss1m means setting the thread stack size to 1MB.

Please note that the size of the stack is set independently for each thread, so by adjusting the size of the thread stack, you can control the stack space occupied by each thread.

It should be noted that if the stack size is set too small, it may cause stack overflow, and if it is set too large, it may take up too much memory resources. Therefore, you need to be cautious when adjusting the size of the stack, and make reasonable settings according to specific application scenarios and requirements.

In addition, the size of the stack is also limited by the operating system and hardware, beyond a certain limit, it will not be possible to set a larger stack space. Therefore, when setting the size of the stack, it is necessary to consider the limitations of the system and make appropriate adjustments.
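The effect of -Xss can be observed with a small demo (class name invented for the demo): run it as `java -Xss512k StackDepth` and again as `java -Xss2m StackDepth`, and the overflow depth printed will differ roughly in proportion to the stack size:

```java
public class StackDepth {
    static long depth = 0;

    // Each call adds one frame; eventually the thread stack is exhausted.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("stack overflowed at depth " + depth);
        }
    }
}
```

Catching StackOverflowError like this is fine for a demo, though in production code it usually signals a bug (unbounded recursion) rather than something to recover from.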

23. How much space does an empty Object occupy?

In Java, the space occupied by an empty Object is mainly determined by the space occupied by the object header and alignment padding.

The object header contains some metadata information, such as the object's hash code, lock status, GC flag, etc. On a 64-bit JVM, the size of the object header is usually 12 bytes.

In addition, due to the JVM's memory allocation and alignment requirements, the size of an object must be byte-aligned. In most cases, Java object sizes are automatically padded up to a multiple of 8 bytes.

Therefore, an empty Object typically occupies 16 bytes on a 64-bit JVM (with compressed class pointers enabled): 12 bytes of object header plus 4 bytes of alignment padding.

It should be noted that different JVM implementations may vary, so the actual size of an empty object may differ. Also, if instance variables are added to the class, the object's size will grow, depending on the number and type of instance variables added.

24. What are the types of object references?

In Java, object reference types can be divided into the following categories:

  1. Strong Reference : A strong reference is the most common type of reference. As long as an object is strongly referenced, it will not be reclaimed by the garbage collector, even when memory runs out. Only when an object has no strong references will it be judged reclaimable.
  2. Soft Reference : Soft references describe objects that are useful but not essential. When memory is low, the garbage collector may reclaim softly referenced objects. In Java, the SoftReference class is used to create soft references.
  3. Weak Reference : Weak references are weaker than soft references and describe non-essential objects. When the garbage collector runs, weakly referenced objects are reclaimed regardless of whether memory is sufficient. In Java, the WeakReference class is used to create weak references.
  4. Phantom Reference : A phantom reference is the weakest type of reference and provides almost no direct access to the object. Its main purpose is to track when an object is reclaimed by the garbage collector. In Java, the PhantomReference class is used to create phantom references.

The main difference between these reference types is how the garbage collector treats them differently. Strong references will not be recycled when memory is insufficient, while soft references, weak references, and phantom references may be recycled when memory is insufficient.
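The difference between strong and weak references can be observed directly (class name invented for the demo; note that the JLS does not guarantee that System.gc() collects anything, though HotSpot normally clears weak referents on the next gc):

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class RefTypesDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        WeakReference<Object> weak = new WeakReference<>(obj);
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);

        System.out.println(weak.get() != null);  // true: the strong reference 'obj' keeps it alive

        obj = null;          // drop the only strong reference
        System.gc();         // on HotSpot the weak referent is normally cleared now

        System.out.println(weak.get() == null);  // typically true after gc
        System.out.println(soft.get() != null);  // soft refs survive while memory is plentiful
    }
}
```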

25. Talk about the garbage collection algorithm

The garbage collection algorithm is the core part of automatic memory management. It is responsible for automatically reclaiming unused objects at runtime and releasing the memory space they occupy. Garbage collection algorithms mainly include the following:

  1. Reference Counting Algorithm (Reference Counting) : This algorithm records the number of times an object is referenced by maintaining a reference counter for each object. When the reference counter is 0, it means that the object is no longer referenced and can be recycled. However, the reference counting algorithm cannot solve the problem of circular references, that is, two or more objects refer to each other, but are not referenced by other objects. Therefore, the reference counting algorithm is not used as the main garbage collection algorithm in Java.
  2. Mark and Sweep : Mark and Sweep is one of the most commonly used garbage collection algorithms in Java. It is divided into two phases: marking phase and clearing phase. In the marking phase, the garbage collector traverses all reachable objects starting from the root object and marks them. During the cleanup phase, the garbage collector removes unmarked objects and reclaims the memory space they occupy. The mark-sweep algorithm can effectively handle the situation of circular references, but it will produce memory fragmentation.
  3. Copying algorithm (Copying) : The copying algorithm divides the available memory space into two areas, usually called "From" space and "To" space. During garbage collection, all surviving objects are first copied from the "From" space to the "To" space, and then all objects in the "From" space are cleared. The copy algorithm is simple and efficient, and will not generate memory fragmentation, but it will waste part of the memory space.
  4. Mark-compression algorithm (Mark and Compact) : The mark-compression algorithm is a garbage collection algorithm that combines the mark-sweep algorithm and the copy algorithm. It first marks all surviving objects through the marking phase, then moves them to one end, compacts them, and finally clears the memory space outside the boundary. The mark-compression algorithm can both handle circular references and reduce memory fragmentation.

The garbage collector usually selects the appropriate garbage collection algorithm according to different situations. For example, the young generation uses a copy algorithm, and the old generation uses a mark-sweep or mark-compact algorithm. In addition, Java also provides different garbage collector implementations, such as Serial, Parallel, CMS, G1, etc., each garbage collector has different characteristics and applicable scenarios.
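The marking phase described above is essentially a graph traversal from the roots. A toy sketch (the object graph here is just strings, purely illustrative) also shows why mark-sweep handles circular references that defeat reference counting:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MarkDemo {
    // A toy object graph: each "object" lists the objects it references.
    static Map<String, List<String>> refs = new HashMap<>();

    // Mark phase: everything reachable from the roots is live.
    static Set<String> mark(List<String> roots) {
        Set<String> live = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {
            String obj = work.pop();
            if (live.add(obj)) {                       // first visit: mark and follow edges
                work.addAll(refs.getOrDefault(obj, List.of()));
            }
        }
        return live;
    }

    public static void main(String[] args) {
        refs.put("root", List.of("a"));
        refs.put("a", List.of("b"));
        refs.put("b", List.of("a"));   // cycle a <-> b, but reachable from root: stays live
        refs.put("c", List.of("d"));   // cycle c <-> d, unreachable from any root
        refs.put("d", List.of("c"));

        Set<String> live = mark(List.of("root"));
        System.out.println(live.contains("a"));  // true
        System.out.println(live.contains("c"));  // false: c and d would be swept
    }
}
```

Reference counting would never free c and d, since each keeps the other's count above zero; reachability-based marking collects them naturally.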

26. How to solve the problem of memory fragmentation?

Memory fragmentation means that when using dynamic memory allocation, the memory space is divided into multiple small blocks, and there are some unused gaps between these small blocks. The existence of memory fragmentation may lead to lower memory utilization, and even cause memory allocation failure.

In order to solve the problem of memory fragmentation, the following methods can be adopted:

  1. Memory pool : Using memory pool technology can avoid frequent memory allocation and release operations, thereby reducing memory fragmentation. The memory pool pre-allocates a contiguous memory space when the program starts, and divides it into multiple fixed-size memory blocks. When memory is needed, an available memory block is obtained directly from the memory pool instead of dynamic memory allocation each time. This can reduce fragmentation and improve memory utilization.
  2. Memory compaction : Memory compaction means consolidating scattered memory blocks so that free blocks become contiguous. Allocated memory blocks can be moved together by means of a memory copy, thereby eliminating fragmentation. This process needs to suspend the execution of the program, so it is generally performed during idle time.
  3. Allocation algorithm optimization : The memory allocation algorithm can also optimize memory fragmentation. Common algorithms include first-fit, best-fit, and worst-fit algorithms. The first-fit algorithm selects the first free block that meets the requirements for allocation, the best-fit algorithm selects the smallest free block that meets the requirements for allocation, and the worst-fit algorithm selects the largest free block that meets the requirements for allocation. Different algorithms have different effects on memory fragmentation, and an appropriate algorithm can be selected according to the actual situation.
  4. Compression algorithm : The compression algorithm is a method of compressing allocated memory blocks so that free memory blocks are arranged continuously. By moving blocks of memory towards one end, free blocks can be merged together, reducing fragmentation. This process needs to suspend the execution of the program, so it is generally performed during idle time.

The above are some common methods to solve the problem of memory fragmentation, and the specific choice can be determined according to the actual situation and needs.
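A minimal sketch of the memory-pool idea from point 1 (the BlockPool class and its sizes are hypothetical): fixed-size blocks are recycled through a free list instead of being re-allocated, so the backing memory never fragments:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical fixed-size block pool: acquire() reuses a recycled block when
// one is available, and release() returns a block to the free list.
public class BlockPool {
    private final Deque<byte[]> free = new ArrayDeque<>();
    private final int blockSize;

    public BlockPool(int blockSize, int blocks) {
        this.blockSize = blockSize;
        for (int i = 0; i < blocks; i++) {
            free.push(new byte[blockSize]);   // pre-allocate the pool up front
        }
    }

    public byte[] acquire() {
        byte[] b = free.poll();
        return (b != null) ? b : new byte[blockSize];  // grow on demand if exhausted
    }

    public void release(byte[] block) {
        if (block.length == blockSize) {
            free.push(block);                 // recycle instead of discarding
        }
    }

    public int available() {
        return free.size();
    }
}
```

Because every block has the same size, any free block satisfies any request, which is exactly what prevents the small unusable gaps that variable-size allocation leaves behind.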

27. Talk about the underlying principles of JVM and troubleshooting commands

JVM (Java Virtual Machine) is the abbreviation of Java Virtual Machine, which is the basic platform for running Java programs. It is a virtual computer that simulates running Java bytecode on a physical machine and is responsible for interpreting and executing Java programs.

The underlying principle of JVM:

  1. Class loading : The JVM compiles the Java source code into a bytecode file, and then loads the bytecode file into memory through a class loader. Class loaders are responsible for finding, loading, and validating class files.
  2. Memory management : JVM uses memory manager to manage memory, including heap, stack and method area. The heap is used to store object instances, the stack is used to store local variables and method calls, and the method area is used to store class information and constant pools.
  3. Garbage collection : JVM automatically manages memory through garbage collection mechanism. It periodically checks for objects that are no longer referenced and reclaims the memory space they occupy.
  4. Just-in-time compilation : JVM uses a just-in-time compiler to convert bytecode into local machine code to improve program execution efficiency.
  5. Exception handling : JVM provides an exception handling mechanism that can catch and handle exceptions in the program.

JVM troubleshooting command:

  1. jps: Used to list the currently running Java processes, displaying the process ID and class name.
  2. jstat: Used to monitor information such as JVM memory, garbage collection, and class loading.
  3. jmap: Used to generate a heap dump snapshot and view the heap memory usage.
  4. jstack: Used to generate thread dump snapshots, view thread status and call stack information.
  5. jinfo: Used to view and modify the configuration parameters of the JVM.
  6. jconsole: Graphical tool for monitoring and managing JVM.
  7. jcmd: Used to send diagnostic commands to a running Java process.

These commands help developers monitor and debug Java applications, locate problems and optimize performance. To use these commands, you need to enter the corresponding commands and parameters in the command line. For specific usage methods, please refer to the documentation of each command or use the help command (for example: jps -help).

The 40-year-old architect Nien reminded : JVM is not only the absolute focus of the interview, but also the absolute difficulty of the interview.

It is recommended that you have an in-depth and detailed grasp. For specific content, please refer to the "Nin Java Interview Collection - Topic 01: JVM Interview Questions" PDF. This topic has a systematic, systematic, and comprehensive introduction to JVM.

If you want to write JVM tuning practice into your resume , you can ask Nien for guidance.

28. Talk about the principle of ZK consistency

ZooKeeper is a distributed coordination service that provides a highly available, consistent and reliable data storage and access mechanism. The consistency principle of ZooKeeper is mainly implemented based on the ZAB (ZooKeeper Atomic Broadcast) protocol. The following is a detailed description of the ZooKeeper consistency principle:

  1. Atomic Broadcast : ZooKeeper uses the atomic broadcast protocol to ensure data consistency. This protocol ensures the sequential delivery and one-shot submission of messages across a cluster of ZooKeeper servers. The ZAB protocol is divided into two phases: broadcast and commit.
  2. Leader Election (Leader Election) : The servers in the ZooKeeper cluster select a leader (Leader) through an election mechanism. The leader is responsible for processing write requests from clients and broadcasting the results to other servers. During the election process, servers exchange votes with each other, and eventually the server with the most up-to-date data (the largest zxid, with ties broken by the larger server id) is elected as the leader.
  3. Consistency of write operations : When a client sends a write request to the leader, the leader forwards the request to other servers and waits for confirmation from the majority of servers. Once the majority of servers confirm receipt of the write request and write data successfully, the leader will return the write operation result to the client. This ensures the consistency of write operations.
  4. Consistency of read operations : Read requests are served directly by whichever server the client is connected to, from its local copy of the data, so a read may be slightly stale. If a client needs the latest data, it can issue a sync() before reading to force that server to catch up with the leader. This gives ZooKeeper sequential consistency for reads rather than strict linearizability.
  5. Data synchronization : ZooKeeper uses the majority (Majority) principle to guarantee data consistency. When a server receives a write request, it writes the change to local storage and broadcasts the change to the other servers, which apply it to their local storage in turn. A write operation is considered successful only after a majority of the servers have applied the change.

In short, ZooKeeper guarantees data consistency through leader election, the atomic broadcast protocol, and the majority principle. This mechanism ensures that the data on all servers in a ZooKeeper cluster is consistent, and provides a highly available and reliable distributed coordination service.

The 40-year-old architect Nien reminded : ZooKeeper is not only the absolute focus of the interview, but also the absolute difficulty of the interview.

It is recommended that you have an in-depth and detailed grasp. For specific content, please refer to the "Java High Concurrency Core Programming, Volume 1 (Enhanced Edition): NIO, Netty, Redis, ZooKeeper" PDF. This topic has a systematic, systematic, and comprehensive introduction to ZooKeeper.

29. Talk about Redis data structures, persistence, Sentinel, and Cluster data sharding rules

Redis is an in-memory database that supports multiple data structures and persistence modes, and provides Sentinel and Cluster features. The following is a detailed description of Redis data structures, persistence, Sentinel, and Cluster:

1. Data structures:

  • String : the most basic data structure; it can store strings, integers, and floating-point numbers.
  • List : an ordered list of strings; elements can be added or removed at the head or tail.
  • Set : an unordered collection of unique strings; it supports set operations such as intersection, union, and difference.
  • Hash : an unordered map of field-value pairs, well suited to storing objects.
  • Sorted Set : an ordered collection of strings in which each element is associated with a score and can be sorted by that score.

2. Persistence:

  • RDB (Redis Database) persistence : save the data in the memory to the disk in binary format, you can periodically save snapshots through configuration or manually execute the SAVE and BGSAVE commands.
  • AOF (Append-Only File) Persistence : Save the write operation log to the disk in an appended manner. You can configure periodic flushing or manually execute the BGREWRITEAOF command.

3. Sentinel:

  • Sentinel is an independent process used to monitor the status of Redis master and slave nodes.
  • When the master node fails, Sentinel can automatically upgrade a slave node to the master node to ensure high availability.
  • Sentinels are also responsible for monitoring the health status of the master and slave nodes, and performing failover and failback when needed.

4. Cluster:

  • Redis cluster is a distributed solution that can disperse and store data on multiple nodes, providing high availability and scalability.
  • The cluster uses hash slots (Hash Slot) to fragment data, and each node is responsible for processing a part of the data in the hash slot.
  • The nodes in the cluster communicate through the Gossip protocol to realize fault detection and information transmission between nodes.
  • The client can access the cluster through the cluster proxy (Cluster Proxy) or the Redis client library, and the cluster proxy will forward the request to the correct node.

In short, Redis provides a variety of data structures and persistence methods, and you can choose a suitable storage method according to different needs. Sentinel and cluster functions can provide high availability and scalability, making Redis more stable and reliable in a distributed environment.

Nien, a 40-year-old architect, reminded : Redis is not only the absolute focus of the interview, but also the absolute difficulty of the interview.

It is recommended that you have an in-depth and detailed grasp. For specific content, please refer to the "Nin Java Interview Collection - Topic 14: Redis Interview Questions" PDF. This topic has a systematic, systematic, and comprehensive introduction to Redis.

If you want to write Redis high concurrency into your resume, you can ask Nien for guidance.

30. The principle of Kafka consistency, how to solve the loss and duplication of messages during consumption?

Kafka is a distributed stream processing platform, one of its design goals is to provide highly reliable message delivery. The consistency principle of Kafka is mainly based on the distributed replication and log submission mechanism.

Kafka achieves high reliability by dividing messages into multiple partitions and replicating each partition across multiple brokers. Each partition has a leader replica on one broker and follower replicas on others. The leader handles reads and writes for the partition, while the followers replicate the leader's log. When the broker hosting the leader fails, one of the in-sync followers can be promoted to leader, ensuring message durability and availability.

In Kafka, consumers can consume messages through consumer groups. Consumers in each consumer group can consume different partitions in parallel, which can improve the throughput of consumption. Kafka uses offsets to track where consumers are consuming on each partition. Consumers can submit offsets periodically to ensure the reliability of consumption progress.

Regarding the solution to message loss and duplication, Kafka provides the following mechanisms:

  1. Persistent storage : Kafka uses persistent logs to store messages to ensure message reliability. Even in the event of a failure, Kafka can recover messages from the log.
  2. Redundant copy : Kafka copies the messages of each partition to copies on multiple Brokers to ensure that even if a Broker fails, messages can still be obtained from other copies.
  3. Offset submission : Consumers submit offsets regularly to ensure the reliability of consumption progress. If a consumer fails, it can continue to consume from the last committed offset, avoiding repeated consumption of messages.
  4. Exactly Once semantics : Kafka provides support for Exactly Once semantics to ensure that messages are delivered exactly once between producers and consumers. This is achieved through transaction mechanisms and idempotence guarantees.

Through these mechanisms, Kafka can provide highly reliable message delivery and effectively solve the problems of message loss and duplication.
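To make the duplication point above concrete, here is a minimal plain-Java sketch of consumer-side idempotence (this is illustrative application logic, not Kafka client API code; the `IdempotentConsumer` class and its method names are invented for the example). Each message carries an ID, and a set of processed IDs filters out redeliveries:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of consumer-side idempotence; not Kafka client code.
public class IdempotentConsumer {
    // IDs of messages that have already been processed
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true if the message was processed, false if it was a duplicate. */
    public boolean handle(String messageId, String payload) {
        // Set.add() returns false when the ID is already present
        if (!processed.add(messageId)) {
            return false; // redelivered message: skip the business side effect
        }
        // ... perform the real business side effect here ...
        return true;
    }

    public static void main(String[] args) {
        IdempotentConsumer consumer = new IdempotentConsumer();
        System.out.println(consumer.handle("msg-1", "hello")); // true: first delivery
        System.out.println(consumer.handle("msg-1", "hello")); // false: duplicate dropped
    }
}
```

In a real system the processed-ID set would live in a database or Redis rather than in memory, so it survives consumer restarts.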

31. Talk about the advantages and disadvantages of microservices

A microservices architecture is an approach to software development that breaks down an application into a set of small, independently deployable services. It has the following advantages and disadvantages:

Advantages:

  1. Loose coupling : The microservice architecture splits the application into multiple small services, each of which is independent and can be developed, deployed, and scaled independently. This loosely coupled design enables teams to develop and deploy different services more independently, improving development efficiency and flexibility.
  2. Scalability : Since the services of the microservice architecture are independent, each service can be independently expanded according to demand. This scalability makes it easier to deal with high concurrency and large-scale traffic.
  3. Technology Diversity : The microservice architecture allows each service to use different technology stacks and programming languages ​​because each service is independent. This allows teams to choose the technology that best suits each service need, increasing development flexibility and innovation.
  4. High availability : Each service in the microservice architecture can be deployed independently and scaled horizontally, making the system more available. Even if one service fails, other services can still work normally.

Disadvantages:

  1. System complexity : The microservice architecture splits the application into multiple services, resulting in increased system complexity. Issues such as communication, data consistency, and service discovery among multiple services need to be managed and coordinated.
  2. Challenges of Distributed Systems : Each service in a microservice architecture is independent and communicates with each other over a network. This brings distributed system challenges such as network latency, distributed transactions, and data consistency.
  3. Operational complexity : As the number of services in a microservice architecture grows, so does operational complexity: deployment, monitoring, logging, and troubleshooting must be managed across many services.
  4. Team Collaboration Cost : Microservice architecture requires teams to have knowledge of distributed systems and service governance. Teams need to coordinate and collaborate to ensure that the development and deployment of each service goes smoothly.

To sum up, the microservice architecture has the advantages of flexibility, scalability, and high availability, but it also needs to deal with the disadvantages of complexity, distributed system challenges, and operation and maintenance complexity. When choosing a microservice architecture, you need to weigh these pros and cons and make decisions on a case-by-case basis.

32. Talk about the underlying implementations of synchronized and Lock

Both synchronized and Lock are mechanisms used to implement thread synchronization in Java, and their underlying implementations are different.

1. The underlying implementation of synchronized:

  • Every object in Java has a monitor lock (also known as a built-in or intrinsic lock), which the synchronized keyword acquires and releases. At the bytecode level this corresponds to the monitorenter and monitorexit instructions; since JDK 6 the JVM also optimizes contention by escalating through biased, lightweight, and heavyweight lock states.
  • When a thread executes into a synchronized code block, it tries to acquire the object's monitor lock.
  • If the lock is not occupied by other threads, the thread will acquire the lock and continue to execute the contents of the synchronized code block.
  • If the lock is already occupied by another thread, the thread will be blocked until the lock is acquired.
  • When the thread finishes executing the synchronized code block or an exception occurs, the lock will be released.

2. The underlying implementation of Lock:

  • Lock is an interface provided in the java.util.concurrent package, which defines the basic operations of locks.
  • The common implementation class of the Lock interface is ReentrantLock, which uses a synchronizer called AQS (AbstractQueuedSynchronizer) to implement the lock function.
  • AQS is a framework for building locks and synchronizers. It maintains a queue of threads waiting for the lock and uses CAS (Compare-And-Swap) operations to update its synchronization state atomically.
  • When a thread calls Lock's lock() method, it tries to acquire the lock. If the lock is already occupied by another thread, the thread will be blocked until the lock is acquired.
  • Different from synchronized, Lock provides a more flexible way to acquire and release locks, for example, you can set the timeout period for acquiring locks, and you can acquire and release locks separately in different code blocks.

In general, both synchronized and Lock implement thread synchronization, but they sit at different levels: synchronized relies on the object's monitor lock inside the JVM, while Lock implementations such as ReentrantLock are built in Java on top of AQS. Lock offers more flexibility and features than synchronized (timed and interruptible acquisition, fairness, multiple conditions) but is also more error-prone to use, since the lock must be released explicitly. Which mechanism to choose depends on the specific needs and scenario.
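The contrast above can be sketched in a few lines (a minimal illustration, with `CounterDemo` as an invented class name): synchronized releases the monitor automatically, while ReentrantLock makes acquisition and release explicit and adds tryLock:

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch contrasting the two mechanisms.
public class CounterDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // synchronized: the JVM acquires the object's monitor on entry and
    // releases it on exit, including when an exception is thrown
    public synchronized void incrementWithMonitor() {
        count++;
    }

    // Lock: explicit acquire/release; tryLock() fails fast instead of blocking
    // (a timed variant tryLock(timeout, unit) is also available)
    public boolean incrementWithLock() {
        if (lock.tryLock()) {
            try {
                count++;
                return true;
            } finally {
                lock.unlock(); // must release manually, always in finally
            }
        }
        return false; // lock was held by another thread
    }

    public int getCount() { return count; }

    public static void main(String[] args) {
        CounterDemo demo = new CounterDemo();
        demo.incrementWithMonitor();
        demo.incrementWithLock();
        System.out.println(demo.getCount()); // 2
    }
}
```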

33. Talk about the underlying implementation of hashmap

HashMap is a commonly used data structure in Java, which is implemented based on a hash table (Hash Table). The following is a detailed description of the underlying implementation of HashMap:

1. Data structure:

  • HashMap is composed of arrays and linked lists (or red-black trees).
  • Array is the main body of HashMap, used to store elements.
  • A linked list (or red-black tree) solves the problem of hash collisions. When multiple elements are mapped to the same array position, they will be stored in this position in the form of a linked list (or red-black tree).

2. Hash algorithm:

  • When an element is inserted into a HashMap, its position in the array is determined from the hash code of the key (computed by the key's hashCode() method and then spread by HashMap's internal hash() function).
  • HashMap uses a hash algorithm to map the hash code of the element to the index position of the array.
  • The goal of the hash algorithm is to make the elements evenly distributed in the array and reduce the probability of hash collisions.

3. Resolve hash conflicts:

  • When multiple elements are mapped to the same array location, they will be stored in this location in the form of a linked list (or red-black tree).
  • In JDK 7 and earlier, hash conflicts were resolved with linked lists only.
  • Since JDK 8, when a bucket's linked list grows beyond a threshold (8 by default) and the array capacity is at least 64, the list is converted into a red-black tree to improve lookup efficiency.

4. Expansion:

  • When the number of elements in the HashMap exceeds capacity × load factor (0.75 by default), a resize is triggered.
  • Resizing creates a new array of twice the capacity and redistributes the elements of the old array into it.
  • During redistribution each element's bucket index is recomputed (in JDK 8 an element either stays at its old index or moves to old index + old capacity), keeping elements evenly spread across the new array.

In general, the underlying implementation of HashMap is implemented through a combination of arrays and linked lists (or red-black trees). It uses a hash algorithm to map the hash code of the element to the index position of the array, and uses a linked list (or red-black tree) to solve the problem of hash collisions. When inserting, searching and deleting elements, HashMap will calculate the array position according to the hash code of the element, and operate on this position. When the number of elements exceeds a certain threshold, HashMap will automatically expand to ensure that the elements are evenly distributed in the array. This can improve the efficiency of HashMap's insertion, search and deletion operations.
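The index calculation described above can be shown directly (a small sketch; `HashIndexDemo` and `indexFor` are illustrative names, though the `hash` function below is the same spreading formula JDK 8's `HashMap.hash()` uses):

```java
// Minimal sketch of how HashMap (JDK 8) derives a bucket index from a key.
public class HashIndexDemo {

    // Same spreading formula as java.util.HashMap.hash(): XOR the high 16 bits
    // into the low 16 bits so small tables still benefit from the high bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // With a power-of-two capacity n, "hash & (n - 1)" is a cheap "hash % n".
    static int indexFor(int hash, int capacity) {
        return hash & (capacity - 1);
    }

    public static void main(String[] args) {
        int capacity = 16; // HashMap's default initial capacity
        int h = hash("interview");
        System.out.println("spread hash  = " + h);
        System.out.println("bucket index = " + indexFor(h, capacity)); // always in [0, 15]
    }
}
```

Because the capacity is always a power of two, the bitwise AND replaces an expensive modulo, which is one reason HashMap doubles its capacity when resizing.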

34. Talk about the underlying implementation of Java serialization

Java's serialization is a process of converting an object into a byte stream so that the object can be transmitted over the network or persisted to disk. Java provides two serialization methods: default serialization and custom serialization.

Default serialization means that when a class implements the java.io.Serializable interface, Java performs serialization and deserialization automatically. During default serialization, Java writes the object's state to an output stream as a byte stream; deserialization is the reverse process of converting the byte stream back into an object.

The underlying implementation of Java's default serialization mainly involves the following classes and interfaces:

  1. java.io.ObjectOutputStream: the object output stream, used to serialize objects into a byte stream. Its writeObject() method writes an object's state to the output stream.
  2. java.io.ObjectInputStream: the object input stream, used to deserialize a byte stream back into an object. Its readObject() method rebuilds an object's state from the byte stream.
  3. The java.io.Serializable interface: a marker interface that identifies a class as serializable. All non-transient fields of such a class are serialized. (Note: Serializable itself does not require a no-arg constructor; during deserialization, the no-arg constructor of the closest non-serializable superclass is invoked.)

During serialization, Java processes each field of the object recursively. Primitive and String fields are written directly into the byte stream, while reference-type fields are serialized recursively as nested objects.

Custom serialization means implementing the java.io.Externalizable interface to take full control of how an object is serialized and deserialized. Unlike default serialization, it requires implementing the writeExternal() and readExternal() methods by hand (and an Externalizable class must provide a public no-arg constructor, which is invoked during deserialization).

In general, Java's serialization is built on the object output and input streams: an object's state is converted into a byte stream during serialization, and the byte stream is converted back into object state during deserialization.
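A complete round trip through a byte array shows the mechanics described above (a minimal sketch; the `User` class and the helper method names are invented for the example):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Minimal default-serialization round trip through an in-memory byte array.
public class SerializationDemo {

    // Implementing the marker interface is all that is needed for default serialization.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        transient String password; // transient fields are skipped by serialization

        User(String name, String password) {
            this.name = name;
            this.password = password;
        }
    }

    static byte[] serialize(Object obj) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj); // writes the object graph as a byte stream
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject(); // rebuilds the object from the byte stream
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        User copy = (User) deserialize(serialize(new User("nien", "secret")));
        System.out.println(copy.name);     // "nien" survives the round trip
        System.out.println(copy.password); // null: the transient field was not written
    }
}
```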

35. Talk about the underlying implementation of MySQL

MySQL is a relational database management system whose underlying implementation involves multiple components and technologies.

  1. Storage engine : MySQL supports multiple storage engines, such as InnoDB, MyISAM, and Memory. The storage engine is responsible for storing and retrieving data. InnoDB, the default engine, supports transactions, row-level locking, and crash recovery, making it suitable for high-concurrency, high-reliability scenarios.
  2. File system : MySQL uses a file system to manage data files. Each database corresponds to a folder, and each table corresponds to one or more files. Data files include table structures, indexes, data and logs, etc.
  3. Query optimizer : MySQL's query optimizer is responsible for parsing SQL statements and generating optimal query plans. It will choose the best execution path based on table statistics and indexes to improve query performance.
  4. Query execution engine : MySQL's query execution engine is responsible for executing query plans and returning results. Different storage engines have different query execution engines. For example, InnoDB uses B+ tree index for data retrieval, and MyISAM uses hash index and full-text index.
  5. Locks and concurrency control : MySQL uses locks and concurrency control mechanisms to ensure data consistency and correctness of concurrent access. InnoDB uses row-level locks to implement highly concurrent read and write operations, and provides multi-version concurrency control (MVCC) to resolve read and write conflicts.
  6. Log system : MySQL's log system includes transaction logs (redo log) and binary logs (binlog). Transaction logs are used for crash recovery and transaction persistence, and binary logs are used for master-slave replication and data recovery.
  7. Cache management : MySQL uses caching to improve query performance. The query cache stored the results of identical queries to speed up repeated reads, but because it degrades performance under high concurrency it was deprecated in MySQL 5.7 and removed in MySQL 8.0.

In general, the underlying implementation of MySQL includes components and technologies such as storage engine, file system, query optimizer, query execution engine, lock and concurrency control, log system, and cache management. These components and technologies work together to enable MySQL to provide high-performance, reliable and scalable database services.

Nien reminded that if you want to thoroughly grasp the underlying implementation of mysql, you can follow the video "Starting from 0, handwriting mysql step by step" by the Nien architecture team, and write your own mysql by hand to gain an in-depth understanding of mysql.

36. Talk about the general logic of the underlying implementation of Spring IOC, AOP, and MVC

The Spring Framework is an open source Java enterprise application development framework that provides a lightweight, non-intrusive solution for building enterprise applications. The core of the Spring framework is the Spring IOC (Inversion of Control, inversion of control) container, Spring AOP (Aspect-Oriented Programming, aspect-oriented programming) and Spring MVC (Model-View-Controller, model-view-controller).

1. The underlying implementation logic of Spring IOC:

  • The Spring IOC container is responsible for managing and organizing the creation and lifecycle of objects (also known as beans) in the application. It uses the method of inversion of control to hand over the creation of objects and the management of dependencies to the container.
  • The underlying implementation logic of Spring IOC includes: defining bean configuration metadata (usually using XML or annotations), parsing configuration metadata, creating bean instances, handling dependencies between beans, and managing bean lifecycles.
  • When the application starts, the Spring IOC container will read the bean information defined in the configuration file or annotation, create the corresponding bean instance according to the configuration information, and store it in the container. When other components need to use these beans, the container will inject the relevant beans where they are needed through dependency injection.

2. The underlying implementation logic of Spring AOP:

  • Spring AOP is a technology based on aspect-oriented programming, which realizes code modularization and reuse by separating cross-cutting concerns (such as logs, transactions, security, etc.) from business logic.
  • The underlying implementation logic of Spring AOP includes: defining pointcuts (specify on which methods to apply aspect logic), defining aspects (codes containing aspect logic), weaving aspects into target objects, etc.
  • At runtime, Spring AOP weaves the aspect logic into the method call of the target object through dynamic proxy technology. When the method of the target object is called, the aspect logic will be executed before and after the method to realize the function of cross-cutting concerns.

3. The underlying implementation logic of Spring MVC:

  • Spring MVC is a web application framework based on the MVC design pattern. It realizes the decoupling of business logic and interface display by separating different layers of the application (model, view, controller).
  • The underlying implementation logic of Spring MVC includes: defining request-mapping rules, handling requests and responses, invoking business-logic handlers, and rendering views.
  • At application startup, the Spring MVC container initializes and loads its configuration, including request-mapping rules and view resolvers. When a client request arrives, the container locates the matching handler according to the mapping rules and invokes the corresponding method. The handler returns its result to the container, which renders it into the final view through the view resolver and sends the view back to the client.

To summarize, the underlying implementation of the Spring framework involves parsing configuration metadata, creating and managing objects, dependency injection, and dynamic proxies. With these techniques Spring delivers inversion of control, aspect-oriented programming, and MVC-based web development, letting developers focus on business logic and improving the maintainability and extensibility of their code.
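The IOC idea of "the container creates beans and hands out managed instances" can be reduced to a toy sketch (this is NOT Spring's actual code; `TinyIocContainer` and the bean classes are invented names, and real Spring parses configuration metadata and injects dependencies automatically rather than by manual assignment):

```java
import java.util.HashMap;
import java.util.Map;

// A toy illustration of inversion of control: the container creates beans via
// reflection, registers them by name, and callers look them up instead of
// constructing them. NOT Spring's actual implementation.
public class TinyIocContainer {
    private final Map<String, Object> beans = new HashMap<>();

    // "Parse configuration and create the bean" reduced to: instantiate via the no-arg constructor.
    public void register(String name, Class<?> clazz) {
        try {
            beans.put(name, clazz.getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot create bean " + name, e);
        }
    }

    // Callers receive the container-managed instance instead of creating their own.
    public Object getBean(String name) {
        return beans.get(name);
    }

    // Example bean classes (illustrative names)
    public static class UserRepository { }
    public static class UserService {
        UserRepository repository; // dependency to be supplied by the container
        public boolean isWired() { return repository != null; }
    }

    public static void main(String[] args) {
        TinyIocContainer container = new TinyIocContainer();
        container.register("userRepository", UserRepository.class);
        container.register("userService", UserService.class);

        // "Dependency injection" reduced to a direct field assignment for the sketch
        UserService service = (UserService) container.getBean("userService");
        service.repository = (UserRepository) container.getBean("userRepository");

        System.out.println(service.isWired()); // true: the dependency came from the container
    }
}
```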

37. Briefly describe the design patterns used in frameworks you are familiar with

Commonly used Java frameworks employ many design patterns. Below are some common frameworks and examples of the patterns they use:

1. Spring framework:

  • Dependency Injection : the Spring framework uses the dependency-injection pattern, wiring objects in from the outside to achieve decoupling and flexibility.
  • Singleton : Spring beans are singletons by default, ensuring a single instance throughout the application.

2. Apache Kafka:

  • Publish-Subscribe : Kafka delivers and processes messages with the publish-subscribe pattern; producers publish messages to topics (Topic), and consumers subscribe to topics to receive them.
  • Adapter : Kafka provides adapters for interacting with different data sources, such as Kafka Connect for exchanging data with external systems.

3. MyBatis framework:

  • Data Access Object (DAO) : The MyBatis framework uses the DAO design pattern to provide a simplified data access interface by encapsulating database operations.

4. Spring Boot framework:

  • Builder mode (Builder) : Spring Boot uses the builder mode to build the configuration and environment of the application, creating and configuring objects through chain calls.
  • Facade : The Spring Boot framework provides simplified configuration and automated functions, hiding the underlying complexity and making it easier for developers to use and manage applications.

These are just some common examples. In fact, there are many design patterns used in Java frameworks. Different frameworks may use different design patterns to solve specific problems. The use of design patterns can improve code maintainability, flexibility, and scalability, making the development process more efficient and standardized.

38. Talk about the design patterns used in the project

In common Java projects, we often use the following design patterns:

  1. Singleton Pattern : Used to ensure that a class has only one instance and provide a global access point. Common application scenarios include database connection pools, thread pools, etc.
  2. Factory Pattern : An interface for creating objects, but deferring the concrete instantiation logic to subclasses. Common application scenarios include log library, database driver, etc.
  3. Observer Pattern : Defines a one-to-many dependency relationship between objects, so that when an object changes state, all objects that depend on it will be notified and automatically updated. Common application scenarios include event listeners, message queues, etc.
  4. Adapter Pattern (Adapter Pattern) : Convert the interface of a class into another interface expected by the client, so that classes that could not work together due to incompatible interfaces can work together. Common application scenarios include adapting different versions of APIs to a unified interface.
  5. Strategy Pattern (Strategy Pattern) : Defines a series of algorithms and encapsulates each algorithm so that they can replace each other. Common application scenarios include sorting algorithms, payment methods, etc.
  6. Template Method Pattern : Defines the skeleton of an algorithm, deferring the implementation of some steps to subclasses. Common application scenarios include lifecycle methods in the framework, algorithm flow, etc.
  7. Decorator Pattern : Dynamically add additional responsibilities to an object without changing its interface. Common application scenarios include wrapper classes for IO streams, logging, etc.
  8. Builder Pattern : Separate the construction process of a complex object from its representation, so that the same construction process can create different representations. Common application scenarios include building complex data objects, configuration objects, etc.
  9. Iterator Pattern : Provides a way to sequentially access the elements of an aggregate object without exposing its internal representation. Common application scenarios include traversal and search of collection classes.
  10. Proxy Pattern : Provide a proxy for other objects to control access to this object. Common application scenarios include remote agents, virtual agents, etc.

These design patterns are proposed to solve specific problems and are widely used in practical projects. Different design patterns can be selected according to specific needs to improve the readability, maintainability and scalability of the code.
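As one concrete example, the singleton pattern mentioned above (point 1, e.g. for a connection pool) is often implemented with double-checked locking; here is a minimal sketch, with `ConnectionPool` as an illustrative class name:

```java
// Minimal thread-safe lazy singleton via double-checked locking.
// volatile prevents other threads from seeing a partially constructed instance.
public class ConnectionPool {
    private static volatile ConnectionPool instance;

    private ConnectionPool() { } // private constructor blocks external instantiation

    public static ConnectionPool getInstance() {
        if (instance == null) {                     // first check: no locking on the fast path
            synchronized (ConnectionPool.class) {
                if (instance == null) {             // second check: under the lock
                    instance = new ConnectionPool();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Every caller observes the same instance
        System.out.println(ConnectionPool.getInstance() == ConnectionPool.getInstance()); // true
    }
}
```

Simpler alternatives such as a static final field (eager initialization) or an enum singleton are also common and avoid the double-checked locking subtleties.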

39. Talk about the main components of Netty

Netty is a high-performance network programming framework that provides a set of powerful, easy-to-use components and tools for building scalable, high-performance network applications. The main components of Netty include:

  1. Channel (channel) : Channel is the core abstraction of Netty, which represents a network connection and can be used to read and write data. Channel provides asynchronous, event-driven I/O operations.
  2. EventLoop (event loop) : EventLoop is Netty's event processing mechanism, which is responsible for handling all I/O events and executing tasks. Each Channel is bound to an EventLoop for handling all events on the Channel.
  3. ChannelHandler (channel processor) : ChannelHandler is Netty's event processor, which is responsible for processing various events on the Channel, such as connection establishment, data reading and writing, etc. Developers can handle specific business logic by implementing a custom ChannelHandler.
  4. ChannelPipeline (channel pipeline) : ChannelPipeline is the container of ChannelHandler, which is responsible for managing and scheduling the execution order of ChannelHandler. When an event is triggered, it will start from the head of the ChannelPipeline and call the event processing method of each ChannelHandler in turn.
  5. Codec (codec) : Codec is Netty's codec, which is responsible for converting data between the network and the application. Netty provides a series of codecs, including length-based decoders, string codecs, object serialization codecs, and more.
  6. Bootstrap (bootstrap class) : Bootstrap is Netty's bootstrap class for configuring and starting Netty applications. It provides a set of methods to set parameters such as the event-loop group, Channel type, and ChannelHandler.
  7. ChannelFuture (channel Future) : ChannelFuture is the result of Netty's asynchronous operation, which represents an operation that has not yet been completed. Through ChannelFuture, you can get the result of the operation, add listeners, etc.

These components together constitute the core architecture of Netty, and through their collaborative work, you can easily build high-performance, scalable network applications.

Nien, the 40-year-old architect, reminds you: Netty is not only an absolute focus of interviews, but also an absolute difficulty.

It is recommended that you master it in depth and in detail. For specifics, see the PDF of "Nin Java Interview Collection - Topic 25: Netty Interview Questions", which gives a systematic and comprehensive introduction to Netty.

If you want to put Netty combat into your resume, you can ask Nien for guidance.

40. When using dubbo to make remote calls, the consumer needs several threads

When using Dubbo for remote calls, the consumer needs to use multiple threads to handle different tasks. Specifically, the consumer needs to use the following threads:

  1. Calling (main) thread : The consumer's business thread initiates the call and hands the request off for sending to the provider; for synchronous calls it also waits for the provider's response and returns it to the caller.
  2. IO thread : The IO thread is responsible for processing network IO operations, including sending requests and receiving responses. These threads usually use NIO (non-blocking IO) technology, which can efficiently handle a large number of concurrent requests.
  3. Thread pool : The consumer can also configure a thread pool to process the business logic of the consumer. When the consumer receives the response from the provider, it can hand the response to threads in the thread pool for processing. This can avoid blocking the main thread and improve the overall concurrent processing capability.

It should be noted that the number of threads can be configured according to specific needs. Usually, the size of the thread pool can be determined according to the load and performance requirements of the consumer. Larger thread pools can handle more concurrent requests, but also consume more system resources. Therefore, it needs to be weighed and adjusted according to the actual situation.

To sum up, when using Dubbo for remote calls, the consumer needs the calling thread, IO threads, and a thread pool to process requests and responses, and the concurrency capacity can be configured according to actual needs.

41. Talk about memory allocation and optimization

Memory allocation is an important link in a computer system, it involves how to allocate and manage memory resources for programs. The following is a detailed description of memory allocation and optimization:

1. Memory allocation method:

  • Stack : The stack is a memory area used to store local variables and function call information. Its allocation and release is done automatically by the compiler, which has a faster allocation and release speed, but a smaller capacity.
  • Heap : The heap is an area used to store dynamically allocated objects. Allocation is requested by the program, and release is either controlled manually by the developer (as in C/C++) or handled by the garbage collector (as in Java). The heap has a large capacity, but allocation and release are slower than on the stack.
  • Static storage area (Static Storage) : The static storage area is used to store global variables and static variables. It is allocated when the program starts and is not released until the end of the program.

2. Memory allocation optimization:

  • Use an appropriate data structure : Choosing an appropriate data structure for your problem can reduce memory usage. For example, using an array instead of a linked list can reduce the overhead of pointers.
  • Avoid memory fragmentation : Memory fragmentation refers to the existence of some scattered unused space in memory. Memory fragmentation can be reduced by using memory pools or memory allocation algorithms (such as allocators).
  • Release unused memory in time : When an object is no longer needed, the memory occupied by it is released in time so that other objects can use the memory.
  • Use object pooling : Object pooling is a technique that pre-creates and stores multiple objects in memory. By reusing these objects, the overhead of memory allocation and deallocation can be reduced.
  • Compressed memory : Some compression algorithms can compress data in memory, thereby reducing memory usage. For example, using Huffman coding or dictionary compression algorithms.

3. Notes on memory allocation:

  • Avoid memory leaks : A memory leak refers to a program that does not release memory properly after using it, making the memory no longer usable by other objects. It is necessary to pay attention to releasing unused memory in time to prevent memory leaks.
  • Avoid memory overflow : Memory overflow means that the memory requested by the program exceeds the memory resources that the system can provide. It is necessary to reasonably estimate the memory usage of the program, and do a good job in memory management and optimization to avoid memory overflow problems.

To sum up, memory allocation and optimization is a process that comprehensively considers performance and resource utilization. By selecting appropriate data structures, releasing memory in time, and using object pools and other technical means, the memory usage efficiency and performance of the program can be improved. At the same time, you need to pay attention to avoiding problems such as memory leaks and memory overflows to ensure the stability and reliability of the program.
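The object-pooling technique from point 2 above can be sketched in a few lines (a minimal, single-threaded illustration; `SimplePool` is an invented name, and a production pool would add thread safety and a capacity limit):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal object pool sketch: reuse pre-created objects to cut
// allocation and deallocation overhead. Not thread-safe.
public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory, int preCreate) {
        this.factory = factory;
        for (int i = 0; i < preCreate; i++) {
            idle.push(factory.get()); // pre-create objects up front
        }
    }

    /** Borrow an object, creating a new one only when the pool is empty. */
    public T borrow() {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    /** Return an object to the pool so later borrowers can reuse it. */
    public void giveBack(T obj) {
        idle.push(obj);
    }

    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(StringBuilder::new, 2);
        StringBuilder sb = pool.borrow();
        pool.giveBack(sb);
        // Reuse: the same object comes back instead of a fresh allocation
        System.out.println(pool.borrow() == sb); // true
    }
}
```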

42. How do you prevent users from repeatedly redeeming coupons?

To prevent coupons from being swiped repeatedly, the following methods can be considered:

  1. Limit the number of times of use : In the design of the coupon, a limit on the number of times of use can be set. Every time a coupon is used, the system checks to see if the coupon has been used the limit and rejects it if it has been used.
  2. Bind user information : Bind the coupon with the user to ensure that each user can only use it once. When a user uses a coupon, the system will verify whether the user has already used the coupon, and if it has been used, it will refuse to use it again.
  3. Design validity period : Set a validity period for coupons to ensure that they can only be used within the validity period. When a user uses a coupon, the system will verify whether the current time is within the validity period of the coupon, and if not, refuse to use it.
  4. Use unique identifiers : Generate a unique identifier for each coupon, store it in a database or cache. When the user uses the coupon, the system will check whether the identifier has been used, and if it has been used, it will refuse to use it again.
  5. Prevent malicious abuse : Monitor system logs to detect abnormal behavior, for example the same user redeeming coupons very frequently in a short period, or many redemptions from the same IP address. When abnormal behavior is found, take measures such as temporarily blocking that user from using coupons.

Choose the appropriate anti-duplicate-redemption methods according to the specific business scenario and system architecture, and combine them as appropriate.
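Point 4 above (a unique identifier checked atomically) can be sketched as follows. This is a minimal in-memory illustration with invented names (`CouponService`, `redeem`); a real system would enforce the same "user + coupon" uniqueness with a database unique constraint or Redis SETNX so the check survives restarts and works across instances:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of duplicate-redemption protection via a unique key.
public class CouponService {
    private final Set<String> redeemed = ConcurrentHashMap.newKeySet();

    /** Returns true on the first redemption, false on any repeat. */
    public boolean redeem(String userId, String couponId) {
        // Set.add() is atomic here: exactly one concurrent caller wins per key
        return redeemed.add(userId + ":" + couponId);
    }

    public static void main(String[] args) {
        CouponService service = new CouponService();
        System.out.println(service.redeem("user-1", "coupon-9")); // true: first use
        System.out.println(service.redeem("user-1", "coupon-9")); // false: repeat rejected
        System.out.println(service.redeem("user-2", "coupon-9")); // true: different user
    }
}
```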

For details, please refer to Nien’s special article: Meituan is too ruthless: the interface is maliciously flashed 10Wqps, how to prevent it?

43. Given an integer array whose elements are not repeated, produce the elements in ascending order

To implement an integer array in which the array elements are not repeated and are arranged in ascending order, the following methods and principles can be used:

  1. Create an empty integer array to store the final result.
  2. Iterates over the given integer array.
  3. For each element, check whether it already exists in the resulting array.
  4. If not present, the element is inserted into the correct position in the resulting array, maintaining ascending order.
  5. Returns the resulting array as the final result.

Here is a sample code implemented in Java:

import java.util.Arrays;

public class SortedArray {

    public static int[] createSortedArray(int[] arr) {
        int[] result = new int[arr.length];
        int size = 0; // number of elements currently in the result

        for (int value : arr) {
            // find the insertion position that keeps the result ascending
            int pos = 0;
            while (pos < size && result[pos] < value) {
                pos++;
            }
            // skip the element if it is already present
            if (pos < size && result[pos] == value) {
                continue;
            }
            // shift the larger elements right and insert at the correct position
            System.arraycopy(result, pos, result, pos + 1, size - pos);
            result[pos] = value;
            size++;
        }

        return Arrays.copyOf(result, size);
    }

    public static void main(String[] args) {
        int[] arr = {1, 3, 2, 5, 4};
        int[] sortedArr = createSortedArray(arr);
        System.out.println(Arrays.toString(sortedArr)); // [1, 2, 3, 4, 5]
    }
}

In the code above, the createSortedArray method builds an ascending array. The result array holds the final elements and size tracks how many have been inserted so far. For each input element we scan for its insertion position; if the element already exists in the result it is skipped, otherwise the larger elements are shifted one slot to the right and the element is inserted at its correct position, keeping the result in ascending order. Finally, Arrays.copyOf truncates the result array to its actual size and returns it.

In the example code, the given array is {1, 3, 2, 5, 4}, and the final result is {1, 2, 3, 4, 5}.

44. Descending order, find the maximum value

To implement descending order and find the maximum value, the following steps and methods can be used:

  1. Create an integer array.
  2. Store a set of integers in an array using a loop.
  3. Sorts an array in descending order using a sorting algorithm such as bubble sort, selection sort, or quick sort.
  4. Output the first element of the sorted array, which is the maximum value.

Here is a sample code implemented in Java:

import java.util.Arrays;

public class DescendingOrder {

    public static void main(String[] args) {
        int[] numbers = {5, 2, 9, 1, 7}; // sample integer array

        // Arrays.sort() sorts ascending, so reverse the array in place
        // afterwards to obtain descending order
        Arrays.sort(numbers);
        for (int i = 0; i < numbers.length / 2; i++) {
            int temp = numbers[i];
            numbers[i] = numbers[numbers.length - 1 - i];
            numbers[numbers.length - 1 - i] = temp;
        }

        // print the sorted array
        System.out.println("Array in descending order:");
        for (int number : numbers) {
            System.out.print(number + " ");
        }

        // the first element of a descending array is the maximum
        System.out.println("\nMaximum: " + numbers[0]);
    }
}

The sample code first sorts the array in ascending order with Arrays.sort(), then reverses it in place to obtain descending order. The first element of the resulting array is the maximum value. Finally, the sorted array and the maximum are printed.

Note that this is just one way to do it, there are other sorting algorithms and techniques to achieve descending order and find the maximum value.

Nien says at the end

In Nien's (50+) reader community, many, many small partners need to enter a big factory and get a high salary.

The Nien team will keep combining the real interview questions of major companies to sort out a learning path for you and show what you need to learn.

In the previous article, I used many articles to introduce the real questions of Ali, Baidu, Byte, and Didi:

" It exploded... 40 questions on Jingdong's life, and 50W+ after passing "

" Questions are numb...Ali asked 27 questions at the same time, and 60W+ after passing "

" Baidu madly asked for 3 hours, Dachang got an offer, the guy is so ruthless!" "

" Are you too ruthless: face an advanced Java, how hard and ruthless it is "

" One hour of byte madness, the guy got the offer, it's too ruthless!" "

" Accept a Didi Offer: From the three experiences of the guy, what do you need to learn? "

These real questions will be included in the most complete and continuously upgraded PDF e-book " Nin's Java Interview Collection " in history.

This article is included in the V84 edition of "Nin's Java Interview Collection".

Basically, once you thoroughly understand Nien's "Nin Java Interview Collection", it is easy to get offers from big companies. In addition, if you have any requests for the next issue of big-company interview questions, you can send a message to Nien.

Recommended related reading

" Starting from 0, Handwriting Redis "

" Starting from 0, Handwriting MySQL Transaction ManagerTM "

" Starting from 0, Handwriting MySQL Data Manager DM "

" Tencent is too ruthless: 4 billion QQ accounts, given 1G memory, how to deduplicate? "

"Nin's Architecture Notes", "Nin's High Concurrency Trilogy", "Nin's Java Interview Collection" PDF, please go to the following official account [Technical Freedom Circle] to take it↓↓↓


Origin blog.csdn.net/crazymakercircle/article/details/131678859