Network interview questions: these 12 are all you need to know

Preface

Due to space limitations, this article selects only the 12 most representative network interview questions.

Without further ado, let's begin.


1. Talk about your understanding of the TCP/IP four-layer model and the OSI seven-layer model?

To improve versatility and compatibility, computer networks are designed as layered structures, with each layer following certain rules.

The result is OSI, an abstract reference model for network communication; computer systems that follow this standard can interconnect with one another.

Physical layer : connects computers through physical media such as network cables and optical fiber. The data transmitted is a bit stream, e.g. 0101010100.

Data link layer : first encapsulates the bit stream into data frames, grouping the 0s and 1s. Once computers are connected, data is transmitted through the network card, which carries a globally unique MAC address. Frames are broadcast to all computers on the LAN, and each computer compares the MAC address in the frame with its own to decide whether the frame is addressed to it.

Network layer : broadcasting is too inefficient. To distinguish which MAC addresses belong to the same subnet, the network layer defines IP addresses and subnet masks. ANDing an IP address with the subnet mask tells you whether two hosts are on the same subnet; transmission then goes through routers and switches. IP is a network-layer protocol.
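The AND operation described above can be sketched in Python; the helper name and example addresses here are ours, for illustration:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, mask: str) -> bool:
    """Return True if both IPs fall in the same subnet under the given mask."""
    a = int(ipaddress.IPv4Address(ip_a))
    b = int(ipaddress.IPv4Address(ip_b))
    m = int(ipaddress.IPv4Address(mask))
    # AND each address with the mask; equal network parts => same subnet
    return (a & m) == (b & m)

print(same_subnet("192.168.1.10", "192.168.1.200", "255.255.255.0"))  # True
print(same_subnet("192.168.1.10", "192.168.2.20", "255.255.255.0"))   # False
```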

Transport layer : with the MAC and IP addresses from the layers below, a port number is still needed to determine which process a packet belongs to; communication is established through ports. TCP and UDP are protocols at this layer.

Session layer : responsible for establishing and terminating sessions

Presentation layer : converts data into formats other computers can understand, such as text, video, and images.

Application layer : the highest layer, facing the user; it provides network applications and the interface ultimately presented to the user

TCP/IP is a four-layer structure, which is equivalent to a simplification of the OSI model.

  1. The data link layer, also called the network access layer or network interface layer, combines the physical and data link layers of the OSI model and connects computers.
  2. The network layer, also called the IP layer, handles the transmission and routing of IP packets and establishes host-to-host communication.
  3. The transport layer provides end-to-end communication between two hosts.
  4. The application layer combines the session, presentation, and application layers of OSI and provides common protocol specifications such as FTP, SMTP, and HTTP.

In summary: the physical layer connects computers physically, the data link layer groups the bit stream into frames, the network layer establishes host-to-host communication, the transport layer establishes port-to-port communication, and the application layer is ultimately responsible for establishing connections, converting data formats, and presenting the result to the user.

2. Talk about the TCP 3-way handshake process?

The server must listen on a port before a connection can be established, so its initial state is LISTEN.

  1. The client initiates the connection by sending a SYN packet; after sending, its state becomes SYN_SENT
  2. When the server receives the SYN and agrees to establish the connection, it returns an ACK together with its own SYN packet; after sending, its state becomes SYN_RCVD
  3. When the client receives the server's SYN+ACK, its state changes to ESTABLISHED and it returns an ACK to the server; once the server receives that ACK, its state also changes to ESTABLISHED, and the connection is established.
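A minimal Python sketch of connection establishment on loopback; note the handshake itself is carried out by the OS kernel inside `connect()` and `accept()`, not by application code:

```python
import socket
import threading

# The three-way handshake is performed by the kernel: the client's
# connect() sends the SYN, the listening server replies SYN+ACK, and
# connect() returns once the final ACK has been sent.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)                     # server state: LISTEN
port = server.getsockname()[1]

def accept_one():
    conn, _addr = server.accept()    # completes the handshake server-side
    conn.sendall(b"established")
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN -> SYN+ACK -> ACK happens here
reply = client.recv(64)
print(reply)                         # b'established'
client.close()
t.join()
server.close()
```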

3. Why three times? Wouldn't two or four be enough?

Because TCP is a full-duplex transmission mode and does not distinguish between client and server, establishing a connection is a two-way process.

With only two exchanges, the two-way connection cannot be confirmed. And since the server's SYN and ACK responses can be merged into a single packet, four exchanges are unnecessary.

Why does closing the connection take four waves? Because on teardown the ACK and FIN cannot be sent together: the two sides finish sending their data at different times.

4. What is the process of the four waves?

  1. The client sends a FIN packet to the server and enters the FIN_WAIT_1 state, meaning the client has no more data to send
  2. After the server receives it, it returns an ACK and enters the CLOSE_WAIT state, waiting to close, because the server may still have data left to send (the client, on receiving this ACK, enters FIN_WAIT_2)
  3. Once the server has finished sending its data, it sends a FIN to the client and enters the LAST_ACK state
  4. After the client receives the FIN, it enters the TIME_WAIT state and replies with an ACK. The server enters the CLOSED state as soon as it receives that ACK, and its side of the connection is closed. The client, however, must wait 2MSL (twice the maximum segment lifetime) before entering CLOSED.

5. Why wait for the time of 2MSL to close?

  1. To ensure the connection closes reliably. If the server does not receive the last ACK, it will resend the FIN, and the client must still be around to answer it.
  2. To avoid data confusion caused by port reuse. If the client entered CLOSED immediately and established a new connection to the server with the same port number, delayed segments from the previous connection could still arrive over the network and get mixed into the new one.

6. How does TCP ensure the reliability of the transmission process?

Checksum : the sender computes a checksum before sending the data, and the receiver computes it again on receipt. If the two differ, the transmission was corrupted.
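As an illustration, here is a sketch of the standard Internet checksum (the one's-complement sum used by TCP/IP headers); the example payload is ours:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum over 16-bit words, as used by TCP/IP headers."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

packet = b"hello, tcp"                 # even-length payload for the demo
cs = internet_checksum(packet)
# Receiver check: recomputing over data plus the checksum yields 0
print(hex(cs), internet_checksum(packet + cs.to_bytes(2, "big")))
```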

Acknowledgments and sequence numbers : TCP numbers the data it transmits, and every ACK the receiver returns carries an acknowledgment number.

Timeout retransmission : if the sender receives no ACK within a certain period after sending data, it retransmits the data.

Connection management : the three-way handshake and four-wave teardown.

Flow control : the TCP header contains a 16-bit window size. The receiver fills in its current window when returning an ACK, and the sender adjusts its sending rate according to the window size in that segment.

Congestion control : when data is first sent, the congestion window is 1; each time an ACK is received, the congestion window grows by 1. The actual send window is then the smaller of the congestion window and the receive window. If a timeout triggers a retransmission, the congestion window is reset to 1. The purpose is to balance efficiency and reliability during transmission.
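The growth rule above can be modeled in a few lines. This is a toy model of only the simplified rule stated here; real TCP uses slow start, congestion avoidance, and fast recovery:

```python
def simulate_cwnd(events, rwnd=8):
    """Toy model: cwnd +1 per ACK, reset to 1 on timeout.
    Returns the effective send window (min of cwnd and rwnd) after each event."""
    cwnd = 1
    windows = []
    for ev in events:
        if ev == "ack":
            cwnd += 1
        elif ev == "timeout":
            cwnd = 1            # congestion detected: start over
        windows.append(min(cwnd, rwnd))
    return windows

print(simulate_cwnd(["ack", "ack", "ack", "timeout", "ack"]))
# -> [2, 3, 4, 1, 2]
```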

7. What is the process of a browser requesting a URL?

  1. First, the domain name is resolved to an IP address through a DNS server, and the IP plus subnet mask determine whether the target is on the same subnet
  2. An HTTP request message is constructed at the application layer; a TCP/UDP header is added at the transport layer, an IP header at the network layer, and an Ethernet frame header at the data link layer
  3. The data is forwarded by routers and switches until it reaches the target server, which parses the layers in reverse, recovers the HTTP message, and returns a response according to the logic of the corresponding program.
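Step 2's application-layer message can be built by hand, which makes the layering concrete. The host name below is illustrative; the lower-layer headers (TCP/IP/Ethernet) are added by the kernel and NIC, not by application code:

```python
def build_http_request(host: str, path: str = "/") -> bytes:
    """Construct a minimal HTTP/1.1 GET request message."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",                      # blank line terminates the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_http_request("example.com", "/index.html")
print(request.decode())
```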

8. Do you know how HTTPS works?

  1. The user requests the https site through the browser; the server receives the request, selects encryption and hash algorithms that the browser supports, and returns its digital certificate, which includes the issuing authority, URL, public key, and validity period
  2. The browser verifies the certificate and shows a warning if there is a problem. Otherwise it generates a random number X, encrypts it with the public key from the certificate, and sends it to the server
  3. After the server receives it, it decrypts with its private key to recover X, then uses X as a symmetric key to encrypt the page content and return it to the browser
  4. The browser decrypts with X and the previously agreed algorithm to obtain the final page content
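The role of the shared random number X can be illustrated with a toy symmetric cipher. This is not how real TLS encrypts (TLS negotiates vetted ciphers such as AES-GCM); the hash-based XOR keystream here only shows X acting as the shared key in steps 3 and 4:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

x = secrets.token_bytes(16)               # the random number X (step 2)
page = b"<html>secret page</html>"
ciphertext = xor_encrypt(x, page)         # server encrypts with X (step 3)
plaintext = xor_encrypt(x, ciphertext)    # browser decrypts with X (step 4)
print(plaintext)
```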

9. What are the ways to implement load balancing?

DNS : the simplest method, generally used for geographic-level load balancing: DNS resolution returns different IP addresses to users in different regions. It is simple, but scalability is poor and control sits with the domain name service provider.

HTTP redirection : load balancing achieved by modifying the Location header of the HTTP response, i.e. an HTTP 302 redirect. This method hurts performance and adds request latency.

Reverse proxy : a model operating at the application layer, also known as seven-layer load balancing ; a common example is Nginx, whose throughput can generally reach tens of thousands of requests. This method is simple to deploy, low in cost, and easy to extend.

IP : a mode operating at the network and transport layers, also known as four-layer load balancing , which achieves its effect by rewriting the IP address and port of packets. A common example is LVS (Linux Virtual Server), which typically supports on the order of 100,000 concurrent connections.

By type, load balancing can also be divided into DNS load balancing, hardware load balancing, and software load balancing.

Among these, hardware load balancing is expensive but performs best, reaching millions of connections; software load balancing includes Nginx and LVS.
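The scheduling side of software load balancing can be sketched with the simplest strategy, round-robin; the class name and backend addresses are made up for illustration:

```python
import itertools

class RoundRobinBalancer:
    """Hand each request to the next backend in rotation."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self) -> str:
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
chosen = [lb.pick() for _ in range(4)]
print(chosen)   # wraps back to the first backend on the fourth pick
```

Production balancers like Nginx add weights, health checks, and session affinity on top of this basic rotation.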

10. Tell me about the difference between BIO/NIO/AIO?

BIO : synchronous blocking IO. The server dedicates one processing thread to each client connection; connections that cannot be assigned a thread are blocked or rejected. In short: one connection, one thread .

NIO : synchronous non-blocking IO, based on the Reactor model. The client communicates through channels, which support read and write operations; a multiplexer (selector) polls the channels registered on it and dispatches IO operations as they become ready. A single thread is then enough to handle the IO of many connections: one request, one thread .

AIO : asynchronous non-blocking IO, one step beyond NIO. The operating system completes the request and then notifies the server to start a thread for processing: one effective request, one thread .
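The NIO idea can be demonstrated with Python's `selectors` module, which plays the role of the multiplexer: one thread serves several non-blocking sockets, and the selector reports which are ready. The socket pair is used so the demo needs no real network:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
server, client = socket.socketpair()   # a connected pair, self-contained demo
server.setblocking(False)
client.setblocking(False)

def echo(sock):
    sock.sendall(sock.recv(1024))      # handler: echo whatever arrived

# Register both ends; the echo handler is attached to the server side.
sel.register(server, selectors.EVENT_READ, echo)
sel.register(client, selectors.EVENT_READ, None)

client.sendall(b"ping")

reply = b""
while not reply:                       # loop until the echo comes back
    for key, _events in sel.select(timeout=1):
        if key.data is not None:
            key.data(key.fileobj)      # dispatch the ready event to its handler
        else:
            reply = client.recv(1024)

print(reply)                           # b'ping'
```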

11. How do you understand synchronization and blocking?

First, an IO operation can be considered to consist of two parts:

  1. Initiate IO request
  2. Actual IO read and write operations

Synchronous vs. asynchronous refers to the second part, the actual IO read/write: if the operating system completes it for you and then notifies you, it is asynchronous; otherwise it is synchronous.

Blocking vs. non-blocking refers to the first part, initiating the IO request: in NIO, initiating an IO request through a channel returns immediately, so it is non-blocking.

12. Talk about your understanding of the Reactor model?

The Reactor model contains two components:

  1. Reactor: responsible for monitoring and responding to IO events; when an IO event is detected, it is dispatched to a Handler for processing.
  2. Handler: bound to an IO event and responsible for handling it.

It contains several implementation methods:

Single-threaded Reactor

In this mode, the reactor and all handlers run in one thread: if one handler blocks, no other handler can execute, and the performance of multiple cores goes unused.

Single Reactor, multiple threads

Since decode, compute, and encode are not IO operations, the idea of the multi-threaded Reactor is to hand these non-IO operations to separate threads, exploiting multiple cores while keeping IO on the Reactor thread.

However, a single Reactor is responsible for all event monitoring and response work. If there are too many connections, there may still be performance problems.

Multiple Reactors, multiple threads

To solve the performance problem of a single Reactor, the multi-Reactor model was created: a mainReactor accepts connections, while multiple subReactors are responsible for data reads and writes.
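The two components of the pattern can be sketched as a minimal single-threaded Reactor; the class names are ours, for illustration, and the socket pair stands in for real network channels:

```python
import selectors
import socket

class EchoHandler:
    """Handler: bound to a socket's read events."""
    def __init__(self, sock):
        self.sock = sock

    def handle(self):
        data = self.sock.recv(1024)
        if data:
            self.sock.sendall(data)        # respond to the event: echo it back

class Reactor:
    """Reactor: watches registered channels and dispatches ready events."""
    def __init__(self):
        self.sel = selectors.DefaultSelector()

    def register(self, sock, handler):
        sock.setblocking(False)
        self.sel.register(sock, selectors.EVENT_READ, handler)

    def run_once(self, timeout=1.0):
        for key, _events in self.sel.select(timeout):
            key.data.handle()              # dispatch to the bound Handler

server, client = socket.socketpair()       # connected pair: no real network needed
reactor = Reactor()
reactor.register(server, EchoHandler(server))

client.sendall(b"event")
reactor.run_once()                         # Reactor detects the event, dispatches it
client.settimeout(1)
reply = client.recv(1024)
print(reply)                               # b'event'
```

In the multi-Reactor variant, one Reactor of this kind would run `accept()` handlers while others run read/write handlers, each in its own thread.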


- END -



Origin blog.csdn.net/linuxguitu/article/details/111867943