Basic knowledge of computer network and interview summary-this should be the most comprehensive

After compiling the original version and going through many interviews, I found that some of the content was redundant or impractical and some topics were missing, so I have updated this post, in particular adding IP address calculations, and here it makes its debut again~

1 Basic concepts

1.1 TCP/IP protocol stack, OSI reference model

Insert image description here

Insert image description here

1.2 Briefly introduce the functions of each layer

  • The application layer provides services to user-facing software through various protocols, and is divided into server and client sides. The user operates the software and issues service requests, which form the corresponding processes and data. There are many application layer protocols on the Internet, such as the Domain Name System DNS, the HTTP protocol that supports World Wide Web applications, the SMTP protocol that supports email, and so on. The data unit exchanged at the application layer is called a message.
  • The transport layer identifies the communicating process on each host by its port number and transmits the data using the TCP or UDP protocol. It provides a general data transfer service for communication between processes on two hosts.
  • The network layer selects routes. Its task is to choose suitable inter-network routes and switching nodes so that data is delivered in time. When sending data, the network layer encapsulates the segments or user datagrams produced by the transport layer into packets for transmission. In the TCP/IP architecture the network layer uses the IP protocol, so the packet is also called an IP datagram, or datagram for short.
  • The data link layer transmits data over the link. When transmitting data between two adjacent nodes, the data link layer assembles the IP datagrams handed down by the network layer into frames, and the link between the two adjacent nodes transmits these frames. Each frame includes the data and the necessary control information (such as synchronization information, address information, and error control).
  • Finally, the physical layer transparently transmits the bit stream between adjacent nodes, shielding the differences between specific transmission media and physical devices as much as possible.

Insert image description here

1.3 Common protocols

  • Application layer: common protocols:
    • HTTP (port 80): Hypertext Transfer Protocol
    • DNS (port 53): running on UDP, domain name resolution service
    • FTP (port 21): File Transfer Protocol
    • SSH (port 22): remote login
    • TELNET (port 23): remote login
    • SMTP (port 25): send email
    • POP3 (port 110): receive emails
  • Transport layer: TCP/UDP; SSL/TLS (HTTPS is encrypted based on this)
  • Network layer: IP, ARP, RIP, ICMP (the protocol used by ping)
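
As a quick cross-check of the well-known ports listed above, Python's standard socket module can look them up in the local services database (a minimal sketch; the results depend on the system's /etc/services, so availability of each entry is an assumption about the local environment):

```python
import socket

# Look up the registered port for a few well-known service/protocol pairs.
# The answers come from the local services database (e.g. /etc/services).
for name, proto in [("http", "tcp"), ("ftp", "tcp"), ("ssh", "tcp"),
                    ("telnet", "tcp"), ("smtp", "tcp"), ("domain", "udp")]:
    print(f"{name}/{proto} -> port {socket.getservbyname(name, proto)}")
```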

2 application layer

2.1 What are the common status codes for HTTP requests?

  1. 2xx status codes: the operation succeeded. 200 OK
  2. 3xx status codes: redirection. 301 permanent redirect; 302 temporary redirect
  3. 4xx status codes: client error. 400 Bad Request; 401 Unauthorized; 403 Forbidden; 404 Not Found
  4. 5xx status codes: server error. 500 Internal Server Error; 501 Not Implemented; 502 Bad Gateway; 503 Service Unavailable; 504 Gateway Timeout
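
A minimal sketch of how a client observes these status codes with the standard library's urllib; the URLs are placeholders, and note that 4xx/5xx responses surface as HTTPError exceptions:

```python
import urllib.error
import urllib.request

def fetch_status(url: str) -> int:
    """Return the HTTP status code for a GET request to url."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status            # 2xx (redirects such as 301/302 are followed)
    except urllib.error.HTTPError as e:
        return e.code                     # 4xx / 5xx are raised as HTTPError

print(fetch_status("https://example.com/"))          # typically 200
print(fetch_status("https://example.com/missing"))   # typically 404
```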

2.2 Common HTTP methods

Insert image description here

What is the difference between GET and POST?

GET "reads" a resource. For example, Get an html file. Repeated reads should have no side effects on the data being accessed. post , such as submitting a form, will generally have an impact and cannot be cached

  • GET is idempotent and reads the same resource to obtain the same data. POST is not idempotent; GET is harmless when the browser rolls back, while POST will submit the request again.

  • GET is generally used to obtain/query resource information, while POST is generally used to update resource information.

  • GET parameters are passed through the URL, and POST is placed in the Request body.

  • Security: GET request parameters will be completely retained in the browser history, while parameters in POST will not be retained.

  • GET only allows ASCII characters, POST has no requirements for data type

  • There is a limit on the length of GET (operating system or browser), but there is no limit on the size of POST data.
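
To make the contrast above concrete, here is a hedged sketch with urllib against an assumed public echo service (httpbin.org is used only as an example endpoint, not part of the original text): the GET parameters travel in the URL query string, while the POST parameters travel in the request body.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://httpbin.org"   # assumed echo service; any similar endpoint works

# GET: parameters are appended to the URL (visible, cacheable, length-limited).
query = urllib.parse.urlencode({"q": "computer network", "page": "1"})
with urllib.request.urlopen(f"{BASE}/get?{query}", timeout=5) as resp:
    print(json.load(resp)["args"])       # the server echoes back the query parameters

# POST: parameters are carried in the request body, not in the URL.
body = urllib.parse.urlencode({"user": "alice", "password": "secret"}).encode()
req = urllib.request.Request(f"{BASE}/post", data=body, method="POST")
with urllib.request.urlopen(req, timeout=5) as resp:
    print(json.load(resp)["form"])       # the server echoes back the form fields
```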

2.3 How to establish requests between http client and server

1. TCP three-way handshake to establish connection

2. The web browser sends a request command to the web server, for example: GET /sample/hello.jsp HTTP/1.1

3. The web browser sends request header information describing itself, then sends a blank line to notify the server that the headers are finished. For a POST request, after the headers are sent the browser also sends the request body.

4. The web server responds with the protocol version number and a status code plus its description, for example: HTTP/1.1 200 OK

5. The Web server sends the response header information , and the server will also send data about itself and the requested document to the user along with the response.

6. The web server sends data to the browser: the actual data requested by the user, in the format described by the Content-Type response header.

7. The web server closes the TCP connection
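
The same steps can be reproduced "by hand" over a raw TCP socket; a minimal sketch (example.com is just a placeholder host) that sends the request line, the headers, and the blank line, then reads until the server closes the connection:

```python
import socket

HOST = "example.com"   # placeholder host; any HTTP server on port 80 works

# Step 1: TCP connection (the three-way handshake happens inside create_connection).
with socket.create_connection((HOST, 80), timeout=5) as sock:
    request = (
        f"GET / HTTP/1.1\r\n"        # step 2: request line
        f"Host: {HOST}\r\n"          # step 3: request headers
        f"Connection: close\r\n"
        f"\r\n"                      # blank line: headers finished (no body for GET)
    )
    sock.sendall(request.encode())

    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:                 # step 7: the server closed the TCP connection
            break
        chunks.append(data)

response = b"".join(chunks)
print(response.split(b"\r\n", 1)[0].decode())   # step 4: e.g. "HTTP/1.1 200 OK"
```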

Insert image description here

2.4 The difference between http and https

Insert image description here

2.5 What is the difference between Session and Cookie?

Session is a solution for maintaining state on the server side, and Cookie is a solution for maintaining state on the client side.

When the browser opens a web page it uses the HTTP protocol, which is stateless: each request knows nothing about the previous one and the two are unrelated. The advantage is speed, but it is inconvenient, and the Cookie appeared to meet this need.

Cookie

  • A technology that helps web sites retain visitor information: small files stored on the user's computer that save some of the site's data about the user.
  • It is set through the Set-Cookie HTTP response header. Cookies are organized and stored per website; once saved, every request the browser sends to that website carries these cookies, and the backend can then parse them.
  • For example, automatic login. After the first successful login, the server sends the browser a cookie containing the username and password, and the browser stores it. The next time the browser logs in, it sends the cookie along with the request headers; the server parses it and fills in the username and password directly, so the user does not have to enter them again.

Although cookies are convenient, they live on the user side: the storage size is limited, they are visible to the user, and they can be modified at will, which is very unsafe. So how can state be read safely and conveniently everywhere? This is where a new storage mechanism, the Session, was born.

Session: the process from opening the browser to closing the browser

When the client browser accesses the server, the server records information about that client on the server side in some form; this is the Session. When the client browser visits again, the server only needs to look up the client's state in the Session.

  • Session is a server-side storage space maintained by the application server. When a user connects to the server, a unique SessionID will be generated by the server, and the SessionID is used as an identifier to access the server-side Session storage space.
  • The data of SessionID is saved to the client and saved with Cookie. When the user submits the page, the SessionID will be submitted to the server to access the Session data.
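
A minimal sketch of this Cookie/Session interplay using the standard http.cookies module; the cookie name SESSIONID and the in-memory session store are illustrative assumptions, not part of the original text:

```python
import secrets
from http.cookies import SimpleCookie

# Server side: keep the real user data in a server-side store, and hand the
# browser only a random session id via the Set-Cookie response header.
session_store = {}
session_id = secrets.token_hex(16)
session_store[session_id] = {"user": "alice", "logged_in": True}

set_cookie = SimpleCookie()
set_cookie["SESSIONID"] = session_id
set_cookie["SESSIONID"]["httponly"] = True
print(set_cookie.output())                 # e.g. Set-Cookie: SESSIONID=...; HttpOnly

# Client side: on the next request the browser sends the cookie back.
cookie_header_value = f"SESSIONID={session_id}"

# Server side again: parse the Cookie header and look up the session state.
received = SimpleCookie(cookie_header_value)
print(session_store[received["SESSIONID"].value])
```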

2.6 What is the process from entering the URL to getting the page (the more detailed the better)?

  1. Enter the domain name www.baidu.com in the browser. The operating system first checks whether its local hosts file contains a mapping for this domain name; if so, it uses that IP address mapping and domain name resolution is complete.
  2. If it is not in the hosts file, the local DNS resolver cache is queried, and if it is, the address resolution is completed.
  3. If there is no local DNS resolver cache, search the local DNS server. If found, complete the resolution.
  4. If not, the local server will initiate a query request to the root domain name server. The root domain name server will tell the local domain name server which top-level domain name server to query.
  5. The local domain name server initiates a query request to the top-level domain name server, and the top-level domain name server tells the local domain name server which authority domain name server to find.
  6. The local domain name server initiates a query request to the authority domain name server, and the authority domain name server tells the local domain name server the IP address corresponding to www.baidu.com.
  7. The local domain name server tells the host the IP address corresponding to www.baidu.com.
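
From an application's point of view this whole chain is hidden behind a single resolver call; a small sketch using the socket module (the domain is the one from the example above):

```python
import socket

host = "www.baidu.com"   # the example domain from the steps above

# The OS resolver walks the chain above (hosts file, local cache, local DNS
# server, root/TLD/authoritative servers) and returns the final IP address.
print(socket.gethostbyname(host))

# getaddrinfo returns every address the resolver knows (IPv4 and IPv6).
for family, _, _, _, sockaddr in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```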

Insert image description here

2.7 Let’s talk about https authentication and authorization

The Hypertext Transfer Protocol (HTTP) is the protocol used to transmit and receive information over the Internet. HTTP uses a request/response model so information can flow between client and server quickly, but HTTP is insecure: the data transmitted between you and the web server can easily be eavesdropped.

HTTPS is a Hypertext Transfer Protocol based on Secure Sockets Layer. HTTPS uses the Secure Sockets Layer as a sub-layer on top of the HTTP application layer.

  1. Symmetric encryption algorithm, encryption and decryption use the same key

If I send you the key and it is intercepted on the way, the encryption is pointless. Both parties must know the key required by this algorithm, yet the key cannot be sent over the network in the clear.

  2. Asymmetric encryption algorithm: there is a pair of keys, a private key and a public key. Data encrypted with the private key can only be decrypted with the corresponding public key, and data encrypted with the public key can only be decrypted with the corresponding private key.

Insert image description here

3. Man-in-the-middle attack. The asymmetric encryption algorithm is much slower than the symmetric one, so consider using the asymmetric algorithm only to transmit the key of the symmetric algorithm, and then using that symmetric key for encrypted transmission. But there is still a problem.

Insert image description here

4. So now the question becomes how to prove that the public key belongs to the other party .

In the online world, such a credible certification center can also be established. This center can issue a certificate to everyone to prove a person's identity.

5. But how to transmit the certificate securely? What if the certificate is tampered with during delivery?

Bill can use a hash algorithm to generate a message digest from his public key and personal information, but if the whole thing, message and digest together, is replaced by an attacker, this alone does not help.

Insert image description here

6. " Digital Certificate "

The certification authority (CA) encrypts the message digest with its private key to form a signature, and merges the original information with this digital signature into a brand new thing called a "digital certificate". When Bill sends me his certificate, I use the same hash algorithm to generate the message digest again, then decrypt the digital signature with the CA's public key to obtain the digest created by the CA. By comparing the two, I know whether anyone has tampered with the certificate!

Insert image description here
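
The digest-comparison step at the heart of this scheme can be sketched with hashlib; this illustration only shows how a changed certificate produces a different digest, and it deliberately omits the CA signature (which is what stops an attacker from simply recomputing the digest). The certificate content below is a made-up placeholder.

```python
import hashlib

def digest(data: bytes) -> str:
    """Message digest used to detect tampering (SHA-256 chosen for illustration)."""
    return hashlib.sha256(data).hexdigest()

# Placeholder certificate content for illustration only.
certificate_info = b"name=Bill;public_key=MFkwEwYH...;issuer=SomeCA"
original_digest = digest(certificate_info)

# An attacker swaps in their own identity; the recomputed digest no longer matches.
tampered = certificate_info.replace(b"Bill", b"Mallory")
print(digest(tampered) == original_digest)   # False -> tampering is detected
```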

3 transport layer

The transport layer provides services between application processes, from the sending port to the receiving port. It performs error checking and flow control, and offers connection-oriented or connectionless service.

  • User Datagram Protocol UDP: No connection established, best effort delivery, no guarantee of reception

  • Transmission Control Protocol TCP: connection-oriented, reliable transmission. The underlying network only delivers data on a best-effort basis: it does not guarantee that data will not be lost, does not guarantee ordering, and may discard packets when congestion occurs, so TCP must add its own mechanisms to achieve reliability.

3.1 Three stages of TCP transmission connection: connection establishment, data transmission, connection release

3.1.1 Three-way handshake to establish connection

Insert image description here

First handshake: the Client sets SYN to 1, randomly generates an initial sequence number seq, and sends it to the Server; it enters the SYN_SENT state.
Second handshake: after the Server receives the Client's SYN=1, it knows the client is requesting a connection; it sets its own SYN to 1 and ACK to 1, produces an acknowledge number = received sequence number + 1, randomly generates its own initial sequence number, and sends all of this to the client; it enters the SYN_RCVD state.
Third handshake: the client checks that the acknowledge number equals its sequence number + 1 and that ACK is 1; if correct, it sets ACK to 1, produces an acknowledge number = the server's sequence number + 1, and sends it to the server; it enters the ESTABLISHED state. After the server checks that ACK is 1 and the acknowledge number equals its sequence number + 1, it also enters the ESTABLISHED state. The three-way handshake is complete and the connection is established.
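
In application code the handshake is performed by the kernel: it happens inside listen()/accept() on the server and connect() on the client. A minimal loopback sketch (port 9000 is an arbitrary choice):

```python
import socket
import threading
import time

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 9000))   # arbitrary local port
        srv.listen()                    # passive open: the socket waits in LISTEN
        conn, addr = srv.accept()       # returns once the three-way handshake completes
        with conn:
            print("server: ESTABLISHED with", addr)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                         # give the listener a moment to start

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9000))    # active open: SYN, SYN+ACK, ACK
    print("client: ESTABLISHED")
time.sleep(0.2)                         # let the server thread print before exiting
```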

Can TCP handshake twice when establishing a connection? Why?

  • First, a stale (invalid) connection request segment might be delivered to the server again.

The first connection request segment sent by the client was not lost, but lingered at some network node for a long time, so that it reached the server only after the connection had already been released. It is in fact a segment that expired long ago. However, when the server receives this invalid connection request segment, it mistakenly takes it for a new connection request from the client, so it sends a confirmation segment to the client agreeing to establish a connection. If the "three-way handshake" were not used, a new connection would be established as soon as the server sent its confirmation. Since the client never asked to establish a connection, it ignores the server's confirmation and sends no data, but the server believes a new transport connection has been established and keeps waiting for the client to send data, wasting many of the server's resources. The "three-way handshake" prevents this: in the situation just described, the client simply does not acknowledge the server's confirmation, and since the server receives no acknowledgment, it knows the client did not ask to establish a connection.

  • Secondly, the two handshakes cannot guarantee that the Client correctly receives the message of the second handshake (the Server cannot confirm whether the Client has received it), nor can it guarantee that the initial sequence number is successfully exchanged between the Client and the Server.

Can a four-way handshake be used? Why?

Yes, it can, but it would reduce transmission efficiency.

A four-way handshake would mean: in the second handshake the server sends only the ACK and acknowledge number, and the server's SYN with its initial sequence number is sent in the third handshake; the original third handshake then becomes the fourth. As an optimization, handshakes two and three of the four-way handshake can be merged, which gives the three-way handshake.


In the third handshake, what happens if the client's ACK is not delivered to the server?

  • Server side:

Since the Server has not received the ACK confirmation, it resends the previous SYN+ACK (by default it retries five times, then automatically closes the connection and enters the CLOSED state). When the Client receives the retransmitted SYN+ACK, it retransmits the ACK to the Server.

  • Client side, two situations :

During the server's timeout retransmission period, if the client sends data to the server, that data segment carries ACK=1, so after receiving the data the server reads the acknowledge number and enters the ESTABLISHED state. After the server has entered the CLOSED state, if the client sends data to the server, the server responds with an RST packet.

What if the connection is established but the client fails?

The server resets a timer every time it receives data from the client, usually set to 2 hours. If it has received nothing from the client for two hours, the server sends a probe segment every 75 seconds. If there is still no response after 10 consecutive probe segments, the server concludes that the client has failed and closes the connection.
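
These keepalive timers can also be tuned per socket. A sketch of the relevant options in Python; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT constants are Linux-specific, hence the hasattr guard, and the values below simply mirror the numbers quoted above:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)            # enable keepalive probing

if hasattr(socket, "TCP_KEEPIDLE"):                                   # Linux-only fine tuning
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)    # idle seconds before probing (2 hours)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)     # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)       # unanswered probes before the peer is declared dead
```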

What is the initial serial number?

Party A of the TCP connection randomly selects a 32-bit sequence number (Sequence Number) as the initial sequence number (ISN) for the data it sends, for example 1000. Using this sequence number as the origin, the data to be transmitted is numbered byte by byte: 1001, 1002, ... During the three-way handshake this initial sequence number is transmitted to the other party B, so that when data is transmitted B can confirm which data numbers are legal; at the same time, A can also confirm every byte it receives from B. If the acknowledge number A receives from B is 2001, it means that the data numbered 1001-2000 has been successfully received by B.


3.1.2 Transmitting data
1. Sliding window
  • Through the TCP header, the client and server each inform the other of the size of their receive buffer, so that each side can set a corresponding send buffer (window) size.
  • The server sets its own send window size and starts sending data; the amount sent each time can vary. As long as there is unsent data inside the window it can keep sending, but data that has been sent cannot be deleted until it is acknowledged.
  • After receiving data, the client does not acknowledge every packet one by one; it sends a cumulative acknowledgment, where the acknowledgment number = the last in-order packet number received + 1. After sending the acknowledgment, the client's receive window slides to the right.
  • The server receives the acknowledgment number, moves its window to the right, deletes the data packets that have been acknowledged, and reads new data into the window for sending.
  • If packets 7-9 are lost, the receiver keeps sending acknowledgment number 7 and adds a selective acknowledgment (SACK) indicating that 10-12 have been received, so the server knows exactly which packets were lost and retransmits only those.
2. Congestion control - control of the entire network path

Congestion: the routers are too busy; the more data packets are injected into the network, the fewer data packets actually get through.

Insert image description here

Congestion control mainly consists of four algorithms: slow start, congestion avoidance, fast retransmission, and fast recovery.

Insert image description here


1. Slow start: exponential increase

Insert image description here

2. Congestion avoidance: linear increase

When network congestion occurs, the slow start threshold is recalculated as half of the congestion window at the moment congestion occurred.


3. Fast retransmission:
After the receiver notices that a packet has been lost, for example it receives 1, 2, 4, it knows that 3 has been lost. It does not wait for its receive window to fill with data; it immediately sends three duplicate acknowledgments in succession so that the server retransmits the lost packet 3. After the fast retransmission, the sender enters the fast recovery phase: it does not start again from scratch, but continues directly from the new slow start threshold.

Fast retransmission requires the receiver to send a duplicate confirmation immediately after receiving an out-of-order message segment (in order to let the sender know early that a message segment has not reached the other party) rather than waiting for the confirmation when sending data. The fast retransmission algorithm stipulates that as long as the sender receives three repeated acknowledgments in a row, it should immediately retransmit the message segments that the other party has not received, without having to wait for the set retransmission timer to expire.

4. Fast recovery :
When the sender receives three duplicate acknowledgments in a row, it halves the slow start threshold and then runs the congestion avoidance algorithm. The reason it does not run slow start: if the network were seriously congested, the sender would not be receiving several duplicate acknowledgments at all, so it assumes the network is not badly congested right now.
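
A toy simulation of how the congestion window (cwnd, in units of MSS) evolves under these four algorithms; the initial threshold and the rounds in which loss is detected are made up purely for illustration:

```python
def simulate_congestion(rounds: int = 20, loss_rounds: frozenset = frozenset({8, 15})) -> None:
    cwnd, ssthresh = 1, 16
    for rtt in range(1, rounds + 1):
        phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
        print(f"RTT {rtt:2d}: cwnd={cwnd:3d}  ssthresh={ssthresh:3d}  ({phase})")
        if rtt in loss_rounds:
            # Loss signalled by three duplicate ACKs -> fast retransmit + fast recovery:
            # halve ssthresh and continue from it instead of restarting from cwnd = 1.
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
            print(f"         3 duplicate ACKs -> ssthresh={ssthresh}, cwnd={cwnd} (fast recovery)")
        elif cwnd < ssthresh:
            cwnd *= 2        # slow start: exponential growth per RTT
        else:
            cwnd += 1        # congestion avoidance: linear growth per RTT

simulate_congestion()
```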

3.1.3 Wave four times to disconnect

Insert image description here

The first wave: the Client sets FIN to 1 and sends a sequence number seq to the Server; it enters the FIN_WAIT_1 state.
The second wave: after the Server receives the FIN, it sends ACK=1 with acknowledge number = received sequence number + 1; it enters the CLOSE_WAIT state. At this point the client has no more data to send, but it can still receive data from the server.
The third wave: the Server sets FIN to 1 and sends a sequence number to the Client; it enters the LAST_ACK state.
The fourth wave: after the Client receives the server's FIN, it enters the TIME_WAIT state; it then sets ACK to 1 and sends acknowledge number = received sequence number + 1 to the server. After the server receives and verifies the acknowledge number, it changes to the CLOSED state and no longer sends data to the client. The client also enters the CLOSED state after waiting 2*MSL (the maximum segment lifetime). The four waves are complete.

Why can't the ACK and FIN sent by the server be combined into three waves (what is the meaning of the CLOSE_WAIT state)?

Because when the server receives the client's request to disconnect, it may still have data that has not been sent. It first replies with an ACK to indicate that it has received the disconnect request, and only after the remaining data has been sent does it send its own FIN to close the server-to-client direction of the transmission.

What will happen if the server's ACK is not delivered to the client when waving for the second time?

If the client does not receive the ACK confirmation, it will resend the FIN request.

What is the meaning of client TIME_WAIT state?

During the fourth wave, the ACK the client sends to the server may be lost. The TIME_WAIT state exists to resend that possibly lost ACK. If the Server does not receive the ACK, it resends the FIN; if the Client receives a FIN again within 2*MSL, it resends the ACK and waits another 2*MSL, which prevents the Server from endlessly resending FIN without ever getting the ACK.

MSL (Maximum Segment Lifetime) refers to the maximum survival time of a segment in the network. 2MSL is the maximum time required for a send and a reply. If the Client does not receive FIN again until 2MSL, then the Client infers that the ACK has been successfully received and ends the TCP connection.

When will a large number of CLOSE_WAIT or TIME_WAIT states appear? Is that normal?

On a TCP server handling high-concurrency short connections, the server normally closes the connection actively after processing each request; in this scenario a large number of sockets will be in the TIME_WAIT state, which is expected. A large number of sockets stuck in CLOSE_WAIT, by contrast, usually means the peer has closed the connection but the local application has never called close() on its socket.
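
A practical note on the server side: to restart a listener while old connections still sit in TIME_WAIT, the SO_REUSEADDR option is usually set before bind(). A minimal sketch (port 8080 is arbitrary):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Without SO_REUSEADDR, re-binding this port right after a restart can fail with
# "Address already in use" while earlier connections are still in TIME_WAIT.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen()
```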

3.1.4 What is TCP packet sticking/unpacking problem?

TCP transmits data in a stream, which is an unbounded stream of data and has no message boundaries.

  • When TCP transmits data, packets are split up according to the actual state of the underlying TCP buffer:
  • 1. A complete piece of business data (for example a complete JSON string) may be split by TCP into several packets for sending (unpacking).
  • 2. Several independent pieces of business data may also be merged by TCP into one packet for sending (sticky packet).

Insert image description here

From the figure we can find that there are many situations in which data packets are received:

  1. There is no sticky packet unpacking, and terminal 2 receives complete data packet A and data packet B.
  2. Terminal 2 reads data packet A and data packet B at one time, which is a sticky packet .
  3. Terminal 2 reads the complete data packet A and part of the data packet B1, and only reads the remaining part of data packet B (data packet B2) for the second time. This is unpacking.
  4. Similar to the third point, data packet A may also be split into two parts (A1, A2), which are read in two separate reads.
  5. Assuming that the data packet is large, multiple unpacking may occur . For example, data packet A is read N times.

TCP sticky/unpacking solution strategy

Since TCP cannot understand the characteristics of the business data of the upper layer, TCP cannot guarantee that the data packets sent will not be stuck or unpacked. This problem can only be solved through the design of the upper layer protocol stack . There are several solutions:

  • Message length is fixed . The size of each data packet sent is fixed, such as 100 bytes. If it is less than 100 bytes, add spaces. The recipient reads the data according to this length when fetching the data.
  • Add a newline character to the end of the message to represent a complete message. When the receiver reads it, it determines whether it is a complete message based on the newline character. This approach is inappropriate if the content of the message also contains newline characters.
  • The message is divided into two parts: message header and message tail . The message header specifies the data length , and the complete message is read according to the message length. For example, the UDP protocol is designed in such a way that two bytes are used to represent the message length, so UDP does not have problems with packet sticking and unpacking.
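
A sketch of the third strategy (a header that carries the data length) in Python; the 4-byte big-endian length field is an arbitrary design choice for this illustration, and the demo shows both a sticky packet and a half packet being handled correctly:

```python
import struct

def pack(message: bytes) -> bytes:
    """Prefix a message with a 4-byte big-endian length header."""
    return struct.pack("!I", len(message)) + message

def unpack_all(buffer: bytearray) -> list:
    """Extract every complete message from buffer, leaving partial data behind."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack("!I", bytes(buffer[:4]))
        if len(buffer) < 4 + length:
            break                                  # half packet: wait for more bytes
        messages.append(bytes(buffer[4:4 + length]))
        del buffer[:4 + length]
    return messages

# Two messages arrive glued together plus a fragment of a third (sticky + half packet).
stream = bytearray(pack(b"hello") + pack(b"world") + pack(b"again")[:5])
print(unpack_all(stream))   # [b'hello', b'world']; the fragment stays buffered
print(bytes(stream))        # leftover bytes of the third message, awaiting the rest
```
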
3.1.5 How does TCP ensure that it is error-free? What control algorithms are there?

TCP achieves this through header checksums, reordering of out-of-order segments, discarding of duplicates, the sequence number and acknowledgment mechanism, timeout retransmission, flow control, and congestion control.

1. Sequence numbers are used to identify whether data has been received and to preserve ordering, and duplicate data packets are discarded. So even if packets are lost or duplicated, TCP can still achieve reliable transmission.

2. Sequence numbers and acknowledgments. When data from the sender reaches the receiving host, the receiver returns a notification that the segment has been received; this is the acknowledgment (ACK), whose value is the received sequence number + 1. If the sender does not receive an acknowledgment within a certain period of time, it assumes the data was lost and retransmits after the timeout.

3. Timeout retransmission : When TCP sends a segment, it starts a timer and waits for the destination to confirm receipt of the segment. If an acknowledgment cannot be received in time, the segment will be resent.

4. Flow control: sliding windows are used to implement flow control. The receiver controls the sender's sending rate by advertising its own window size. The early approach: the sender sends packet 1, the receiver acknowledges packet 1; then packet 2 is sent and acknowledged, and so on, so the throughput is very low. To increase throughput, several packets are sent in a row before waiting for acknowledgments, and the sliding window algorithm is what makes sending several packets in a row work correctly.

3.2 The difference between TCP and UDP

  1. **TCP is connection-oriented, and UDP is connectionless.** What is connectionless? UDP does not need to establish a connection before sending data.
  2. TCP is reliable, UDP is unreliable ; what is unreliable? After receiving the message, the UDP receiver does not need to give any confirmation.
  3. TCP only supports point-to-point communication, and UDP supports one-to-one, one-to-many, many-to-one, and many-to-many ;
  4. TCP is byte stream oriented, UDP is message oriented.
    Byte stream oriented means the data is sent as a stream of bytes: one piece of application data may be split into several segments for sending, whereas UDP sends each message as a single datagram in one go.
  5. TCP has a congestion control mechanism , but UDP does not. Network congestion will not reduce the sending rate of the source host, which is important for certain real-time applications, such as media communications and games;
  6. TCP header overhead (20 bytes) is larger than UDP header overhead (8 bytes)
  7. UDP hosts do not need to maintain complex connection state tables

For situations with high real-time requirements, choose UDP, for example games, media communications, and real-time video streams (live broadcasts), where occasional transmission errors can be tolerated. In most other cases HTTP uses TCP, because the transmitted content must be reliable and must not be lost. HTTP cannot run over UDP: it needs a reliable transport protocol underneath, and UDP is unreliable.
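
A side-by-side sketch of the two socket types; port 9999 is an arbitrary port with nothing listening on it, which makes the contrast visible: the UDP datagram is simply sent with no handshake, while the TCP connect() fails because no peer accepts the connection.

```python
import socket

# UDP: connectionless; each sendto() is one self-contained datagram, fire and forget.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("127.0.0.1", 9999))   # no handshake, no delivery guarantee
udp.close()

# TCP: connection-oriented; connect() runs the three-way handshake before any data moves.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9999))       # fails: nothing is listening on this port
    tcp.sendall(b"ping")
except ConnectionRefusedError:
    print("TCP needs a peer that accepts the connection; UDP did not care")
finally:
    tcp.close()
```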

3.2.1 What protocol is based on UDP transmission?

Commonly used UDP protocol ports are:

(1) RIP: Routing Information Protocol (RIP) is a standard for exchanging routing information between gateways and hosts

(2) DNS: used for domain name resolution services, this service is most commonly used in Windows NT systems. Every computer on the Internet has a corresponding network address. This address is often referred to as an IP address, which is expressed in the form of pure numbers + "." However, this is inconvenient to remember, so the domain name appeared. When accessing the computer, you only need to know the domain name. The conversion between the domain name and the IP address is completed by the DNS server. DNS uses port 53.
(3) SNMP: Simple Network Management Protocol, using port 161, is used to manage network devices. Since there are many network devices, connectionless services have their advantages.
(4) OICQ: the OICQ program both accepts and provides services, so the two chatting parties are peers. OICQ uses a connectionless protocol, that is, the UDP protocol. The OICQ server listens on port 8000 for incoming information, and the client sends information out from port 4000. If these two ports are already in use (many people chatting with several friends at the same time), the next ports in order are used.

4 network layer

Insert image description here

4.1 IPV4 (32-bit) and IPV6 (128-bit)

Insert image description here

Insert image description here

4.2 ABC three-category address

The IP address consists of a network address plus a host address, that is: IP address = network address + host address

Network address: hosts on the same network share the same network number; dividing this part further is called subnetting.

In IP address:

  • A host part of all 0s denotes the network itself, i.e. the address representing all hosts on this network;
  • A host part of all 1s is the broadcast address, i.e. the address used to send a broadcast to this network (take the network address and set the host part to all 1s).

How many hosts can a Class C address hold at most? Which addresses are unavailable, and what are they used for?

Insert image description here

The subnet mask for a Class A IP address is 255.0.0.0

The subnet mask for a Class B IP address is 255.255.0.0

The subnet mask for a Class C IP address is 255.255.255.0

The subnet mask has only one function, which is to divide an IP address into two parts: a network address and a host address.

Through the subnet mask, you can determine whether two IPs are within the same LAN.

The subnet mask can tell how many digits are the network number and how many digits are the host number.

Insert image description here

It is known that an IP address is 192.168.1.1 and the subnet mask is 255.255.255.0. What is its network address?

Insert image description here

It is known that the IP address of a certain host is 192.168.100.200 and the subnet mask is 255.255.255.192. How many IP addresses are available in its network?

Insert image description here

A Class A IP address uses the subnet mask 255.255.240.0. How many bits are used for subnetting, how many subnets can be divided, and how many IP addresses are there in each subnet?
Insert image description here

What is the broadcast address with IP address: 10.135.255.19 and subnet mask 255.255.255.248?

Insert image description here
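
The answers to these subnet questions can be checked with Python's standard ipaddress module; the sketch below reproduces the calculations (strict=False lets us pass a host address instead of the network address):

```python
import ipaddress

# A Class C network has 8 host bits: 2**8 - 2 = 254 usable host addresses
# (host part all 0s = network address, all 1s = broadcast address).
print(ipaddress.ip_network("192.168.1.0/24").num_addresses - 2)       # 254

# 192.168.1.1 with mask 255.255.255.0 -> network address 192.168.1.0
print(ipaddress.ip_network("192.168.1.1/255.255.255.0", strict=False).network_address)

# 192.168.100.200 with mask 255.255.255.192 (/26) -> 64 addresses, 62 usable hosts
net = ipaddress.ip_network("192.168.100.200/255.255.255.192", strict=False)
print(net.num_addresses, net.num_addresses - 2)

# Class A (default /8) subnetted with 255.255.240.0 (/20):
# 20 - 8 = 12 subnet bits -> 4096 subnets; 32 - 20 = 12 host bits -> 4094 usable hosts each
print(2 ** (20 - 8), 2 ** (32 - 20) - 2)

# 10.135.255.19 with mask 255.255.255.248 (/29) -> broadcast address 10.135.255.23
print(ipaddress.ip_network("10.135.255.19/255.255.255.248", strict=False).broadcast_address)
```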

4.3 What is ARP protocol: Address Resolution Protocol (broadcast mechanism)

The ARP protocol completes the mapping between IP addresses and physical addresses . Each host is equipped with an ARP cache, which contains a mapping table from the IP address of each host and router on the local area network to the hardware address.

When the source host wants to send a data packet to the destination host, it will first check whether there is a MAC address of the destination host in its ARP cache. If so, it will directly send the data packet to this MAC address.

If not, it sends an ARP request broadcast packet on the local area network. ARP request: "I am 209.0.0.5, my hardware address is 00-00-C0-15-AD-18, I want to know the hardware address of host 209.0.0.6." Each host that receives the request checks whether its own IP address matches the destination IP address. If it matches, the host first saves the source host's mapping into its own ARP cache, and then sends an ARP response packet to the source host. ARP response: "I am 209.0.0.6, my hardware address is 08-00-2B-00-EE-0A."

After the source host receives the response packet, it first adds the mapping between the IP address and the MAC address of the destination host, and then transmits the data. If the source host never receives a response, it means that the ARP query failed.

If the destination host and the source host are not on the same LAN, ARP is used to find the hardware address of a router on the local network; the packet is then sent to that router, which forwards it to the next network, and the next network handles the rest.
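
If the third-party scapy library is available (an assumption; sending raw ARP frames normally also requires root privileges), the request/response exchange described above can be observed directly. The subnet 192.168.1.0/24 is a placeholder:

```python
from scapy.all import ARP, Ether, srp   # assumes scapy is installed (pip install scapy)

# Broadcast "who-has" ARP requests for every address in the placeholder subnet
# and print the IP -> MAC mappings learned from the replies.
request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24")
answered, _ = srp(request, timeout=2, verbose=False)
for _, reply in answered:
    print(reply.psrc, "->", reply.hwsrc)
```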

Origin blog.csdn.net/qq_42647903/article/details/116420686