[Study Notes] Detailed Explanation of TCP/IP Protocol

1. How many network numbers are there for A, B, and C?

Class A network numbers: there are 2^7 - 2 = 126 in total. Class A network numbers range from 1.0.0.0 to 126.0.0.0; the networks 0.0.0.0 and 127.0.0.0 are special reserved addresses (0 means "this network", 127 is the loopback network) and cannot be assigned.

Class B network numbers: there are 2^14 = 16,384 in total. Class B network numbers range from 128.0.0.0 to 191.255.0.0.

Class C network numbers: there are 2^21 = 2,097,152 in total. Class C network numbers range from 192.0.0.0 to 223.255.255.0.

The calculation process is as follows:

  • For class A network numbers, the first byte ranges from 1 to 126, giving 126 possible values.
  • For class B network numbers, the first byte ranges from 128 to 191 (64 values) and the second byte from 0 to 255 (256 values), so there are 64 * 256 = 16,384 combinations.
  • For class C network numbers, the first byte ranges from 192 to 223 (32 values), and the second and third bytes each range from 0 to 255 (256 values each), so there are 32 * 256 * 256 = 2,097,152 combinations.
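The arithmetic above can be checked with a short script; the `classify` helper below is illustrative, not part of any standard library:

```python
# A minimal sketch: classify an IPv4 address by its first octet and
# report the number of assignable network numbers per class.
def classify(addr):
    """Return the class letter for a dotted-quad IPv4 address."""
    first = int(addr.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "other"  # 0, 127 (loopback), or class D/E

# Network-number counts derived exactly as in the text above.
CLASS_A = 2**7 - 2          # 126 (0 and 127 reserved)
CLASS_B = 64 * 256          # 16,384
CLASS_C = 32 * 256 * 256    # 2,097,152

print(classify("10.0.0.1"), classify("172.16.0.1"), classify("200.1.2.3"))
print(CLASS_A, CLASS_B, CLASS_C)
```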

2. If your system supports the netstat command, use it to determine the interfaces on your system and their MTU.

The netstat command is used to view network connections, routing tables, and network interface information.

netstat -in shows each interface's name, MTU, traffic statistics, and so on.

On Linux you can also use ifconfig -a to view the interfaces.

3. Identify three advantages offered by the connectionless (datagram) IP network layer

  • Network interconnection: the IP layer interconnects heterogeneous networks, allowing hosts on different networks and subnets to communicate.
  • Routing: packets are forwarded from the source host to the destination host by the routing algorithm, which selects the best path from the routing table.
  • Connectionless communication: IP is a connectionless protocol; each packet is processed independently, with no connection-establishment overhead.

4. Why are there two types of ICMP redirect packets—network and host?

  1. Network redirect: a network redirect tells the source host to use a different next-hop router for every destination on a given network. The redirect is generated by a router on the source host's subnet and supplies the IP address of the better next-hop router; the host updates its route for the whole destination network.
  2. Host redirect: a host redirect tells the source host to use a different next-hop router for one specific destination host. It, too, is generated by a router (not by the destination host) and carries the IP address of the better next-hop router for that single destination.

Distinguishing network redirects from host redirects lets the source host adjust its routing at the right granularity: one entry for an entire network when every destination on it shares the better path, or a single host route when only one destination does. This optimizes path selection across the network.

5. There is a checksum field in the OSPF packet format, but there is no such field in the RIP packet. Why?

  • RIP runs over UDP, and UDP already provides an optional checksum covering its header and data (Section 11.3), so RIP does not need its own.
  • OSPF runs directly on IP, and the IP checksum covers only the IP header, so OSPF must carry its own checksum field to protect its contents.
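Both the IP header checksum and OSPF's checksum use the standard Internet checksum: the 16-bit one's-complement sum of 16-bit words (RFC 1071). A minimal sketch, using an illustrative 20-byte IP header as test data:

```python
# The Internet checksum: one's-complement sum of 16-bit words, folded.
import struct

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# An illustrative 20-byte IP header with its checksum field (bytes 10-11) zeroed.
pkt = bytearray(
    b"\x45\x00\x00\x28\x00\x01\x00\x00\x40\x06\x00\x00"
    b"\x0a\x00\x00\x01\x0a\x00\x00\x02"
)
csum = internet_checksum(bytes(pkt))
struct.pack_into("!H", pkt, 10, csum)      # fill in the checksum field
print(internet_checksum(bytes(pkt)))       # a correct checksum verifies to 0
```

The verification property is what receivers rely on: summing the whole header, checksum included, yields zero if and only if the data is intact.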

6. Given an Ethernet (MTU 1500) and a UDP datagram carrying 8192 bytes of data, how many fragments are needed, and what are the offset and length of each fragment?

The maximum length of a UDP datagram is 65,535 bytes (including the UDP header and data). The Ethernet MTU, however, is usually 1500 bytes, so an IP datagram carrying a large UDP datagram must be fragmented.

The data part of every fragment except the last must be a multiple of 8 bytes. With a 20-byte IP header, each fragment can carry at most 1500 - 20 = 1480 bytes of data. The IP payload to be fragmented is the 8192 bytes of data plus the 8-byte UDP header, 8200 bytes in total.

  • Number of fragments required = ceil(8200 / 1480) = 6
  • The first five fragments each carry 1480 bytes of data; the last carries 8200 - 5 * 1480 = 800 bytes.
  • The fragment offset field counts 8-byte units, so the offsets are 0, 185, 370, 555, 740, and 925 (byte offsets 0, 1480, 2960, 4440, 5920, and 7400). Only the first fragment contains the UDP header.
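The fragment table above can be reproduced with a small helper (the function name and defaults are illustrative):

```python
# Split an IP payload across fragments for a given MTU and IP header size.
def fragments(payload_len, mtu=1500, ip_hdr=20):
    per_frag = (mtu - ip_hdr) // 8 * 8       # data per fragment, multiple of 8
    frags = []
    off = 0
    while off < payload_len:
        length = min(per_frag, payload_len - off)
        frags.append((off // 8, length))     # (offset field value, data length)
        off += length
    return frags

# 8192 bytes of UDP data + 8-byte UDP header = 8200 bytes of IP payload
for offset, length in fragments(8192 + 8):
    print(offset, length)
```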

7. TCP provides a byte-stream service, and neither the sender nor the receiver maintains record boundaries. How can applications provide their own record markers?

In this case, the application needs to determine and manage the boundaries of the records itself, and provide each record with its own identity. This can be achieved by defining a specific record format in the application layer protocol or by using special tags.

Common ways to provide the application's own record identity:

  • Fixed-length records: both sides agree on a fixed length for every record.
  • Delimiter marks: both sides agree on a special delimiter between records; when the receiver sees the delimiter, it knows a complete record has arrived.
  • Length prefix: the sender prepends to each record a field giving the record's length.
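The length-prefix approach can be sketched in a few lines; `frame`/`unframe` are illustrative names, and a 4-byte big-endian length field is assumed:

```python
# Length-prefix framing: each record is preceded by a 4-byte length field,
# so the receiver can recover record boundaries from the byte stream.
import struct

def frame(record: bytes) -> bytes:
    return struct.pack("!I", len(record)) + record

def unframe(stream: bytes):
    records, pos = [], 0
    while pos + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, pos)
        records.append(stream[pos + 4 : pos + 4 + length])
        pos += 4 + length
    return records

# Two records sent back-to-back arrive as one undifferentiated byte stream...
stream = frame(b"hello") + frame(b"world!")
# ...but the length prefixes let the receiver split them apart again.
print(unframe(stream))  # [b'hello', b'world!']
```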

8. TCP's three-way handshake and four-way teardown


Why wait for 2MSL (twice the maximum segment lifetime) before closing the connection?

After the client sends the final ACK, it must wait long enough to be sure the server received it: if the ACK is lost, the server retransmits its FIN, and a segment plus its reply can take at most 2MSL to traverse the network. Waiting 2MSL also guarantees that any old segments belonging to this connection have expired before the same address/port pair can be reused, protecting the reliability of later connections.

9. What is the difference between a TCP half-open connection and a half-close connection?

  1. Half-Open Connection: a half-open connection is a connection in which one side has sent a SYN and is still waiting for the other side's acknowledgment, or in which one side has closed or crashed without the other side knowing. It usually indicates that the connection is still being established or that something has gone wrong, and can arise from network failures, delays, or an unresponsive host.
  2. Half-Closed Connection: a half-closed connection arises during TCP's four-way teardown. When one side sends a FIN to indicate it has finished sending, it will send no more data but can still receive; the other side can continue sending data. This lets one side close its sending direction first while still receiving the other side's data: for example, a client signals end-of-input to a server but continues to read the server's response.

In summary, a half-open connection is a transient state during connection establishment (or a failure state), while a half-closed connection is a deliberate state during connection teardown in which one direction of the stream is closed and the other remains open. The two occur at different stages and serve different purposes.
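The half-closed state can be demonstrated with the sockets API: shutdown(SHUT_WR) sends a FIN but leaves the receive side open. A minimal loopback sketch (the structure is illustrative):

```python
# Half-close on a loopback TCP pair: after shutdown(SHUT_WR) the client
# can no longer send, but it can still receive the server's reply.
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # let the OS pick a free port
srv.listen(1)

def server():
    conn, _ = srv.accept()
    data = conn.recv(1024)                 # the client's request
    conn.sendall(b"reply to " + data)      # still deliverable after the client's FIN
    conn.close()

t = threading.Thread(target=server)
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())
cli.sendall(b"request")
cli.shutdown(socket.SHUT_WR)    # half-close: FIN sent, receive side stays open
reply = cli.recv(1024)          # receiving still works
print(reply)
t.join()
cli.close()
srv.close()
```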

TCP

1. What is the difference between TCP and UDP? What different application scenarios are they suited to?

2. What is the process of TCP's three-way handshake? Why are three exchanges required to establish a connection?

3. What is the process of TCP's four-way teardown? Why are four exchanges required to close the connection?

4. What are TCP flow control and congestion control? How do they differ, and what is the purpose of each?

  • TCP's flow control limits the rate at which the sender transmits so that the receiver is not overwhelmed or its buffer overflowed. Using the sliding window mechanism, the receiver tells the sender how much data it can accept.
  • TCP's congestion control limits the load injected into the network so that network performance does not collapse. TCP adjusts the sender's rate with a congestion window and algorithms such as slow start, congestion avoidance, fast retransmit, and fast recovery.

5. What is TCP's window size? How can the window size be adjusted to optimize transfer performance?

  • TCP's window size is the amount of unacknowledged data a sender may have outstanding before it must wait for an acknowledgment. It determines how much data the sender can keep in flight.
  • The effective window is the minimum of the congestion window and the receive window. The congestion window reflects the degree of network congestion, while the receive window reflects the buffer space available at the receiver.
  • By running its congestion control algorithms and adjusting the window dynamically, TCP optimizes transmission performance, improving bandwidth utilization and reducing congestion.

6. How is the reliability of TCP realized? What mechanisms does it use to ensure reliable transmission of data?

  • TCP uses several mechanisms to achieve reliable transmission. One is sequence numbers and acknowledgments: the sender numbers the bytes it sends, and the receiver confirms what it has received with acknowledgment segments.
  • TCP also uses timeout retransmission to handle lost data: if the sender receives no acknowledgment within a certain time, it retransmits the data.
  • Other mechanisms, such as the sliding window protocol, selective acknowledgments, and cumulative acknowledgments, also help ensure reliable delivery.

7. What is TCP's timeout retransmission mechanism? How does TCP handle packet loss or delay?

  • TCP's timeout retransmission mechanism handles packet loss, delay, and reordering in the network. When the sender transmits data, it starts a timer; if no acknowledgment arrives before the timer expires, the sender assumes the data was lost and retransmits it. The retransmission timeout (RTO) is adjusted dynamically from measured round-trip times to track network delay and congestion.
  • When a segment is lost, the receiver keeps acknowledging the last in-order data it received, so its ACKs carry the sequence number it expects next; duplicate segments that arrive are discarded.
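The dynamic RTO adjustment mentioned above follows the classic Jacobson/Karels estimator standardized in RFC 6298: a smoothed RTT and its variance are updated from each new sample. A sketch, with illustrative names:

```python
# RTO estimation per RFC 6298: SRTT and RTTVAR track the mean and
# variability of measured round-trip times (all values in seconds).
def rto_estimator(samples, alpha=1/8, beta=1/4):
    srtt = rttvar = None
    for rtt in samples:
        if srtt is None:                           # first measurement
            srtt, rttvar = rtt, rtt / 2
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
        yield max(1.0, srtt + 4 * rttvar)          # RFC 6298 floor: 1 second

for rto in rto_estimator([0.5, 0.6, 1.2, 0.5]):
    print(round(rto, 3))
```

A spike in measured RTT (the 1.2 s sample) raises both the smoothed RTT and the variance term, so the timeout backs off before the network is declared lossy.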

8. What are the congestion control algorithms of TCP? Introduce their principles and application scenarios respectively

  • TCP's congestion control algorithms are mainly slow start, congestion avoidance, fast retransmit, and fast recovery.
  • Slow Start is used in the initial stage of a connection: the sender grows the congestion window exponentially (roughly doubling per RTT) to probe for an appropriate load on the network.
  • Congestion Avoidance takes over once the window reaches the slow-start threshold: the sender then grows the window linearly to avoid driving the network into congestion.
  • Fast Retransmit detects lost segments quickly: when the receiver gets out-of-order segments, it immediately re-sends the acknowledgment for the last in-order segment, and three such duplicate ACKs trigger an immediate retransmission by the sender.
  • Fast Recovery follows fast retransmit: rather than falling back to slow start, the sender halves its window and continues in congestion avoidance, recovering from mild congestion more quickly.
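The interplay of slow start and congestion avoidance can be shown as a toy model of the congestion window, measured in segments (the ssthresh value is illustrative):

```python
# Toy congestion-window evolution: exponential growth (slow start) up to
# ssthresh, then linear growth (congestion avoidance), one value per RTT.
def cwnd_evolution(rounds, ssthresh=16):
    cwnd = 1
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2          # slow start: double each RTT
        else:
            cwnd += 1          # congestion avoidance: +1 segment per RTT
    return history

print(cwnd_evolution(8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The printed sequence shows the characteristic shape: a steep exponential ramp followed by the gentle linear probe.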

9. How does TCP's retransmission mechanism work? How does it detect and handle lost segments?

  1. The sender sends segments: the sender divides the data into segments and assigns each a sequence number, then starts a timer and waits for the receiver's acknowledgment.
  2. The receiver receives segments: after receiving a segment, the receiver buffers it and sends an acknowledgment whose acknowledgment number indicates the next byte it expects, i.e., one past the last in-order byte successfully received.
  3. The sender processes acknowledgments: on receiving an acknowledgment, the sender compares the acknowledgment number with the segments it has sent to confirm which ones arrived.
  4. Timer expiry: if the sender receives no acknowledgment before the timer expires, it assumes the segment was lost, retransmits it, and restarts the timer.
  5. Retransmission limit: if the sender fails to get an acknowledgment many times in a row, it assumes a serious network problem and reacts accordingly, for example by shrinking the send window or invoking congestion control.
  6. The receiver detects missing segments: the receiver's acknowledgment number always indicates the next in-order byte it expects. If a gap appears, it repeatedly re-sends the acknowledgment for the last in-order data, prompting the sender to retransmit the missing segment.

10. What is the sliding window protocol of TCP? How does it achieve reliable data transmission and flow control?

The sliding window protocol works as follows:

  1. Send window: the sender maintains a send window describing the range of data that may be sent but has not yet been acknowledged. It is defined by two parameters: its starting position and its size.
  2. Receive window: the receiver maintains a receive window describing the range of data it is prepared to accept, likewise defined by a starting position and a size.
  3. Sender's operation: the sender determines how much it may transmit from the acknowledgments it has received and the advertised window size. As data is acknowledged, the left edge of the send window slides forward, allowing new data to be sent.
  4. Receiver's operation: the receiver acknowledges segments according to the sequence numbers it expects. As in-order data arrives, the left edge of the receive window slides forward, allowing new data to be accepted.

Through the sliding window protocol, TCP realizes reliable data transmission and flow control functions:

  1. Reliable data transmission: After the sender sends the data segment, it waits for the receiver's confirmation. If the sender does not receive an acknowledgment within a certain amount of time, it retransmits the missing data segment. The receiver confirms the received data segment according to the sequence number, and informs the sender through the confirmation message.
  2. Flow control: The sender controls the amount of data sent according to the receiver's window size. The sender will not send more data than the receiver window size to avoid receiver buffer overflow. The receiver can adjust the window size according to processing power and buffer size.
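The window-sliding behaviour described above can be illustrated with a toy sender that never has more than `window` unacknowledged segments in flight (all names are illustrative, and ACKs are simulated rather than received from a peer):

```python
# Toy sliding-window sender: at most `window` segments outstanding;
# the window's left edge (base) slides forward as ACKs arrive.
def sliding_window_send(num_segments, window):
    base, next_seq = 0, 0
    events = []
    while base < num_segments:
        # send while the window permits
        while next_seq < num_segments and next_seq < base + window:
            events.append(("send", next_seq))
            next_seq += 1
        # simulate a cumulative ACK for the oldest outstanding segment
        events.append(("ack", base))
        base += 1                  # left edge slides forward
    return events

for ev in sliding_window_send(4, 2):
    print(ev)
```

With a window of 2, the trace shows the sender stalling after two sends until an ACK opens the window again: exactly the flow-control behaviour the receiver's advertised window enforces.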

11. What are TCP's sticky-packet and split-packet problems? How are they handled?

Sticky packets: several writes by the sender are delivered to the receiver coalesced into a single read. Because TCP is a byte stream with no message boundaries, the receiver cannot tell where one application message ends and the next begins.

Split packets: a single write by the sender is delivered to the receiver across several reads, so one read does not contain the complete message the sender wrote.

Solutions:

  • Fixed message length
  • Special delimiter characters
  • Message header carrying a length field
  • Application-layer protocol design
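The delimiter approach can be sketched as a small receive-side framer; the newline delimiter and class name are illustrative:

```python
# Delimiter-based framing for the sticky/split-packet problem: records
# are separated by '\n', and partial data is buffered until the
# delimiter arrives, whatever chunk boundaries TCP delivers.
class LineFramer:
    def __init__(self):
        self.buf = b""

    def feed(self, chunk: bytes):
        """Accept an arbitrary TCP chunk; return the complete records in it."""
        self.buf += chunk
        *records, self.buf = self.buf.split(b"\n")
        return records

f = LineFramer()
print(f.feed(b"hel"))         # []          -- partial record, buffered
print(f.feed(b"lo\nwor"))     # [b'hello']  -- split chunks reassembled
print(f.feed(b"ld\n"))        # [b'world']
```

Note how both problems are handled at once: a split record is buffered until complete, and a chunk containing more than one record yields each record separately.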

12. How to view the TCP connection status and related information through the netstat command?

  1. netstat -a: Display all connections and listening ports, including TCP and UDP.
  2. netstat -t: Display all TCP connections.
  3. netstat -u: Display all UDP connections.
  4. netstat -n: Display addresses and port numbers in numeric form, without resolving hostnames or service names.
  5. netstat -p: Displays the program name or process ID associated with the connection.
  6. netstat -s: Display the statistics of TCP and UDP, such as the number of data packets received and sent, errors, etc.
  7. netstat -r: Display routing table information.
  8. netstat -l: Display the port that is listening.

Combined, netstat -atn displays the addresses and port numbers of all TCP connections in numeric form; adding -p also shows the owning process ID and program name.


Origin blog.csdn.net/qq_36624086/article/details/130791228