TCP Sliding Window Protocol and Flow Control

        Whenever TCP's sliding window protocol and flow control come up, I think back to a scene from an interview. I had just graduated and had crammed a little TCP in a hurry: I knew that TCP is a connection-oriented protocol and that it uses acknowledgments plus timeout retransmission for every segment to guarantee reliable end-to-end delivery. Having also memorized the layout of the TCP header before the interview, I felt confident that I had mastered the essence of TCP and was an out-and-out TCP expert.

        As it turned out, the interviewer asked me only two questions: 1. Describe the process of TCP's sliding window protocol and flow control. 2. Describe the principle of TCP slow start in detail. Those two questions put me right back in my place on the spot, and they have stayed fresh in my memory ever since.

        Recently I kept running into network connection problems in our new laboratory, so I spent some time preparing a series of TCP/IP training sessions for colleagues in the department. Afterwards, some colleagues told me that the TCP sliding window protocol, flow control, and TCP slow start were not easy to follow, and asked whether I could write up a simple summary for future reference. That is how this article came about.

       This article assumes that the reader has basic TCP knowledge and already understands: the IP protocol, that TCP is a reliable connection-oriented transport protocol, that TCP retransmits lost data, network round-trip time (RTT), delay jitter, packet loss rate, and network bandwidth.

       Readers who already understand the sliding window protocol can skip ahead to the later sections.

1. What is TCP flow control

        Everyone already knows that TCP is a reliable, connection-oriented transport protocol: it ensures that every packet is delivered correctly from the sender to the receiver, relying mainly on retransmission and the acknowledgment mechanism. But retransmission and acknowledgments alone go wrong in some (perhaps most) scenarios. We also need flow control to match the speed of the data flow between sender and receiver; only then does TCP run stably. Controlling the rate at which the sender pushes data to the receiver is flow control.

2. Under what scenarios should flow control be performed?

        Considering the different capabilities of the sender and receiver, there are the following three scenarios:

1. The sender sends slowly and the receiver receives quickly: no flow control is required

2. The sender sends exactly as fast as the receiver can receive: no flow control is required

3. The sender sends quickly and the receiver receives slowly: flow control is required

So only the third scenario needs flow control. Let's think about why:

        If the sender sends data faster than the receiver can take it in, then once the receiver's buffer is full, packets will be dropped.

        Once packets are dropped, the receiver does not send acknowledgments for them, which causes the sender to retransmit. The more the sender retransmits, the less time the receiver has to drain its buffer, so even more packets are dropped, which in turn triggers even more retransmissions...

        This vicious circle steadily worsens the state of the network until the TCP connection is torn down or the network collapses. So, to avoid packet loss and this continuous deterioration, we need to throttle the sender in such a situation so that the receiver can keep up. That is flow control.

       The fundamental purpose of flow control is to prevent packet loss and avoid unnecessary retransmissions; it is an important part of TCP's reliability.

3. How to achieve flow control

       To achieve flow control, TCP introduces the sliding window protocol (a continuous ARQ protocol). The sliding window protocol not only ensures that packets are received without error and in order, it also implements flow control. Summed up in one sentence: each ACK returned by the receiver carries the size of the receiver's own receive window, and the sender uses that window size to regulate how fast it sends its own data.
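       A minimal sketch of that one-sentence summary is shown below. The names (Sender, on_ack, maybe_send) are invented for illustration; real TCP tracks bytes on a socket, but the core bookkeeping looks roughly like this:

```python
# Minimal sketch of receiver-advertised flow control (illustrative, not real TCP).
# Sender, on_ack and maybe_send are invented names for this example.

class Sender:
    def __init__(self):
        self.next_seq = 0       # sequence number of the next new byte to send
        self.unacked = 0        # sequence number of the oldest unacknowledged byte
        self.recv_window = 0    # last window size advertised by the receiver

    def on_ack(self, ack_seq, advertised_window):
        """Process an ACK: it acknowledges all bytes before ack_seq and
        carries the receiver's current free buffer space."""
        self.unacked = max(self.unacked, ack_seq)
        self.recv_window = advertised_window

    def maybe_send(self, data):
        """Send only as much as the advertised window allows."""
        in_flight = self.next_seq - self.unacked
        usable = self.recv_window - in_flight   # bytes we may still put on the wire
        to_send = data[:max(usable, 0)]
        self.next_seq += len(to_send)
        return to_send                          # pretend this goes onto the network

sender = Sender()
sender.on_ack(ack_seq=0, advertised_window=4)
print(sender.maybe_send(b"ABCDEFGH"))           # b'ABCD' -- the rest waits for a new ACK
```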

4. The specific principle of flow control

       First, let's look at how reliability is guaranteed under TCP's acknowledgment mechanism, that is, how a packet is guaranteed to travel correctly from the sender to the receiver.

A. Introduce the stop-and-wait protocol and a retransmission mechanism to ensure reliable transmission

The data sender needs to wait for an acknowledgment of the sent data before sending the next data block, as shown in the following figure:

        The advantage of this protocol is that every packet gets timely feedback. It is generally used only by protocols that require an immediate response and have strict ordering requirements on the packets they send.
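        A toy sketch of the stop-and-wait pattern just described; send_packet and wait_for_ack are assumed helper functions rather than a real socket API:

```python
# Toy stop-and-wait sender (illustrative only; send_packet and wait_for_ack
# are assumed helpers supplied by the caller, not a real socket API).

TIMEOUT = 1.0  # seconds to wait for an acknowledgment before retransmitting

def stop_and_wait_send(packets, send_packet, wait_for_ack):
    for seq, packet in enumerate(packets):
        while True:
            send_packet(seq, packet)                # transmit one packet
            if wait_for_ack(seq, timeout=TIMEOUT):  # block until ACK or timeout
                break                               # acknowledged: move on to the next packet
            # timeout: fall through and retransmit the same packet
```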

        The biggest problem with this approach is that network utilization is extremely low. Taking the figure above as an example, suppose that given host A's processing speed and the network bandwidth it takes t milliseconds to send one packet. Then in the interval t4-t1 it would be possible to send up to (t4-t1)/t packets, but because of stop-and-wait only one packet is actually sent in that interval, and the remaining t4-t1-t is wasted waiting. The utilization of the network bandwidth is therefore only t/(t4-t1)*100%; most of the time the link sits idle waiting for the acknowledgment.

       Next, let's look at how TCP, as a data transmission protocol, improves on stop-and-wait to raise network utilization and throughput.

B. Improve network utilization by allowing the sender to send multiple packets in a row before the acknowledgment of the first packet arrives

        The idea is actually very simple. Since the interval t4-t1 is mostly idle, we allow the sender to keep sending packets before the "block acknowledgment" arrives, so that the otherwise wasted waiting time and network bandwidth are put to use. As shown in the figure, 100 packets are sent within the t4-t1 interval. Still assuming it takes t milliseconds to send one packet, network utilization rises to 100t/(t4-t1)*100%, 100 times the original.
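        To make the two utilization formulas concrete, here is a quick back-of-the-envelope calculation; the values of t and t4-t1 are made up purely for illustration:

```python
# Back-of-the-envelope comparison of link utilization (made-up numbers).
t = 1.0             # ms needed to transmit one packet
interval = 200.0    # ms, the t4 - t1 interval from the figures

stop_and_wait = t / interval                  # one packet per round trip
pipelined_100 = min(100 * t / interval, 1.0)  # 100 packets per round trip, capped at 100%

print(f"stop-and-wait utilization: {stop_and_wait:.1%}")  # 0.5%
print(f"pipelined (100 packets):   {pipelined_100:.1%}")  # 50.0%
```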

        At the same time, if an acknowledgment (ACK) were sent back for every single packet, the network would carry a huge number of ACK packets, and they would consume a large amount of bandwidth. So a further improvement is made to reduce the number of ACK packets: put a number in the ACK indicating that all packets with smaller numbers have been received correctly. In the figure, "ACK (the next packet is 1001)" means that all packets before 1001 have arrived correctly, which eliminates 999 ACK messages.
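        A small sketch of this cumulative numbering idea; packet numbers are used here for clarity, whereas real TCP acknowledges byte sequence numbers:

```python
# Cumulative acknowledgment sketch: one ACK value covers every packet below it.
# Packet numbers are illustrative; real TCP acknowledges byte sequence numbers.

received = set()

def on_packet(number, next_expected):
    """Record a packet and return the new cumulative ACK value:
    'the next packet I expect', i.e. everything below it has arrived."""
    received.add(number)
    while next_expected in received:
        next_expected += 1
    return next_expected

ack = 1
for pkt in range(1, 1001):   # 1000 packets arrive in order
    ack = on_packet(pkt, ack)
print(ack)                   # 1001 -- a single final ACK covers all of them
```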

        Now consider the relationship between t4-t1 and t. In reality, because hosts send at different speeds and network delays differ, it may well happen that 100t > t4-t1, in which case the sender obviously cannot send 100 packets back to back. So the question becomes: how do we decide how many packets the sender may send continuously before the block acknowledgment (ACK) arrives? This is where the sliding window is introduced at the sending end: the size of the current window determines how many packets the sender may have outstanding at most before receiving a new ACK.

C. Introduce a sliding window in an ideal network to control the sender's sending rate

        To simplify the discussion, let's first assume an ideal network: no jitter, no packet loss, and no reordering (data packets always arrive at the receiver in the order they were sent). We already know that in this ideal network, a sliding window protocol is needed to control the sending rate only when the sender sends faster than the receiver can receive.

        In this case, the receiver controls the size of the sender's send window through its ACKs, and thereby controls the sending rate. Let's first look at a few definitions related to the sliding window.

       We assign every packet a sequence number, shown in the figure below as 1, 2, 3, 4... (in TCP the sequence number actually counts bytes, so a number refers to a byte position rather than a packet). The box in the figure (around packets 1-6) is the maximum send window (the provided window) that the receiver has advertised to the sender: the sender may have at most 6 packets (bytes, in TCP) outstanding, that is, sent but not yet acknowledged. At this moment the usable window is the same size as the maximum send window, and everything outside the window (7 and beyond) is defined as not yet sendable.

        Suppose the sender now sends the 6 packets (the corresponding bytes, in TCP). Packets 1-3 have been sent and acknowledged, while packets 4-6 have been sent but not yet acknowledged. The provided window size stays at 6, but the window slides to cover positions 4-9. Packets 4-6 are no longer usable, so the usable window shrinks to 7-9: these positions may still be sent, and if the sender has data queued it can go out in 7-9. Anything at position 10 or beyond falls outside the window and cannot be sent until the provided window moves again.

        In this way, the receiving end controls how the window slides through the ACKs it sends back, and by controlling how many more packets the sending end may transmit it controls the sender's rate. This is how the sliding window protocol achieves rate control in the ideal network defined above.

        The figure above can be divided into 4 regions from left to right: sent and acknowledged, sent but not acknowledged, usable (may be sent), and not allowed to be sent.

The "sent but not acknowledged" and "usable" regions together make up the total size of the provided send window.
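The sketch below reproduces the walkthrough above with the same numbers (a provided window of 6, packets 1-3 acknowledged, 4-6 in flight); the class and field names are invented purely for illustration:

```python
# Illustration of the four window regions from the text (invented names,
# packet numbers instead of TCP byte sequence numbers).

class SendWindow:
    def __init__(self, window_size):
        self.window_size = window_size  # "provided window" advertised by the receiver
        self.acked_up_to = 0            # highest packet number sent and acknowledged
        self.next_to_send = 1           # next packet number that may be sent

    def regions(self):
        left = self.acked_up_to + 1                   # left edge of the provided window
        right = self.acked_up_to + self.window_size   # right edge of the provided window
        return {
            "sent and acknowledged":  (1, self.acked_up_to),
            "sent, not acknowledged": (left, self.next_to_send - 1),
            "usable (may send now)":  (self.next_to_send, right),
            "not allowed to send":    (right + 1, "..."),
        }

w = SendWindow(window_size=6)
w.acked_up_to = 3    # packets 1-3 sent and acknowledged
w.next_to_send = 7   # packets 4-6 sent but not yet acknowledged
for name, span in w.regions().items():
    print(f"{name}: {span}")
# usable region is 7-9; 10 and above must wait until the provided window slides
```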

D. Improving the sliding window protocol for real networks

        In a real network there will always be packet loss and reordering. A lost packet never arrives and must be retransmitted by the sender; reordering means a packet sent later may arrive first, and the receiver has to hold the later packet while waiting for the earlier one to arrive.

        TCP is a connection-oriented protocol that guarantees ordering, so TCP must make sure the receiver reassembles packets correctly in the order they were sent before handing them to the upper-layer application.

        The sliding of the window is therefore affected by loss and reordering, so three pointers are added (the start of the sent-but-unacknowledged data, the next sequence number that may be sent, and the start of the data the receiver is still waiting for), and the "sent but not acknowledged" region from section C is split into two parts: packets beyond the starting point that remain unacknowledged, and packets that have already been acknowledged.

        As shown in the figure above, the sender can tell that 4 and 5 have not been acknowledged while 6 and 7 have, so it knows it should retransmit 4 and 5 rather than 6 and 7.
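        In this general design the sender effectively keeps a per-packet acknowledged flag, something like the sketch below (illustrative only; as the next paragraphs explain, plain TCP ACKs do not carry this per-packet information):

```python
# Sender-side bookkeeping for the general sliding window protocol described
# above: each in-flight packet is tracked individually (illustrative names).

in_flight = {4: False, 5: False, 6: True, 7: True}  # packet number -> acknowledged?

# The packets that must be retransmitted are exactly the unacknowledged ones.
to_retransmit = [pkt for pkt, acked in in_flight.items() if not acked]
print(to_retransmit)   # [4, 5] -- 6 and 7 are already acknowledged and are not resent
```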

        The above is the design of the conventional sliding window protocol.

        TCP, however, has to account for retransmission, and TCP numbers every byte rather than every packet; an ACK always carries an acknowledgment number meaning "every byte before this number has been received correctly", instead of acknowledging each packet individually. So the TCP receiver always replies with "the next byte sequence number I expect to receive". As shown in the figure above, when packets 6 and 7 arrive, the receiver still replies with the sequence number of the first byte of packet 4, indicating that packet 4 has not arrived and that it expects data starting from that byte. The sender therefore knows to retransmit 4, and the same goes for 5. Once 4 and 5 have been retransmitted and received correctly, the receiver, having already received 6 and 7, replies directly with the sequence number just past packet 7 (7+1=8 in packet terms), so the sender does not retransmit 6 and 7.
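        A receiver-side sketch of this behaviour, using packet numbers in place of byte sequence numbers (the names are illustrative):

```python
# Receiver-side sketch of TCP-style cumulative ACKs with out-of-order arrival
# (packet numbers stand in for byte sequence numbers; names are illustrative).

buffer = set()        # out-of-order packets held until the gap is filled
next_expected = 4     # packets 1-3 already received in order

def receive(pkt):
    """Store the packet and return the cumulative ACK to send back."""
    global next_expected
    buffer.add(pkt)
    while next_expected in buffer:    # deliver any now-contiguous packets
        buffer.discard(next_expected)
        next_expected += 1
    return next_expected              # "next packet I expect"

print(receive(6))   # 4 -- packet 4 still missing, so the ACK keeps pointing at 4
print(receive(7))   # 4 -- same duplicate ACK; the sender learns 4 was lost
print(receive(4))   # 5 -- 4 arrived, but 5 is still missing
print(receive(5))   # 8 -- 5 arrived, and the buffered 6 and 7 are covered in one jump
```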

        Note that what we usually call the TCP sliding window actually consists of a send window swnd and a receive window rwnd, and because TCP is full-duplex, the hosts on both sides are simultaneously senders and receivers, so each host maintains its own sliding windows (a send window swnd and a receive window rwnd). In a typical LAN, however, the maximum bandwidth is fixed, there is almost no jitter, almost no loss caused by the network itself, and almost no reordering.

        In such an ideal network, and ignoring algorithms such as congestion control, the send window swnd and receive window rwnd degenerate into exactly the same window. This article is based on that ideal network, with the further simplification that one side is always the sender and the other side is always the receiver.

        On top of the sliding window protocol described above, the TCP protocol uses four concrete algorithms, slow start, congestion avoidance, fast retransmission, and fast recovery, to push TCP throughput as high as possible. These are covered separately under TCP congestion control: the slow start algorithm, the congestion avoidance algorithm, fast retransmission and timeout retransmission, and the fast recovery algorithm.
