TCP reliable transmission & flow control & congestion control

TCP reliable transmission

How TCP reliable transport works

1. Error-free and error cases

1>: No error: A sends packet M1 and then pauses, waiting for B's acknowledgment. B receives M1 and sends an acknowledgment to A; once A receives the acknowledgment for M1, it sends the next packet. (If A keeps receiving duplicate acknowledgments for M1, this indicates that M2 was lost.)
2>: An error occurs:
        If A does not receive an acknowledgment within a certain period of time, it assumes that the packet it just sent was lost and retransmits it. This is called timeout retransmission and is implemented with a retransmission timer.
        In addition:

                (1) A must keep a copy of every packet it sends until that packet is acknowledged;

                (2) Both the data packets and the acknowledgment packets must be numbered;

                (3) The retransmission timer should be set somewhat longer than the average round-trip time of a packet.

3>: If B receives a duplicate packet M1, it does not deliver it to the upper layer again, but it still sends an acknowledgment to A (this exchange is sketched below).
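A minimal sketch of the stop-and-wait behaviour described above. The channel helpers (channel_send, channel_recv, recv_packet, send_ack, deliver) and the 2-second timer value are assumptions for illustration, not part of any real stack.

```python
import time

TIMEOUT = 2.0  # assumed retransmission-timer value in seconds

def send_reliably(packet_no, data, channel_send, channel_recv):
    """Sender A: send one packet and block until its acknowledgment arrives."""
    while True:
        channel_send(packet_no, data)        # a copy of the packet is kept until it is acknowledged
        deadline = time.time() + TIMEOUT
        while time.time() < deadline:
            ack = channel_recv()             # hypothetical non-blocking helper; returns None if nothing arrived
            if ack == packet_no:             # acknowledgment for this packet: move on to the next one
                return
        # the timer expired without an ACK: assume the packet was lost and retransmit it

def receive_loop(recv_packet, send_ack, deliver):
    """Receiver B: deliver new packets, discard duplicates, but always acknowledge."""
    expected = 0
    while True:
        no, data = recv_packet()
        if no == expected:                   # new in-order packet: hand it to the upper layer
            deliver(data)
            expected += 1
        send_ack(no)                         # a duplicate is not delivered again, but it is still acknowledged
```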

2. Timeout retransmission

        The principle is to start a timer after sending a piece of data; if no ACK for that data arrives within a certain period of time, the data is resent until transmission succeeds. In the classic implementation, the timeout after the first transmission is 1.5 seconds, and the timeout is doubled for each subsequent retransmission up to a cap of 64 seconds (an exponential backoff algorithm). After roughly 12 retransmissions, about 9 minutes in total, the sender gives up. This total time is fixed in most TCP implementations, but Solaris 2.2 lets administrators change it through the tcp_ip_abort_interval variable, whose default value is two minutes rather than the more common nine minutes.
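A rough sketch of the exponential backoff schedule just described, using the classic 1.5 s initial timeout, doubling up to 64 s, and a roughly 9-minute give-up limit; these constants come from the text above and vary between implementations.

```python
def backoff_schedule(first_rto=1.5, max_rto=64.0, give_up_after=9 * 60):
    """Yield successive retransmission timeouts until the total waiting time reaches the give-up limit."""
    rto, elapsed = first_rto, 0.0
    while elapsed < give_up_after:
        yield rto
        elapsed += rto
        rto = min(rto * 2, max_rto)   # double the timeout each time, capped at 64 s

# About a dozen retransmissions over roughly nine minutes:
print(list(backoff_schedule()))
# [1.5, 3.0, 6.0, 12.0, 24.0, 48.0, 64.0, 64.0, 64.0, 64.0, 64.0, 64.0, 64.0]
```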

 

3. Stop-and-wait protocol

   Its advantage is simplicity, but channel utilization is too low. The remedy is the continuous ARQ protocol: the sender maintains a send window and transmits several packets in a row each time, while the receiver uses cumulative acknowledgments, acknowledging the last packet that arrived in sequence.

   The disadvantage is that the receiver cannot report to the sender every packet it has received correctly; for example, if an intermediate packet is lost, packets received after it are not acknowledged and must be retransmitted.
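A toy illustration of cumulative acknowledgment in a continuous ARQ (Go-Back-N style) scheme. Packet numbering from 0 and the helper name are assumptions; the point is that only the last in-sequence packet is acknowledged.

```python
def cumulative_ack(last_in_order, arriving):
    """Return the highest in-sequence packet number after the packets in 'arriving' show up."""
    for no in arriving:
        if no == last_in_order + 1:   # next packet in sequence: the acknowledgment point advances
            last_in_order = no
        # out-of-order packets are not reflected in the acknowledgment at all
    return last_in_order

# After packet 0, packets 1, 2, 4, 5 arrive: only "up to 2" is acknowledged,
# so the sender cannot tell that 4 and 5 arrived and must resend from 3 onwards.
print(cumulative_ack(0, [1, 2, 4, 5]))   # -> 2
```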

 

4. Implementation of TCP reliable transmission

Each end of a TCP connection must have two windows - a send window and a receive window. TCP's reliable transport mechanism is controlled by byte sequence numbers; all TCP acknowledgments are based on sequence numbers rather than on segments.
Sent data must be retained until it is acknowledged, so that it can be retransmitted after a timeout. The send window stays in place while no new acknowledgment arrives and slides forward when a new acknowledgment is received.
The send buffer temporarily stores: data that the sending application has handed to TCP and that is ready to be sent; and data that TCP has already sent but for which no acknowledgment has been received. The receive buffer temporarily stores: data that arrived in sequence but has not yet been read by the receiving application; and data that arrived out of sequence.
Three points must be emphasized:
    1> A's send window is not always as large as B's receive window (there is always some time lag before the window advertisement reaches A).

    2> The TCP standard does not specify how to handle data that arrives out of sequence. Usually it is buffered in the receive window first, and once the missing bytes of the stream have arrived, the data is delivered to the upper-layer application process in order.
    3> TCP requires the receiver to support cumulative acknowledgment, which reduces transmission overhead.
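A byte-oriented sketch of the send window and send buffer described above: data stays buffered until acknowledged, and the window only slides forward on a new (higher) acknowledgment. The class name and numbers are illustrative only.

```python
class SendWindow:
    """Byte-oriented send window; all numbers here are illustrative."""
    def __init__(self, size):
        self.size = size          # window size in bytes (bounded by the receiver's advertised window)
        self.base = 0             # lowest unacknowledged byte; bytes below this may be dropped from the buffer
        self.next_seq = 0         # sequence number of the next byte to send

    def can_send(self, nbytes):
        return self.next_seq + nbytes <= self.base + self.size

    def on_send(self, nbytes):
        self.next_seq += nbytes   # the data stays in the send buffer for possible retransmission

    def on_ack(self, ack):
        if ack > self.base:       # a new acknowledgment: the window slides forward
            self.base = ack       # an old or duplicate ACK leaves the window where it is

w = SendWindow(1000)
w.on_send(400)
w.on_ack(400)                     # 400 bytes acknowledged: the front of the window moves to byte 400
print(w.base, w.next_seq)         # -> 400 400
```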

5. Sliding window diagram

[Figure: TCP sliding window]

The difference between TCP congestion control and flow control

     Congestion control prevents too much data from being injected into the network, so that the routers and links in the network do not become overloaded. Congestion control rests on a premise: the network must be able to carry the existing network load.

       Flow control usually refers to the control of point-to-point traffic and is an end-to-end problem. All flow control does is throttle the rate at which the sender transmits data, so that the receiver has time to receive it.

 

TCP flow control

     Flow control means keeping the sender's transmission rate from being too fast, so that the receiver has time to receive the data. Using the sliding window mechanism, flow control of the sender is easy to implement on a TCP connection.

      TCP's window unit is bytes, not segments. The sender's send window cannot exceed the receive-window value advertised by the receiver.

      As soon as one side of a TCP connection receives a zero-window notification from the other side, it starts the persistence timer. When the persistence timer expires, it sends a zero-window probe segment (carrying only 1 byte of data), and the other side reports its current window value in the acknowledgment of that probe segment.

 

TCP flow control principle

   Flow control means keeping the sending rate from being too fast, so that the receiver has time to receive. Flow control can be implemented with the sliding window mechanism.

         The principle is to use the window size field in the TCP segment header for control. The sender's send window cannot be larger than the window size returned by the receiver.

         Consider a special case: if the receiver runs out of buffer space, it sends a segment with a window size of zero. The sender then sets its send window to 0 and stops sending data. Later, when the receiver has buffer space again, it sends a segment with a non-zero window size; but if that segment is lost along the way, the sender's send window stays at zero forever, resulting in a deadlock.

         To solve this problem, TCP sets a persistence timer for each connection. As soon as one side receives a zero-window notification from the other side, it starts the timer and periodically sends a zero-window probe segment; the other side reports its current window size when acknowledging the probe. (Note: TCP stipulates that even when its window is zero, a receiver must still accept zero-window probe segments, acknowledgment segments, and segments carrying urgent data.)
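A sketch of the sender side of this persistence-timer behaviour. The 5-second probe interval and the helper functions are assumptions; real stacks also back the interval off over time.

```python
import time

PROBE_INTERVAL = 5.0   # assumed persistence-timer period

def wait_for_window(send_probe, window_from_ack):
    """Sender side: while the peer advertises a zero window, probe it periodically."""
    window = 0
    while window == 0:
        time.sleep(PROBE_INTERVAL)
        send_probe()                   # zero-window probe segment carrying a single byte of data
        window = window_from_ack()     # the probe's acknowledgment carries the peer's current window size
    return window                      # sending may resume once a non-zero window is reported
```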

 

TCP congestion control

 If, during some period of time, the demand for a certain resource in the network exceeds the portion of that resource that is available, the performance of the network deteriorates. This situation is called congestion.

Congestion Control Design

       Congestion control is difficult to design because it is a dynamic problem; in many cases the congestion control mechanism itself becomes the cause of degraded network performance or even deadlock. From the perspective of control theory, congestion control can be divided into two approaches: open-loop control and closed-loop control. Open-loop control takes all the factors related to congestion into account in advance, when the network is designed; once the system is running, no corrections are made along the way.

     Closed-loop control is based on the concept of a feedback loop and includes the following measures:

     1) Monitor the network system to detect when and where congestion occurs.

     2) Pass information about the congestion to the places where action can be taken.

     3) Adjust the operation of the network system to resolve the problem.

 

Congestion Control Method

The Internet recommended standard RFC 2581 defines four congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. We assume that:

     1) Data is sent in one direction only, while the other direction carries only acknowledgments.

     2) The receiver always has enough buffer space, so the size of the send window is determined solely by the degree of network congestion.

 

The figure below is a visual depiction of slow start and congestion avoidance. We show cwnd and ssthresh in segments, but they are actually maintained in bytes.

        The TCP Tahoe version is deprecated.

      Congestion Window Concept

       The rate at which segments are sent is determined not only by the receiver's capacity to receive, but also, from a global standpoint, by the need not to congest the network. It is governed by two state variables: the receive window and the congestion window. The receive window (rwnd), also known as the advertised window, is the most recent window value promised by the receiver based on the current size of its receive buffer; it is flow control imposed by the receiver. The congestion window cwnd is a window value set by the sender according to its own estimate of network congestion; it is flow control imposed by the sender.
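The two windows combine in the obvious way: the sender never keeps more unacknowledged data in flight than the smaller of them allows. A one-line sketch:

```python
def effective_send_window(cwnd, rwnd):
    # The sender's usable window is limited by both the receiver's advertised
    # window (flow control) and its own congestion window (congestion control).
    return min(cwnd, rwnd)
```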

   (1) Slow start principle

     1) When a host starts to send data, injecting all the bytes of a large send window into the network at once may cause congestion, because the condition of the network is not yet known.

     2) A better method is to probe first, that is, to increase the sender's congestion window gradually, from small to large.

     3) Usually, when segments first start to be sent, the congestion window cwnd is set to the value of one maximum segment size (MSS). Each time an acknowledgment for a new segment is received, the congestion window is increased by at most one MSS. When the receive window rwnd is large enough, another variable is needed to keep the growth of cwnd from causing network congestion: the slow start threshold ssthresh.

 

    (2) Congestion control

 The specific process is as follows (a sketch of it appears after the list):

      1) When the TCP connection is initialized, set the congestion window cwnd to 1.

      2) Execute the slow start algorithm: cwnd grows exponentially, until cwnd == ssthresh; then execute the congestion avoidance algorithm: cwnd grows linearly.

      3) When the network becomes congested, set ssthresh to half of the sender's window value at the moment congestion occurred, reset cwnd to 1, and continue again from step 2).
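A simplified sketch of steps 1)-3), counting cwnd in MSS-sized segments rather than bytes; the state layout and the initial ssthresh of 16 are assumptions for illustration.

```python
def on_new_ack(state):
    """Called once for each newly acknowledged segment."""
    if state["cwnd"] < state["ssthresh"]:
        state["cwnd"] += 1                     # slow start: cwnd roughly doubles every round-trip time
    else:
        state["cwnd"] += 1 / state["cwnd"]     # congestion avoidance: about one extra segment per round trip

def on_timeout(state):
    """Congestion detected through a retransmission timeout."""
    state["ssthresh"] = max(state["cwnd"] // 2, 2)   # halve the threshold (keep at least 2 segments)
    state["cwnd"] = 1                                # fall back to slow start

state = {"cwnd": 1, "ssthresh": 16}                  # assumed initial threshold of 16 segments
```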

[Figure: slow start and congestion avoidance]

    (3) Fast retransmission and fast recovery

     A TCP connection can sometimes sit idle for a long time because of a retransmission-timer timeout. Slow start and congestion avoidance alone do not handle this situation well, so the fast retransmit and fast recovery congestion control methods were proposed.

     The fast retransmit algorithm does not abolish the retransmission timer; rather, in some cases it retransmits a lost segment earlier: if the sender receives three duplicate ACKs, it concludes that the segment has been lost and retransmits it immediately, without waiting for the retransmission timer to expire.

    For example: the sender transmits M1, M2, M3, but only M1 and M3 arrive. Because M2 is missing, the receiver keeps sending duplicate acknowledgments asking for M2. When the sender has received three such duplicate acknowledgments, it concludes that M2 has been lost and starts the fast retransmission mechanism: M2 is retransmitted, other data waiting to be sent is queued, and normal transmission resumes once the fast retransmission is finished.

     The fast recovery algorithm has the following two points (a sketch of both appears after the list):

     1) When the sender receives three duplicate acknowledgments in a row from the receiver, it executes the "multiplicative decrease" algorithm and halves the slow start threshold, in order to prevent network congestion.

      2) Because the sender now believes that the network is probably not congested, it does not perform slow start; instead it sets cwnd (the congestion window) to the value of the halved slow start threshold and then executes the congestion avoidance algorithm, so that the congestion window grows linearly.
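A sketch of this duplicate-ACK handling, again counting cwnd in segments; the threshold of three duplicate ACKs is from the text above, while the state dictionary and function names are assumptions.

```python
def on_duplicate_ack(state, retransmit_lost_segment):
    state["dup_acks"] += 1
    if state["dup_acks"] == 3:                          # three duplicate ACKs: treat the segment as lost
        retransmit_lost_segment()                       # fast retransmit, without waiting for the timer
        state["ssthresh"] = max(state["cwnd"] // 2, 2)  # "multiplicative decrease": halve the threshold
        state["cwnd"] = state["ssthresh"]               # fast recovery: skip slow start and continue
                                                        # with congestion avoidance from this value

def on_fresh_ack(state):
    state["dup_acks"] = 0                               # new data acknowledged: reset the duplicate counter
```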

 

Note: retransmission mechanism

        Timeout retransmission is another important mechanism by which the TCP protocol ensures reliable data delivery. The principle is to start a timer after a piece of data is sent; if no ACK for the sent data arrives within a certain period of time, the data is retransmitted until it is sent successfully.

 

 

The packet loss situation

If the same ACK number is received repeatedly, it means that a packet has been lost, which triggers the fast retransmission mechanism.

 

        Seq: tells the receiver that the data in this segment starts at sequence number seq.

        Ack: tells the peer the sequence number that this side expects to receive next.
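A tiny worked example of the Seq/Ack relationship just defined, assuming a segment that carries 100 bytes of data starting at byte 1000:

```python
seq = 1000                           # this segment's payload starts at byte 1000
payload_len = 100                    # it carries bytes 1000..1099
ack_from_receiver = seq + payload_len
print(ack_from_receiver)             # -> 1100, i.e. "I expect byte 1100 next"
```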

When the sending host receives 3 duplicate ACKs from the receiver, it assumes that the packet was indeed lost in transit and immediately performs a fast retransmission. Once fast retransmission is triggered, the other packets waiting to be transmitted are queued and their transmission is suspended until the fast-retransmitted packet has been sent.

The process is shown in the figure below:

[Figure: fast retransmission process]

TCP three-way handshake and four-way wave

 

There are 6 types of TCP flags (their standard bit values are sketched after this list):

SYN (synchronize: used to establish a connection)

ACK (acknowledgment)

PSH (push: deliver buffered data to the application promptly)

FIN (finish: used to close a connection)

RST (reset the connection)

URG (urgent data)
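For reference, these flags occupy individual bits of the flags field in the TCP header. The bit values below are the standard ones; the small usage example is illustrative only.

```python
# Standard TCP header flag bits (the low six bits of the flags field).
FIN = 0x01   # finish: the sender has no more data to send
SYN = 0x02   # synchronize sequence numbers to open a connection
RST = 0x04   # reset the connection
PSH = 0x08   # push buffered data up to the receiving application promptly
ACK = 0x10   # the acknowledgment number field is valid
URG = 0x20   # the urgent pointer field is valid

flags = SYN | ACK                              # e.g. the second segment of the three-way handshake
print(bool(flags & SYN), bool(flags & FIN))    # -> True False
```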

 

Recommendation

Finally, I recommend a good TCP/IP protocol stack: LwIP, a portable protocol stack.

My copy of the source code is on GitHub: https://github.com/manmao/lwip_contrib.git

The LwIP official website: https://savannah.nongnu.org/projects/lwip/
