TCP congestion control and flow control

  • congestion control
  • flow control

congestion control

a) What congestion is
Link bandwidth, the buffers in switching nodes, and processing capacity are collectively called network resources. When demand for these resources exceeds supply over some period of time, network performance degrades; this condition is called congestion. It is much like road traffic: no matter how many lanes there are, jams still appear at peak hours or after an accident. A city can limit car traffic with odd/even license-plate rules, but network users obviously cannot be restricted that way. When the number of users cannot be limited and resources are finite, the only option is to lower service quality so that every user can still be served.
We need to understand that congestion control is a dynamic process. Static measures such as simply adding network resources cannot effectively control a dynamic situation; by the same analogy, adding lanes does not by itself keep rush-hour traffic flowing.
Congestion control is also a global process involving all hosts and routers, unlike the flow control discussed below, which is point-to-point control between one sender and one receiver.
b) Congestion Control
TCP congestion control consists of four algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
The sender maintains a state variable called the congestion window, whose size depends on the degree of congestion in the network and changes dynamically. The sender sets its send window equal to the congestion window, but because the receiver's receive window must also be considered, the send window may end up smaller than the congestion window.
In short, the congestion window reflects the carrying capacity of the network, but the amount you actually send must also respect your receiver's capacity: send window = min(congestion window, receiver's window).
a) Slow start
At the beginning, the sender does not know how heavily network resources are currently being used, so a probing process is required. Although it is called slow start, the growth is actually exponential, not slow; the "slow" refers only to the low starting point, a window of 1.
For example, the window grows 1, 2, 4, 8, 16, ..., doubling each round trip. If the bandwidth can carry m segments, roughly log2(m) round trips suffice to reach it.
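The doubling above can be sketched in a few lines (an illustrative Python sketch counting round trips, with the window measured in whole segments; the function name is invented for this example):

```python
def slow_start_rounds(capacity_in_segments):
    """Count the round trips slow start needs before the congestion
    window reaches the given capacity, doubling cwnd each RTT."""
    cwnd = 1          # congestion window, in segments; the low start
    rounds = 0
    while cwnd < capacity_in_segments:
        cwnd *= 2     # exponential growth: 1, 2, 4, 8, 16, ...
        rounds += 1
    return rounds

# With a capacity of 16 segments: 1 -> 2 -> 4 -> 8 -> 16, i.e. 4 RTTs.
```

Despite the exponential curve, only about log2(m) round trips are needed, which is why slow start claims available bandwidth quickly.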
b) Congestion avoidance
Obviously, if slow start kept growing, it would congest the network in a very short time; slow start only ensures that available resources are claimed quickly. TCP therefore sets a threshold. When the window reaches this threshold, growth switches to linear: this is the congestion avoidance phase. In many TCP implementations the initial threshold is 65535 bytes. Linear growth then slowly approaches the network's optimal operating point.
Remark: after the threshold is reached, segments are sent tentatively; if the other side does not respond, the network is judged to be in poor condition, the threshold is lowered, and the window may even restart from 1.
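The two growth regimes can be captured in one helper (an illustrative sketch; `ssthresh` stands for the threshold, and the units are segments rather than the byte counts a real stack uses):

```python
def next_cwnd(cwnd, ssthresh):
    """One round trip of window growth: exponential below the
    threshold (slow start), linear at or above it (congestion
    avoidance)."""
    if cwnd < ssthresh:
        return cwnd * 2   # slow start: double per RTT
    return cwnd + 1       # congestion avoidance: +1 segment per RTT
```

Starting from cwnd = 1 with ssthresh = 8, successive round trips give 1, 2, 4, 8, 9, 10, ...: fast until the threshold, then a cautious linear climb.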
c) Fast retransmit
When the sender receives three duplicate acknowledgments in a row, it should immediately retransmit the segment the other side has not received, without waiting for the retransmission timer to expire. This is clearly an optimization of transmission efficiency.
d) Fast recovery
Fast recovery does not exist on its own; it is the follow-up to fast retransmit. As described above, three duplicate acknowledgments in a row trigger a retransmission; duplicate acknowledgments also signal that transmission efficiency has suffered and packets are being dropped.
Fast recovery is the algorithm that readjusts the congestion window after such a retransmission.
1) When the sender receives three duplicate acknowledgments, the window is halved; this is the multiplicative-decrease step.
2) Because three duplicate acknowledgments were still able to arrive, the network is judged not to be seriously congested, so the sender enters the congestion avoidance phase rather than restarting slow start.
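Steps 1) and 2) together can be sketched as a small duplicate-ACK counter (a toy model, not a real TCP stack; the class and field names are invented for illustration):

```python
class FastRetransmit:
    """Toy sketch: count duplicate ACKs and, on the third one,
    retransmit early and apply fast recovery."""

    def __init__(self, cwnd, ssthresh):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self.last_ack = None
        self.dup_count = 0
        self.retransmitted = []   # segments retransmitted before RTO

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == 3:               # third duplicate ACK
                self.retransmitted.append(ack_no)  # resend the missing segment now
                self.ssthresh = max(self.cwnd // 2, 2)  # multiplicative decrease
                self.cwnd = self.ssthresh         # fast recovery: skip slow start
        else:
            self.last_ack = ack_no                # fresh ACK resets the count
            self.dup_count = 0
```

Feeding it an ACK for segment 5 followed by three duplicates triggers one early retransmission and halves the window, after which growth would continue linearly in congestion avoidance.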

How to detect congestion?
TCP sets a retransmission timer (RTO) for each segment; when the RTO expires without an acknowledgment arriving, the segment is retransmitted. An RTO expiry with no acknowledgment at all, and not even the three duplicate acknowledgments that trigger fast retransmit, lets the sender infer that the network is congested and packets are being lost.
The reaction is again multiplicative decrease, but because the loss was detected by a timeout, the window is reset to 1 and slow start begins again.
The overall design principle of TCP's congestion window is additive increase, multiplicative decrease (AIMD).
Why does receiving an ACK increase the congestion window by 1?
Over a short interval, the number of packets in flight in the network is roughly constant. Receiving an ACK means an old packet has left the network, so a new packet can take its place; the congestion window can therefore be increased by 1.
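The two loss signals and their different reactions can be contrasted in one small sketch (illustrative only, in segment units; the function name is invented):

```python
def on_loss(cwnd, ssthresh, timeout):
    """Loss reaction sketch: an RTO timeout is treated as heavy
    congestion (restart slow start from 1), while three duplicate
    ACKs are treated as light congestion (halve and keep going).
    Both apply the multiplicative-decrease step to the threshold."""
    ssthresh = max(cwnd // 2, 2)        # multiplicative decrease either way
    cwnd = 1 if timeout else ssthresh   # timeout: back to slow start
    return cwnd, ssthresh
```

The asymmetry is the point: duplicate ACKs prove the network is still delivering packets, so the sender keeps half its window; a silent timeout proves nothing is getting through, so it starts over.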

flow control

a) Sliding windows
To improve utilization of the communication pipe, TCP does not use a stop-and-wait protocol; it adopts continuous ARQ, which sends several packets back to back before pausing for acknowledgments, instead of waiting after every single packet.
Each TCP endpoint maintains two sliding windows, one for sending and one for receiving.
b) Flow control
The purpose of flow control is to keep the sender from transmitting so fast that the receiver cannot keep up. From our understanding of sliding windows, the mechanism follows directly: the sender's window must not be larger than the window the receiver advertises.
Special case: if the receiver runs out of buffer space, it advertises a window of size zero. By the rules, the sender then sets its window to 0 and stops sending. Later, when the receiver has buffer space again, it sends a segment advertising a non-zero window; but if that segment is lost along the way, the sender's window stays at 0, it can never send data, and the connection deadlocks.
To break this deadlock, the sender must have a way to learn when it may resume after receiving a zero-window advertisement. TCP therefore specifies that a sender that has received a zero-window segment periodically sends a zero-window probe segment (even with a zero window, the receiver must still accept zero-window probes, acknowledgment segments, and urgent data).
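The probing loop can be sketched as follows (a toy model in which `advertised_windows` stands in for the receiver's reply to each probe; a real sender also runs a persist timer and backs it off between probes):

```python
def probe_until_open(advertised_windows):
    """Zero-window probe sketch: keep sending probes while the
    receiver advertises a zero window; stop when a probe's reply
    advertises a non-zero window. Returns (probes_sent, window).
    Because every probe elicits a fresh window advertisement, a
    lost window-update segment cannot deadlock the connection."""
    probes = 0
    for window in advertised_windows:
        if window > 0:
            return probes, window   # window reopened: resume sending
        probes += 1                 # still zero: schedule another probe
    return probes, 0                # window never reopened in this trace
```

For example, if the first two probe replies still advertise 0 and the third advertises 500 bytes, the sender resumes after two probes with a 500-byte window.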
c) Transmission efficiency and Nagle's algorithm
TCP traffic divides into interactive data and bulk data. Interactive data, typically commands, is small, so sending each tiny piece in its own segment makes very poor use of the network.
For such data TCP uses Nagle's algorithm: a connection may have at most one unacknowledged small segment outstanding, and no further small segments are sent until the acknowledgment for that segment arrives.
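Nagle's rule reduces to a small send-or-buffer decision (an illustrative sketch; the parameter names are invented, and `mss` is the maximum segment size):

```python
def nagle_should_send(data_len, mss, unacked_bytes):
    """Nagle's decision sketch: send immediately if the write fills
    a segment or nothing is in flight; otherwise buffer the small
    write until the outstanding data is acknowledged."""
    if data_len >= mss:
        return True      # full-sized segment: always send
    if unacked_bytes == 0:
        return True      # nothing in flight: one small segment may go
    return False         # small write with data in flight: coalesce
```

The effect is self-clocking: on a fast network ACKs return quickly and small writes flow almost immediately; on a slow network many small writes coalesce into one larger segment while waiting.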
A related problem arises on the receiving side. If the receiver's buffer is full and the interactive application reads only one byte at a time, each read triggers an acknowledgment advertising a window of just 1 byte. The effective window then shrinks to 1 and efficiency collapses; this phenomenon is called silly window syndrome.
To solve it, the natural idea is to send the window update only after more of the buffer has been freed, so that the advertised window is larger; concretely, the receiver waits for a period of time before sending the acknowledgment (delayed acknowledgment).

For a detailed explanation of TCP, please refer to the following blog.
https://blog.csdn.net/rock_joker/article/details/76769404
