Why does TCP have poor transmission efficiency in a network with high latency and packet loss?

Explanation: A student asked me by private message why TCP has poor transmission efficiency on networks with high latency and packet loss. Plenty of material on this can be found with a search; below is a translation of the first chapter of an IBM Aspera FASP technical white paper, for reference.

In this digital world, the fast and reliable movement of digital data, including massive data transfers on a global scale, has become critical to business success in almost all industries.

However, the traditional TCP protocol has an inherent performance bottleneck, which is especially pronounced on high-bandwidth networks with high round-trip time (RTT) and packet loss.

TCP's inherent transport performance bottleneck is primarily caused by its additive-increase/multiplicative-decrease (AIMD) congestion avoidance algorithm, which slowly probes the network's available bandwidth, increasing the transmission rate until packet loss is detected and then multiplicatively cutting the transmission rate.
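The AIMD rule described above can be sketched in a few lines (a simplified per-RTT model for illustration, not the actual kernel implementation; the starting window and the timing of the loss event are arbitrary):

```python
def aimd_update(cwnd, loss_detected, mss=1.0):
    """One round trip of AIMD congestion avoidance (simplified).

    cwnd: congestion window, measured in segments.
    No loss -> additive increase: grow by one segment per RTT.
    Loss    -> multiplicative decrease: halve the window.
    """
    if loss_detected:
        return max(cwnd / 2.0, 1.0)  # multiplicative decrease
    return cwnd + mss                # additive increase

# The window climbs linearly, then collapses whenever a loss is detected:
cwnd = 10.0
history = []
for rtt in range(8):
    cwnd = aimd_update(cwnd, loss_detected=(rtt == 4))
    history.append(cwnd)
print(history)  # linear climb to 14, halved to 7, then climbing again
```

This sawtooth shape is why a single loss event is so costly: the sender needs many loss-free round trips just to regain the rate it had before the loss.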

This congestion algorithm was designed to prevent congestion collapse of the Internet as a whole. In the early Internet, data traveled over fixed, wired networks, so packet loss during transmission could safely be treated as a sign that the channel was congested. Under today's network conditions, however, wireless links such as Wi-Fi and mobile cellular networks lose packets as a matter of course, and this loss, although unrelated to congestion, still causes TCP to cut its transmission rate.

In fact, the TCP AIMD algorithm itself also causes packet loss and thereby creates network bottlenecks. By ramping up the transfer rate until a loss occurs, AIMD probes the available bandwidth too aggressively and induces loss. In some cases, the loss induced by this aggressive probing actually exceeds the loss from other causes (such as the physical medium or cross-traffic bursts), turning what would otherwise be a lossless communication channel into an unreliable channel with an unpredictable loss ratio.

The loss-based congestion control in TCP AIMD has a fatal impact on end-to-end network throughput: when a packet is lost and must be retransmitted, TCP drastically reduces, or even stops, delivery of data to the receiving application until the retransmission is acknowledged. The transmission performance of every network application is affected by TCP's congestion algorithm, but the effect is especially severe for bulk data transfer.
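A toy simulation makes the relationship between loss rate and sustained rate concrete (a per-RTT loss model with assumed parameters, not real TCP behavior):

```python
import random

def average_cwnd(loss_rate, rtts=10_000, seed=1):
    """Average congestion window of a simplified AIMD sender when each
    round trip independently experiences loss with probability loss_rate."""
    rng = random.Random(seed)
    cwnd, total = 10.0, 0.0
    for _ in range(rtts):
        if rng.random() < loss_rate:
            cwnd = max(cwnd / 2.0, 1.0)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                  # additive increase otherwise
        total += cwnd
    return total / rtts

# Higher loss rate -> lower sustained window -> lower throughput:
for p in (0.001, 0.01, 0.1):
    print(f"loss={p}: average cwnd ~ {average_cwnd(p):.0f}")
```

Since throughput is roughly cwnd × MSS / RTT, even a modest loss rate pins the sender far below the capacity of the link.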

This coupling of reliability (retransmission) and congestion control in TCP imposes a severe artificial throughput penalty on file transfers, and the resulting poor performance of traditional TCP-based file transfer protocols (e.g., FTP, HTTP, CIFS, NFS over WAN) is plainly visible.

The bar graph below shows the maximum throughput achievable for file transfers under various packet-loss and network-latency conditions on an OC-1 (51 Mbps) link using TCP. TCP connection throughput has a strict theoretical limit that depends only on the network RTT and the packet loss rate. Note that adding more bandwidth does not change the effective TCP throughput: file transfer speeds do not improve, and the expensive extra bandwidth sits underutilized.
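That theoretical limit is commonly approximated by the Mathis formula, rate ≈ (MSS / RTT) × C / √p with C ≈ 1.22; note that link capacity does not appear in it at all. A quick calculation (the MSS, RTT, and loss values here are illustrative):

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis approximation of steady-state TCP throughput in Mbps:
    rate <= (MSS / RTT) * C / sqrt(p) -- independent of link capacity."""
    return (mss_bytes * 8 / 1e6) * c / (rtt_s * math.sqrt(loss_rate))

# 1460-byte MSS, 100 ms RTT, 1% loss: about 1.4 Mbps,
# far below the 51 Mbps an OC-1 link can carry.
print(f"{mathis_throughput_mbps(1460, 0.100, 0.01):.2f} Mbps")
```

Upgrading the link changes none of the three inputs, which is exactly why buying more bandwidth does not speed up the transfer.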

[Figure: End-to-end throughput of TCP under different packet loss rates and delays]
