A plain-language introduction to the QUIC protocol (1)

One: The origins of the QUIC protocol

        QUIC is a general-purpose transport-layer network protocol originally designed at Google by Jim Roskind. It was first implemented and deployed in 2012, and was publicly announced in 2013 as the experiment widened and described to the IETF. Although it spent a long time in the Internet-Draft stage, more than half of all connections from the Chrome browser to Google's servers already use QUIC. Microsoft Edge, Firefox, and Safari all support it, though it is not enabled by default. The standardized version was officially published as RFC 9000.

        Before the QUIC standard was finalized, each vendor could build its own QUIC variant on top of UDP: abroad there were Google's gquic_q023, gquic_q043, and gquic_q046 and Facebook's mvfst, and domestically Tencent's WeChat QUIC version, among others. The early adopters who customized QUIC were essentially all large companies, because only large companies had business needs that justified that level of customization.

        In June 2015, an Internet Draft of the QUIC specification was submitted to the IETF for standardization, and the QUIC working group was established in 2016. In October 2018, the IETF's HTTP working group and QUIC working group jointly decided to call the HTTP mapping over QUIC "HTTP/3", paving the way for it to become a worldwide standard. In May 2021, the IETF published RFC 9000, the standardized version of the QUIC specification. The pre-standard QUIC versions will not disappear because of this; they are expected to coexist with the standard for quite some time, until the standardized version eventually takes over.

Two: An introduction to the QUIC protocol

        QUIC stands for Quick UDP Internet Connections. As the name suggests, the key word is quick: it is a new low-latency Internet transport protocol built on top of UDP.

        The TCP and UDP transport-layer protocols have been developed over many years; their protocol stacks are integrated into the operating system and are extremely mature. So why did Google strike out on its own and develop a transport protocol on top of UDP? What shortcomings of the existing protocols does QUIC solve, and what advantages does QUIC itself offer?

        As we all know, since HTTP came to dominate the web it has gone through several major versions, from the original HTTP/0.9 to HTTP/1.x, HTTP/2, and the latest HTTP/3. Before HTTP/3, every version of HTTP was an upper-layer extension and optimization built on the TCP protocol. With the growth of the mobile Internet, network interaction scenarios have become richer and more latency-sensitive, and the inherent performance bottlenecks and shortcomings of traditional TCP increasingly fail to meet the demands of certain scenarios, for the following reasons:

1. Latency cost of the mandatory handshake

        The three-way handshake TCP uses to establish a connection inevitably costs 1 RTT (round-trip time, which can be understood as the network latency). On top of that, the TLS encryption protocol requires both sides to exchange cryptographic parameters, which takes another 2 RTTs in TLS 1.2. So establishing a complete HTTP/2 transmission link requires 3 RTTs. For live streaming or scrolling through short videos on Douyin, where the first frame is expected within a second, this handshake delay is far too large.
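        To make the cost concrete, here is a back-of-the-envelope sketch (the 50 ms RTT is purely an assumption, not a measurement) that simply multiplies the round trips described above by an assumed network latency:

```python
# Back-of-the-envelope setup cost, assuming a purely hypothetical 50 ms round-trip time.
RTT_MS = 50

setups = {
    "TCP + TLS 1.2 (HTTP/2)":     1 + 2,  # TCP three-way handshake (1 RTT) + TLS 1.2 handshake (2 RTT)
    "QUIC, first connection":     1,      # transport and TLS 1.3 handshake combined
    "QUIC, reconnection (0-RTT)": 0,      # cached parameters, request sent immediately
}

for name, rtts in setups.items():
    print(f"{name:28}: {rtts} RTT = {rtts * RTT_MS} ms before the request can be sent")
```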

2. Head-of-line blocking under multiplexing

        In HTTP/1.0 and HTTP/1.1, the next request can only be sent after the previous one has returned, so bandwidth is not fully utilized and later requests are blocked. (HTTP/1.1 tried to improve this with pipelining, but its inherently FIFO (first-in, first-out) mechanism means a request can only complete after the request ahead of it completes, which still produces head-of-line blocking and does not fundamentally solve the problem.)
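        A toy model of this FIFO constraint (the response timings are made up): even though responses 2 and 3 are ready almost immediately, pipelining forces them to wait behind the slow first response.

```python
# Toy model of HTTP/1.1 pipelining: responses must come back in request order (FIFO).
ready_at = {"req1": 900, "req2": 50, "req3": 60}   # ms until each response is ready (hypothetical)

finish, prev_done = {}, 0
for req in ["req1", "req2", "req3"]:               # order sent == order answered
    prev_done = max(prev_done, ready_at[req])      # cannot be delivered before earlier responses
    finish[req] = prev_done

print(finish)   # {'req1': 900, 'req2': 900, 'req3': 900} -- req2/req3 are head-of-line blocked
```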

        Building on the previous version, HTTP/2 addressed these scenarios by introducing multiplexing. HTTP/2 redefines the underlying mapping of HTTP semantics, allowing bidirectional request and response data flows on the same connection: a single domain occupies only one TCP connection, and data streams (Streams) use frames as the basic protocol unit. This avoids the latency of repeatedly creating connections, reduces memory consumption, improves performance, and lets requests proceed in parallel, so a slow request or an earlier request does not block the responses to other requests.
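        The snippet below is only a toy illustration of that idea (a real HTTP/2 frame has a binary header; here a frame is reduced to a (stream_id, payload) tuple): frames from different streams are interleaved on one connection and demultiplexed by stream ID on the receiving side.

```python
# Toy illustration of HTTP/2-style multiplexing: frames from several streams
# share one connection and are reassembled per stream by their stream ID.
from collections import defaultdict

# Frames as they might arrive interleaved on a single TCP connection.
frames = [
    (1, "GET /index.html "), (3, "GET /app.js "),
    (1, "headers done"),     (3, "headers done"),
]

streams = defaultdict(list)
for stream_id, payload in frames:
    streams[stream_id].append(payload)   # demultiplex by stream ID

for stream_id, parts in streams.items():
    print(f"stream {stream_id}: {''.join(parts)}")
```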

        Multiplexing fixes the shortcomings of pipelining in HTTP/1.1, but it only shows its full advantage on good networks where packet loss is infrequent. Because all streams share a single TCP connection, if a packet in the middle of one stream is lost, the data of all subsequent streams is blocked and cannot continue until the lost data is retransmitted; even if the receiver has already received the later packets, TCP will not hand them up to the application layer. With HTTP/1.1, multiple TCP connections can be opened, so the same loss affects only one of them while the remaining connections keep transmitting normally. As a result, on a poor network HTTP/2 can perform worse than HTTP/1.1.
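        A minimal sketch of why this happens (segment numbers and contents are invented for illustration): TCP delivers bytes strictly in order, so a receiver that is missing segment 2 cannot hand the already-arrived segments 3 and 4 to the application.

```python
# Simplified model of TCP's in-order delivery: one missing segment blocks everything behind it.
received = {1: "stream A frame", 3: "stream B frame", 4: "stream C frame"}  # segment 2 was lost
next_expected = 1

delivered = []
while next_expected in received:        # TCP only releases a contiguous prefix to the application
    delivered.append(received[next_expected])
    next_expected += 1

print("handed to the application:", delivered)                                    # segment 1 only
print("arrived but blocked:", [seq for seq in received if seq >= next_expected])  # 3 and 4 wait for 2
```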

3. TCP protocol updates lag behind

        The TCP protocol is implemented inside the operating system kernel, which makes it hard for updates to TCP to be rolled out quickly: you cannot ask users to upgrade their operating system just to get a new TCP feature. QUIC instead implements packet-loss recovery, congestion control, encryption and decryption, multiplexing, and other functions on top of UDP at the application layer. It brings TCP's advantages over UDP into an application-layer implementation, which not only optimizes handshake latency but also completely avoids the problem of lagging kernel protocol updates. With the deployment problem solved, the protocol can evolve far more vigorously.

To address the above shortcomings of TCP, QUIC makes improvements across the board:

1. Low-latency connection establishment

        QUIC connection establishment takes roughly 0 to 1 RTT, thanks to optimizations in two areas:

        1) The transport layer uses UDP, which removes the 1-RTT delay of TCP's three-way handshake.

        2) The encryption protocol uses the latest version of TLS, TLS 1.3. Unlike TLS 1.1 and 1.2, TLS 1.3 allows the client to start sending application data before the TLS handshake has fully completed, and supports both 1-RTT and 0-RTT handshakes.

        With QUIC, the handshake negotiation for a client's very first connection takes 1 RTT. When the client reconnects, it can use the previously negotiated, cached information to resume the TLS connection, which takes 0 RTT. So QUIC connections are mostly established in 0 RTT and only occasionally in 1 RTT, a big advantage over the 3 RTTs HTTPS needs to establish a connection.
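        A rough sketch of the resumption idea (this is not a real QUIC or TLS implementation; the cache structure and field names are invented for illustration): the client remembers the ticket and parameters from the first 1-RTT handshake and, on the next connection, sends its request together with the very first flight.

```python
# Illustrative-only model of 1-RTT vs 0-RTT connection establishment.
session_cache = {}  # hypothetical client-side cache: server name -> resumption data

def connect(server, request):
    if server in session_cache:
        # 0-RTT: reuse the cached ticket/params and send the request in the first flight.
        print(f"0-RTT to {server}: sending '{request}' with the initial handshake packet")
        return 0
    # 1-RTT: full handshake first, then cache what the server returned for next time.
    print(f"1-RTT to {server}: handshake, then send '{request}'")
    session_cache[server] = {"ticket": "opaque-session-ticket", "params": "..."}
    return 1

connect("video.example.com", "GET /first-frame")   # first visit: 1 RTT
connect("video.example.com", "GET /next-frame")    # revisit: 0 RTT
```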

2. Connection Migration

        A TCP connection is identified by a four-tuple (source IP, source port, destination IP, destination port). What is connection migration? It means that when any element of the tuple changes, the connection is still maintained and the business logic is not interrupted. The main concern here is the client side, because the client is uncontrollable and its network environment changes frequently, while the server's IP and port are generally fixed. For example, you are on a WeChat call with your long-distance girlfriend on the way home from work; as you walk up to your door, your phone automatically connects to the home Wi-Fi and the call is briefly interrupted. That is because when switching between Wi-Fi and a 4G/5G mobile network, the client's IP changes and the TCP connection with the server has to be re-established.

        QUIC supports connection migration. It uses a Connection ID (usually a 64-bit random number) to identify the connection, so even if the source IP or port changes, as long as the Connection ID stays the same, the connection is maintained and no reconnection occurs.
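        The difference can be sketched as a change of lookup key (a toy model, not real QUIC packet handling): a TCP-style table keyed by the four-tuple loses the connection when the client's address changes, while a QUIC-style table keyed by Connection ID still finds it.

```python
# Toy model: how the server finds an existing connection after the client's address changes.
tcp_conns  = {("1.2.3.4", 5555, "9.9.9.9", 443): "session state"}   # keyed by the 4-tuple
quic_conns = {0x7b2d9f31aa10c4e8: "session state"}                  # keyed by Connection ID

# Client switches from 4G to home Wi-Fi: source IP/port change, the Connection ID does not.
new_tuple = ("192.168.1.20", 6666, "9.9.9.9", 443)
cid = 0x7b2d9f31aa10c4e8

print("TCP lookup after migration: ", tcp_conns.get(new_tuple))   # None -> must reconnect
print("QUIC lookup after migration:", quic_conns.get(cid))        # still finds the session
```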

3. No head-of-line blocking

        QUIC supports multiplexing as well, but unlike HTTP/2 over TCP, QUIC's streams are completely isolated from one another and have no ordering dependence between them. If packet loss occurs on one stream, it does not block the transmission or application-layer processing of the other streams' data, so this design does not cause head-of-line blocking.
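        Continuing the earlier TCP sketch, here each stream keeps its own delivery order (again only a toy model): losing a packet on stream 1 only holds back stream 1, while stream 3's data is handed to the application immediately.

```python
# Toy model of QUIC's per-stream ordering: a gap on one stream does not block others.
received = {  # stream_id -> {offset: data}; stream 1 is missing offset 2
    1: {1: "A1", 3: "A3"},
    3: {1: "B1", 2: "B2"},
}

for stream_id, chunks in received.items():
    delivered, offset = [], 1
    while offset in chunks:          # release only this stream's contiguous prefix
        delivered.append(chunks[offset])
        offset += 1
    print(f"stream {stream_id}: delivered {delivered}, still waiting from offset {offset}")
```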

4. Flexible congestion control

        QUIC's transmission control no longer relies on the kernel's congestion control algorithm; it is implemented at the application layer, which means different congestion control algorithms and parameters can be implemented and configured for different business scenarios. Users can plug in congestion control algorithms such as Cubic, BBR, or Reno, or customize a private algorithm for a specific scenario.
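        A minimal sketch of what "pluggable" means in practice (the interface and class names are invented for illustration and do not come from any particular QUIC library): the sender only talks to an abstract controller, so swapping a Reno-style algorithm for a custom one is just a configuration choice.

```python
# Illustrative plug-in congestion control: the sender depends only on an abstract interface.
class CongestionController:
    def on_ack(self, acked_bytes: int): ...
    def on_loss(self): ...

class RenoLike(CongestionController):
    def __init__(self):
        self.cwnd = 10 * 1200                        # congestion window in bytes
    def on_ack(self, acked_bytes):
        self.cwnd += acked_bytes                     # (very) simplified window growth
    def on_loss(self):
        self.cwnd = max(self.cwnd // 2, 2 * 1200)    # multiplicative decrease on loss

class FixedRate(CongestionController):
    """A made-up 'private' algorithm: keep a constant window for a latency-critical feed."""
    def __init__(self):
        self.cwnd = 64 * 1200
    def on_ack(self, acked_bytes):
        pass
    def on_loss(self):
        pass

def make_sender(cc: CongestionController):           # the application picks the algorithm per scenario
    return {"congestion_controller": cc}

sender = make_sender(RenoLike())                     # or make_sender(FixedRate()) for a custom policy
```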

5. Forward Error Correction (FEC)

        QUIC supports forward error correction. In weak-network environments with packet loss, dynamically adding some FEC packets can reduce the number of retransmissions and improve transmission efficiency.
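        A tiny XOR-parity sketch shows the basic principle (real FEC schemes are more sophisticated; this is only the simplest group code): the sender adds one parity packet per group of packets, and the receiver can rebuild any single lost packet in the group without waiting for a retransmission.

```python
# Minimal XOR-based forward error correction over a group of equal-length packets.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt-1 ..", b"pkt-2 ..", b"pkt-3 .."]          # data packets of equal length
parity = b"\x00" * len(group[0])
for pkt in group:
    parity = xor_bytes(parity, pkt)                      # parity = p1 ^ p2 ^ p3, sent alongside the data

# Suppose pkt-2 is lost in transit; the receiver still has pkt-1, pkt-3 and the parity packet.
recovered = xor_bytes(xor_bytes(parity, group[0]), group[2])
print(recovered)                                         # b'pkt-2 ..' rebuilt, no retransmission needed
```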

Origin blog.csdn.net/qq_27071221/article/details/132777852