[Illustrated] The Evolution from HTTP/1.1 to HTTP/2


HTTP/1.1

HTTP/1.1 performance improvements over HTTP/1.0:

  • Persistent TCP connections (keep-alive) eliminate the overhead of the short-lived connections HTTP/1.0 used, where every request had to open a new TCP connection (see the sketch after this list).

  • It supports pipelined transmission: the second request can be sent as soon as the first one has gone out, without waiting for the first response to come back, which reduces overall response time.
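To make the persistent-connection point concrete, here is a minimal sketch in Go (the URL is a placeholder): the default http.Client keeps the TCP connection alive, so consecutive requests reuse it instead of opening a new one each time. Note that Go's client does not use HTTP/1.1 pipelining; the sketch only illustrates connection reuse.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Minimal sketch: Go's http.Client keeps the underlying TCP connection alive
// (HTTP/1.1 persistent connections) by default, so the second request can
// reuse the connection opened by the first instead of paying a new handshake.
func main() {
	client := &http.Client{}
	for i := 1; i <= 2; i++ {
		resp, err := client.Get("https://example.com/") // placeholder URL
		if err != nil {
			panic(err)
		}
		// Draining and closing the body lets the connection return to the pool.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		fmt.Println("request", i, "status:", resp.Status)
	}
}
```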

HTTP/2

HTTP/1.1 performance issues

  • Latency is hard to reduce further. Although network bandwidth keeps increasing, latency bottoms out once it has dropped to a certain point; in other words, the lower bound on latency has effectively been reached;

  • Concurrent connections are limited. Google Chrome, for example, allows at most 6 concurrent connections per domain, and each connection has to pay the TCP and TLS handshake cost as well as the ramp-up of TCP slow start;

  • Head-of-line blocking. On the same connection, the next transaction can only start after the current HTTP transaction (request plus response) has completed;

  • Headers are large and repetitive. Because HTTP is stateless, every request must carry its full set of headers, and headers that carry cookies can be especially large;

  • Server push is not supported. When the client needs to be notified of something, it can only poll the server repeatedly on a timer, which wastes bandwidth and server resources.

Although HTTP/1.1 has many optimization techniques, the results are still unsatisfactory, because they all work around the "outside" of the protocol. Some key aspects simply cannot be optimized from the outside: the request-response model, the huge and repetitive headers, the cost of establishing concurrent connections, and the server's inability to push. Changing these requires redesigning the protocol itself, which is how HTTP/2 came about.

Optimizations in HTTP/2:

1. Header compression

An HTTP message consists of a header and a body. For the body, HTTP/1.1 can use the Content-Encoding header field to specify a compression method such as gzip and save bandwidth. The header part of the message, however, gets no such optimization.
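As a concrete illustration of body compression, here is a minimal Go handler sketch that gzip-compresses the response body and declares it with Content-Encoding (a real handler would also check the client's Accept-Encoding header; paths and text are made up):

```go
package main

import (
	"compress/gzip"
	"log"
	"net/http"
)

// Sketch: an HTTP/1.1 handler that gzip-compresses the body and advertises it
// via the Content-Encoding header. The header block itself is still sent
// uncompressed, which is the gap HTTP/2's HPACK addresses.
func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Encoding", "gzip")
	w.Header().Set("Content-Type", "text/plain")
	gz := gzip.NewWriter(w)
	defer gz.Close()
	gz.Write([]byte("hello, this body is gzip-compressed"))
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```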

Problems with the header part of HTTP/1.1 messages:

  • It contains many fixed fields, such as Cookie, User-Agent, and Accept. Together they can add up to hundreds or even thousands of bytes, so compression is needed;

  • A large number of requests and responses carry identical field values, so redundant data occupies significant bandwidth and duplication should be avoided;

  • Fields are ASCII-encoded, which is easy for humans to read but inefficient to transmit and parse, so a binary encoding is preferable;

HTTP/2 overhauls the header handling to solve these problems.

Instead of compressing headers with a general-purpose method like gzip, HTTP/2 defines the HPACK algorithm. HPACK has three main components:

  • static dictionary;

  • dynamic dictionary;

  • Huffman encoding (compression algorithm);

Both the client and the server build and maintain the dictionary, which replaces repeated strings with short index numbers; the remaining data is then Huffman-encoded, achieving compression ratios of roughly 50% to 90%.
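Here is a rough sketch using the HPACK implementation from Go's golang.org/x/net/http2/hpack package (header values are made up): encoding the same header list twice shows how the dynamic table shrinks the second copy down to a handful of index bytes.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf) // maintains the dynamic table on the sender side

	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: "user-agent", Value: "demo-client/1.0"}, // made-up value
	}

	// Encode the same header list twice; the second round mostly emits
	// short references into the dynamic table built during the first round.
	for round := 1; round <= 2; round++ {
		start := buf.Len()
		for _, hf := range headers {
			_ = enc.WriteField(hf)
		}
		fmt.Printf("round %d: %d bytes\n", round, buf.Len()-start)
	}

	// Decoder side: rebuild the header fields from the compressed bytes.
	dec := hpack.NewDecoder(4096, func(hf hpack.HeaderField) {
		fmt.Printf("decoded %s: %s\n", hf.Name, hf.Value)
	})
	_, _ = dec.Write(buf.Bytes())
}
```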

2. Binary framing

A major change in HTTP/2 is that it replaces HTTP/1's text format with a binary format, which greatly improves transmission efficiency; binary data can also be parsed efficiently with bit operations.

You can see the difference between HTTP/1.1 responses and HTTP/2 responses from the image below:
[Figure: HTTP/1.1 text response vs. HTTP/2 binary-framed response]

HTTP/2 splits the response message into frames: HEADERS (the header part) and DATA (the message payload) in the figure are frame types. In other words, an HTTP response is transmitted as two binary-encoded frames.

The structure of an HTTP/2 binary frame is as follows:

[Figure: HTTP/2 frame header structure]

The byte after the frame length is the frame type. HTTP/2 defines 10 frame types in total, which fall broadly into two categories, data frames and control frames, as shown in the following table:

[Table: HTTP/2 frame types, split into data frames and control frames]

The byte after the frame type holds the flags; its 8 flag bits carry simple control information, for example:

  • END_HEADERS indicates the end of header data, which is equivalent to the blank line (“\r\n”) after the HTTP/1 header;

  • END_STREAM indicates the end of one-way data transmission, and there will be no more data frames in the future.

  • PRIORITY indicates the priority of the stream;

The last 4 bytes of the frame header are the stream ID (Stream ID). The highest bit is reserved, leaving 31 usable bits, so the maximum stream ID is 2^31 - 1, roughly 2.1 billion. The stream ID identifies which Stream a frame belongs to; the receiver can group out-of-order frames by Stream ID and reassemble each message in order.

The last part is the frame payload: for HEADERS frames it carries the header block compressed by the HPACK algorithm, and for DATA frames it carries the message body.
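To tie the layout together, here is a small sketch (not a real HTTP/2 implementation) that parses the 9-byte frame header described above: a 24-bit length, an 8-bit type, 8 flag bits, and a 31-bit stream ID with the top bit reserved. The sample bytes are a made-up HEADERS frame.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// FrameHeader mirrors the 9-byte HTTP/2 frame header described above:
// 24-bit payload length, 8-bit type, 8-bit flags, and a 31-bit stream ID
// (the highest bit of the last 4 bytes is reserved).
type FrameHeader struct {
	Length   uint32 // 24 bits
	Type     uint8
	Flags    uint8
	StreamID uint32 // 31 bits
}

// parseFrameHeader is a minimal sketch, not a full HTTP/2 parser.
func parseFrameHeader(b [9]byte) FrameHeader {
	return FrameHeader{
		Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
		Type:     b[3],
		Flags:    b[4],
		StreamID: binary.BigEndian.Uint32(b[5:9]) & 0x7FFFFFFF, // clear the reserved bit
	}
}

func main() {
	// Hypothetical HEADERS frame (type 0x1) with END_HEADERS (0x4) set,
	// a 16-byte payload, on stream 1.
	raw := [9]byte{0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01}
	fmt.Printf("%+v\n", parseFrameHeader(raw))
}
```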

3. Concurrent transmission (multiplexing)

In HTTP/2, multiple requests and responses can be in flight concurrently on one connection, rather than each connection serving one request-response exchange at a time.

Now that we know HTTP/2's frame structure, let's look at how it implements concurrent transmission.

As we know, HTTP/1.1 is built on the request-response model: on a single connection, HTTP finishes one transaction (request and response) before processing the next. That means nothing else can happen between sending a request and receiving its response, and if the response is delayed, subsequent requests cannot be sent either, which is exactly the head-of-line blocking problem.

HTTP/2 solves this elegantly. Through the Stream design, multiple Streams multiplex a single TCP connection to achieve concurrency, which removes HTTP/1.1's head-of-line blocking and improves HTTP throughput.

To understand how HTTP/2 implements concurrency, let's first look at three concepts: Stream, Message, and Frame.

[Figure: relationship between a TCP connection, Streams, Messages, and Frames]

You can see from the image above:

  • One TCP connection contains one or more Streams; the Stream is the key to HTTP/2 concurrency;

  • A Stream contains one or more Messages, which correspond to HTTP/1 requests or responses and consist of HTTP headers and body;

  • A Message contains one or more Frames. The Frame is the smallest unit in HTTP/2; it carries the HTTP/1 content (headers and body) in binary form, with headers compressed by HPACK;

Therefore, we can draw two conclusions: an HTTP message can be composed of multiple frames, and a frame can be split across multiple TCP packets.

On an HTTP/2 connection, frames from different Streams can be interleaved and sent out of order (which is what makes Streams concurrent), because each frame header carries the Stream ID, so the receiver can reassemble the HTTP messages by Stream ID. Frames within the same Stream, however, must remain strictly ordered.

Both the client and the server can create Streams, and their Stream IDs are kept distinct: Streams created by the client use odd IDs, while Streams created by the server use even IDs.

The concurrency HTTP/2 achieves with Streams is far more efficient than what HTTP/1.1 achieves with multiple TCP connections: 100 concurrent Streams need only one TCP connection, whereas HTTP/1.1 would need 100 TCP connections, each paying the time-consuming TCP handshake, slow start, and TLS handshake.
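As a rough illustration (the URL is a placeholder), Go's standard library negotiates HTTP/2 automatically over TLS, so the concurrent requests below are carried as separate Streams on a single TCP connection to the same host rather than opening one connection each:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// Sketch: http.Client negotiates HTTP/2 over TLS when the server supports it,
// so these concurrent requests are multiplexed as separate Streams over one
// TCP connection instead of opening a connection per request.
func main() {
	client := &http.Client{}
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := client.Get("https://example.com/") // placeholder URL
			if err != nil {
				fmt.Println("request", n, "error:", err)
				return
			}
			defer resp.Body.Close()
			fmt.Println("request", n, "protocol:", resp.Proto) // "HTTP/2.0" when negotiated
		}(i)
	}
	wg.Wait()
}
```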

HTTP/2 can also assign a different priority to each Stream, signalled through the priority information in the frame header. For example, when a client fetches HTML, CSS, and image resources, it usually wants the server to deliver the HTML and CSS before the images; setting Stream priorities accordingly improves the user experience.

4. Server push

HTTP/1.1 does not let the server proactively push resources to the client; a resource can only be obtained after the client requests it.

For example, suppose the client fetches an HTML file over HTTP/1.1, and the HTML also needs a CSS file to render the page. The client then has to issue another request for the CSS file, which costs two message round trips, as shown in the left part of the figure below:

[Figure: fetching HTML and CSS with HTTP/1.1 (two round trips) vs. HTTP/2 server push]

As shown in the right part of the figure above, with HTTP/2 the server can proactively push the CSS file as soon as the client requests the HTML, which reduces the number of message round trips.
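As a sketch of this in Go (the paths, file names, and certificate files are assumptions), the standard library exposes push through the http.Pusher interface, which is only available on HTTP/2 connections; the handler pushes the stylesheet before writing the HTML:

```go
package main

import (
	"log"
	"net/http"
)

// Sketch of HTTP/2 server push: when the client requests the HTML page, the
// handler proactively pushes the stylesheet so the browser does not need a
// second round trip to fetch it.
func handler(w http.ResponseWriter, r *http.Request) {
	if pusher, ok := w.(http.Pusher); ok { // Pusher is only available over HTTP/2
		if err := pusher.Push("/style.css", nil); err != nil {
			log.Println("push failed:", err)
		}
	}
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head><body>hello</body></html>`))
}

func main() {
	http.HandleFunc("/", handler)
	http.Handle("/style.css", http.FileServer(http.Dir("."))) // assumes ./style.css exists locally
	// net/http enables HTTP/2 (and thus push) over TLS; cert.pem and key.pem are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```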


Source: blog.csdn.net/weixin_60297362/article/details/123056292