A Detailed Explanation of HTTP/2

Preface

HTTP/2 greatly improves web performance: while remaining fully semantically compatible with HTTP/1.1, it further reduces network latency and achieves low latency and high throughput, which in turn reduces the optimization work front-end developers need to do. This article introduces HTTP/2 by focusing on the role and workings of the following new features, and on how they change optimization practice:

  • Binary framing
  • Header compression
  • Multiplexing
  • Request priority
  • Server push

1. Introduction

HTTP/2 is the first update to the HTTP protocol since HTTP/1.1 was released in 1999, and it is largely based on the SPDY protocol.

1.1 What is the SPDY protocol

SPDY (pronounced "speedy") is an application-layer protocol built on top of TCP, developed by Google. Its goal is to optimize the performance of HTTP, shortening page load times and improving security through techniques such as compression, multiplexing, and prioritization. The core idea of SPDY is to minimize the number of TCP connections. SPDY was not meant to replace HTTP, but to enhance it.

1.2 Disadvantages of HTTP1.X

Every update exists to make up for or fix problems in the previous version, so let's look at the shortcomings of HTTP/1.x that motivated HTTP/2.

HTTP/1.x has the following main disadvantages:

  • HTTP/1.0 allows only one outstanding request at a time on a TCP connection, and the pipelining introduced in HTTP/1.1 only partially handles concurrent requests and still suffers from head-of-line blocking. So when a client needs to issue many requests, it usually opens multiple connections to reduce latency.
  • Communication is one-way: only the client can initiate a request.
  • The headers of request and response messages contain a large amount of redundancy.
  • Much of the data (headers in particular) is transmitted uncompressed, resulting in a large volume of data on the wire.

Online demos (the original article links to one) make it easy to compare how much faster HTTP/2 is than HTTP/1.x.

2. Binary Framing

Without changing the semantics, methods, status codes, URLs, or header fields of HTTP/1.x, how does HTTP/2 break through HTTP/1.1's performance limits, improve transmission performance, and achieve low latency and high throughput? One key is the binary framing layer added between the application layer (HTTP) and the transport layer (TCP).

Before examining binary framing and its role, let's define a few terms:

  • Frame: the smallest unit of HTTP/2 communication. All frames share a 9-byte header containing the frame's length, type, and flags, plus a reserved bit and an identifier for the stream the frame belongs to. Frames carry specific kinds of data, such as HTTP headers or payload.

  • Message: a communication unit larger than a frame; a logical HTTP message, such as a request or a response, made up of one or more frames.

  • Stream: a communication unit larger than a message; a virtual channel within a TCP connection that can carry messages in both directions. Each stream has a unique integer identifier.

The core of all the performance gains in HTTP/2 is binary transmission. HTTP/1.x transmits data as text, which has many drawbacks: text can be represented in many ways, so robustness demands handling many edge cases. Binary is different: it is just combinations of 0s and 1s, so binary transmission is both convenient and robust.

HTTP/2 therefore introduces a new encoding mechanism: all transmitted data is split up and encoded in a binary format.

To leave HTTP semantics unaffected, a binary framing layer is added between the application layer (HTTP/2) and the transport layer (TCP). On this layer, HTTP/2 breaks all transmitted information into smaller messages and frames and encodes them in binary, with the HTTP/1.x header information encapsulated in HEADERS frames and the request body in DATA frames.
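As a rough sketch of the framing described above, the fixed 9-byte HTTP/2 frame header (24-bit length, 8-bit type, 8-bit flags, then a reserved bit and a 31-bit stream identifier) can be parsed like this in Python; the frame-type names are from the specification, but this is an illustrative parser, not a full HTTP/2 implementation:

```python
# A few common HTTP/2 frame types and their type codes
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x5: "PUSH_PROMISE"}

def parse_frame_header(data: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")       # 24-bit payload length
    frame_type = data[3]                            # 8-bit frame type
    flags = data[4]                                 # 8-bit flags
    # High bit of the last 4 bytes is reserved; mask it off to get the stream ID
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, FRAME_TYPES.get(frame_type, "UNKNOWN"), flags, stream_id

# Example: a HEADERS frame header with a 16-byte payload,
# the END_HEADERS flag (0x4) set, on stream 1
header = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (16, 'HEADERS', 4, 1)
```

Note how the stream identifier in every frame header is what lets the receiver route each frame back to the right stream.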

3. Header Compression

HTTP/1.1 does not support header compression, which is one reason SPDY and HTTP/2 appeared. SPDY uses the DEFLATE algorithm, while HTTP/2 uses HPACK, an algorithm designed specifically for header compression.

Each HTTP communication (request or response) will carry header information to describe resource attributes.

In HTTP/1.x, headers are transmitted as text. If a cookie is carried in the header, hundreds to thousands of bytes must be retransmitted with every request, which is a lot of overhead.

In HTTP/2, the transmitted headers are encoded with the HPACK (HTTP/2 header compression) format, reducing their size. Both ends maintain an index table recording headers that have already appeared; later in the connection, a previously recorded header can be sent as just its index, and the receiving end looks up the corresponding name and value in its own table.
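The index-table idea can be illustrated with a toy model. This is NOT the real HPACK encoding (real HPACK has a static table, a bounded dynamic table, and Huffman coding); it only shows how a shared table lets repeated headers shrink to small indices:

```python
# Toy illustration of HPACK's core idea: both ends keep a synchronized
# index table; a header pair seen before is sent as an index, not a literal.
class ToyHeaderTable:
    def __init__(self):
        self.table = []   # list of (name, value) entries
        self.index = {}   # (name, value) -> position in table

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.index:
                out.append(("idx", self.index[pair]))  # known: send index only
            else:
                self.index[pair] = len(self.table)
                self.table.append(pair)
                out.append(("lit", pair))              # new: send literal, record it
        return out

    def decode(self, encoded):
        headers = []
        for kind, payload in encoded:
            if kind == "idx":
                headers.append(self.table[payload])
            else:
                self.index[payload] = len(self.table)
                self.table.append(payload)
                headers.append(payload)
        return headers

enc, dec = ToyHeaderTable(), ToyHeaderTable()
req = [(":method", "GET"), ("cookie", "session=abc123")]
wire1 = enc.encode(req)   # first request: full literals go on the wire
wire2 = enc.encode(req)   # repeat request: only tiny indices go on the wire
assert dec.decode(wire1) == req and dec.decode(wire2) == req
print(wire2)  # [('idx', 0), ('idx', 1)]
```

The second request's bulky cookie collapses to an index, which is exactly the saving the article describes for repeated headers.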

4. Multiplexing

In HTTP/1.x we often optimize with sprite sheets, multiple domain names, and so on, because browsers limit the number of concurrent requests per domain. When a page needs many resources, head-of-line blocking occurs: once the request limit is reached, a resource must wait for other requests to complete before it can be sent.

In HTTP/2, the binary framing layer allows requests and responses to be sent simultaneously over a single shared TCP connection. HTTP messages are decomposed into independent frames that are sent interleaved, without destroying the semantics of the message itself, and are reassembled at the other end according to their stream identifiers and headers. This avoids the head-of-line blocking problem of older HTTP versions and greatly improves transmission performance.
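The interleave-then-reassemble step above can be sketched in a few lines. The stream IDs and payloads here are made up for illustration (in real HTTP/2, client-initiated streams use odd IDs and frames carry binary payloads, not strings):

```python
# Frames from two streams arrive interleaved on one connection;
# the receiver groups them by stream identifier to rebuild each message.
interleaved = [
    (1, "GET /page"),
    (3, "GET /style"),
    (3, ".css HTTP/2"),
    (1, ".html HTTP/2"),
]

def reassemble(frames):
    streams = {}
    for stream_id, chunk in frames:
        streams.setdefault(stream_id, []).append(chunk)  # order per stream is kept
    return {sid: "".join(chunks) for sid, chunks in streams.items()}

print(reassemble(interleaved))
# {1: 'GET /page.html HTTP/2', 3: 'GET /style.css HTTP/2'}
```

Because frame order only matters within a stream, the sender is free to interleave frames from many streams, which is what removes the HTTP-level head-of-line blocking.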

5. Request Priority

Once an HTTP message is split into many independent frames, performance can be further optimized by tuning how those frames are interleaved and in what order they are sent: each stream can be assigned a dependency and a weight, which the server can use to decide which frames to deliver first.

6. Server Push

A powerful new feature of HTTP/2 is that the server can send multiple responses for a single client request: it can push resources to the client without the client explicitly requesting them.

Based on the client's request, the server can return multiple responses ahead of time, pushing additional resources to the client. For example, when the client requests stream 1 (/page.html), the server pushes stream 2 (/script.js) and stream 4 (/style.css) while returning the response on stream 1.

Server push is a mechanism for sending data before the client asks for it. If the request is for your home page, the server may also respond with the page's scripts, logo, and style sheet, because it knows the client will need them. This removes redundant round trips, speeds up page response, and improves the user experience.
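The home-page scenario can be sketched as follows. Everything here is hypothetical: `PUSH_MAP` and `handle_request` are illustrative names standing in for a server's push policy, not part of any real HTTP/2 server API:

```python
# Hypothetical server push policy: which extra resources to push
# alongside a given requested path.
PUSH_MAP = {"/page.html": ["/script.js", "/style.css"]}

def handle_request(path):
    """Return the requested resource plus push entries for known extras.

    In real HTTP/2, each push entry would start with a PUSH_PROMISE frame
    announcing the pushed resource on a new server-initiated stream.
    """
    promises = PUSH_MAP.get(path, [])
    return [("response", path)] + [("push", extra) for extra in promises]

for kind, path in handle_request("/page.html"):
    print(kind, path)
# response /page.html
# push /script.js
# push /style.css
```

The point of the PUSH_PROMISE announcement is that the client learns what is coming and can cancel a pushed stream it does not want, which ties into the limitations below.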

Limitations of push: all pushed resources must obey the same-origin policy. In other words, the server cannot arbitrarily push third-party resources to the client; and the client remains in control, since it can decline a pushed stream or disable push entirely.

Original link: https://blog.csdn.net/yexudengzhidao/article/details/98207149

Origin blog.csdn.net/qq_43518425/article/details/115315785