The differences between HTTP 1.0, 1.1 (1.x) and 2.0

1. From HTTP 1.0 to HTTP 1.1

1. Cache processing

  • HTTP 1.0 mainly uses the If-Modified-Since and Expires headers as the criteria for cache decisions.
  • HTTP 1.1 introduces more cache-control options, such as the entity tag (ETag) and the If-Unmodified-Since, If-Match, and If-None-Match conditional headers, along with further optional cache headers to control caching policy (a conditional-GET sketch follows below).
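
As a quick illustration of the HTTP 1.1 conditional headers above, here is a minimal sketch of a conditional GET that revalidates a cached copy with If-None-Match. It assumes Python with the third-party requests library, and the URL is only a placeholder.

```python
import requests

URL = "https://example.com/resource"   # placeholder URL

# First fetch: the server may return an ETag identifying this version of the resource.
first = requests.get(URL)
etag = first.headers.get("ETag")

# Revalidation: send the ETag back in If-None-Match.
# 304 Not Modified means the cached body is still valid and no body is resent.
second = requests.get(URL, headers={"If-None-Match": etag} if etag else {})
if second.status_code == 304:
    print("cached copy is still fresh")
else:
    print("resource changed, new body received:", second.status_code)
```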

2. Bandwidth optimization and use of network connections

  • HTTP 1.0 can waste bandwidth: for example, when the client needs only part of an object, the server still sends the whole object, and resumable (partial) downloads are not supported.
  • HTTP 1.1 introduces the Range request header, which allows the client to request only a certain part of a resource; the server then returns status 206 (Partial Content). This gives developers the freedom to make full use of bandwidth and connections (see the sketch below).
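
A minimal sketch of a range request (again Python with requests; the URL is a placeholder): only the first kilobyte of the resource is requested, and the code checks whether the server answered with 206.

```python
import requests

URL = "https://example.com/large-file"   # placeholder URL

# Ask for only bytes 0-1023 of the resource.
resp = requests.get(URL, headers={"Range": "bytes=0-1023"})

if resp.status_code == 206:   # Partial Content
    print("server honoured the range:", resp.headers.get("Content-Range"))
    print("received", len(resp.content), "bytes")
else:
    # Servers that ignore Range simply reply 200 with the whole body.
    print("full response:", resp.status_code, len(resp.content), "bytes")
```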

3. Management of error notifications

  • HTTP 1.1 adds 24 new error status codes. For example, 409 (Conflict) indicates that the request conflicts with the current state of the resource, and 410 (Gone) indicates that a resource on the server has been permanently deleted.

4. Host header processing

  • HTTP 1.0 assumes that every server is bound to a unique IP address, so the URL in the request message does not carry a hostname. With the development of virtual hosting, however, a single physical server can host multiple virtual hosts (multi-homed web servers) that share one IP address.
  • Both HTTP 1.1 request and response messages should support the Host header field, and a request message without a Host header is rejected with an error (400 Bad Request). A sketch of why this matters follows below.
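
The sketch below (Python standard library; the IP address and hostnames are made up) shows why the Host header matters: two sites behind the same IP address can only be told apart by the Host field.

```python
import http.client

# Hypothetical: two virtual hosts share the IP address 192.0.2.10.
SHARED_IP = "192.0.2.10"

# The Host header tells the server which virtual host the request is for.
conn_a = http.client.HTTPConnection(SHARED_IP, 80)
conn_a.request("GET", "/", headers={"Host": "site-a.example"})
print("site-a:", conn_a.getresponse().status)

conn_b = http.client.HTTPConnection(SHARED_IP, 80)
conn_b.request("GET", "/", headers={"Host": "site-b.example"})
print("site-b:", conn_b.getresponse().status)
```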

5. Persistent connections

  • HTTP 1.1 supports persistent connections (PersistentConnection) and request pipelining (Pipelining): multiple HTTP requests and responses can be sent over a single TCP connection, reducing the cost and latency of repeatedly establishing and closing connections. Connection: keep-alive is enabled by default in HTTP 1.1, which to some extent makes up for HTTP 1.0's shortcoming of creating a new connection for every request (a connection-reuse sketch follows below).
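
A minimal sketch of connection reuse (Python with requests; the URL and paths are placeholders): a Session keeps the underlying keep-alive TCP connection open, so several requests avoid repeated connection setup.

```python
import requests

# A Session pools connections, so requests to the same host reuse one
# keep-alive TCP connection instead of opening a new one for each request.
with requests.Session() as session:
    for path in ("/a", "/b", "/c"):                 # placeholder paths
        resp = session.get("https://example.com" + path)
        print(path, resp.status_code)
```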

2. Optimization of 1.x

Google proposed the SPDY solution (first announced in 2009), which optimizes the request latency of HTTP 1.x and addresses its lack of transport security. The details are as follows:

  1. To reduce latency, SPDY adopts multiplexing: multiple request streams share a single TCP connection, which mitigates HTTP-level head-of-line (HOL) blocking, lowers latency, and improves bandwidth utilization (a modern HTTP/2 analogue is sketched after this list).

  2. Request prioritization. Multiplexing introduces a new problem: with connection sharing, critical requests may end up blocked behind less important ones. SPDY allows a priority to be set on each request so that important requests are answered first. For example, when the browser loads a home page, the HTML should be delivered first, with the various static resources and script files loaded afterwards, so the user sees the page content as soon as possible.

  3. Header compression. As mentioned earlier, HTTP 1.x headers are often redundant; choosing an appropriate compression algorithm reduces both the size and the number of packets.

  4. Encrypted transmission based on HTTPS, which greatly improves the reliability of data transmission.

  5. Server push. With SPDY, the server can push additional resources to the client. For example, if my page requests style.css, the server can push the style.js file to the client along with the style.css data; when the client later needs style.js, it is served straight from the cache without another request.

    SPDY sits below HTTP and above TCP and SSL, so it can reuse the existing SSL layer while remaining compatible with the old version of the HTTP protocol.
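
SPDY itself has since been retired in favour of HTTP/2, but the connection-sharing idea from point 1 can be tried with a modern HTTP/2 client. This is a minimal sketch assuming Python with the third-party httpx library (installed as httpx[http2]); the URL and paths are placeholders.

```python
import httpx

# With HTTP/2 enabled, all requests to the same origin share a single
# TCP+TLS connection; each request travels as its own stream on it.
with httpx.Client(http2=True) as client:
    for path in ("/", "/style.css", "/script.js"):  # placeholder paths
        resp = client.get("https://example.com" + path)
        print(path, resp.http_version, resp.status_code)
```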

3. HTTP 2.0: an upgraded version of SPDY

HTTP 2.0 is designed on the basis of SPDY, but it still differs from SPDY in some respects.

Compared with 1.x, the new features of 2.0 are:

1. New binary format

HTTP 1.x parsing is text-based. Parsing a text-based protocol has inherent drawbacks: text can take many forms, so a parser inevitably has to handle many cases just for the sake of robustness.

HTTP 2.0 parsing instead uses a binary format. Unlike text, the binary format deals only in combinations of 0s and 1s, which makes parsing both convenient and robust.
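
For concreteness, every HTTP 2.0 frame starts with a fixed 9-byte header: a 24-bit payload length, an 8-bit type, 8-bit flags, and a 31-bit stream identifier (plus one reserved bit). A minimal parsing sketch in Python:

```python
import struct

def parse_frame_header(header: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header."""
    # !BHBBI -> 1+2 bytes of length, 1 byte type, 1 byte flags, 4 bytes stream id
    len_hi, len_lo, frame_type, flags, stream_id = struct.unpack("!BHBBI", header[:9])
    length = (len_hi << 16) | len_lo     # 24-bit payload length
    stream_id &= 0x7FFFFFFF              # drop the reserved high bit
    return length, frame_type, flags, stream_id

# Example: an 8-byte DATA frame (type 0) with the END_STREAM flag (0x1) on stream 1.
print(parse_frame_header(b"\x00\x00\x08\x00\x01\x00\x00\x00\x01"))   # (8, 0, 1, 1)
```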

2. Multiplexing

Connection sharing: every request becomes part of a shared connection and is identified by an ID. Many requests can therefore coexist on one connection, their frames can be interleaved arbitrarily, and the receiver reassigns each frame to the right request according to its ID (a toy reassembly sketch follows below).
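
A toy sketch of that reassembly idea (plain Python, not a real HTTP/2 implementation): frames from different streams arrive interleaved on one connection, and the receiver groups them back together by stream ID.

```python
from collections import defaultdict

# Interleaved (stream_id, payload) frames as they might arrive on one connection.
frames = [
    (1, b"GET /index.html ..."),        # payloads are made up for illustration
    (3, b"GET /style.css ..."),
    (1, b"...rest of request on stream 1"),
    (3, b"...rest of request on stream 3"),
]

# Demultiplex by stream ID to rebuild each request.
streams = defaultdict(bytes)
for stream_id, payload in frames:
    streams[stream_id] += payload

for stream_id, data in sorted(streams.items()):
    print(stream_id, data)
```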

3. Header compression

HTTP 1.x headers carry a lot of information and have to be sent again in full with every request. HTTP 2.0 uses an encoder to reduce the size of the headers that actually need to be transmitted: both parties in the communication cache a table of header fields, so repeated headers are not retransmitted, which reduces the size of each transfer.
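
A simplified illustration of the indexing idea (a toy sketch, not the real HPACK algorithm that HTTP 2.0 actually uses): both peers keep a table of header fields they have already seen, so a repeated header can be sent as a small index instead of the full text.

```python
# Toy model of indexed header compression (not real HPACK).
table = []   # table of (name, value) pairs, mirrored by both peers

def encode(headers):
    """Return a mix of small indexes (repeats) and full (name, value) pairs (new fields)."""
    out = []
    for field in headers:
        if field in table:
            out.append(table.index(field))   # already known: send only an index
        else:
            table.append(field)              # new: send it once, then refer to it by index
            out.append(field)
    return out

request_headers = [(":method", "GET"), ("host", "example.com"), ("user-agent", "demo")]
print(encode(request_headers))   # full fields on the first request
print(encode(request_headers))   # only small indexes on the repeat
```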

4. Server push

Like SPDY, HTTP2.0 also has a server push function.

Appendix: some differences between HTTPS and HTTP

  1. The HTTPS protocol requires a CA-issued digital certificate, which generally has to be paid for.
  2. HTTP runs directly over TCP and transmits everything in plain text; HTTPS runs over SSL/TLS, which in turn runs over TCP, and everything it transmits is encrypted (sketched below).
  3. HTTP and HTTPS use completely different connection setups and different ports: 80 for HTTP and 443 for HTTPS.
  4. HTTPS effectively prevents hijacking by network operators.
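
A minimal sketch of point 2 above (Python standard library; example.com stands in for any HTTPS site): the HTTP request itself is unchanged, it is simply written into a TLS-wrapped socket instead of a plain TCP socket.

```python
import socket
import ssl

HOST = "example.com"   # placeholder host

# TLS sits between TCP and HTTP: wrap the TCP socket, then speak plain HTTP over it.
context = ssl.create_default_context()   # verifies the server certificate against system CAs
with socket.create_connection((HOST, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=HOST) as tls_sock:
        tls_sock.sendall(
            b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
        )
        print(tls_sock.recv(4096).decode(errors="replace"))
```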

Origin blog.csdn.net/weixin_40849588/article/details/96779947