HTTP/1.0, HTTP/1.1, HTTP/2.0, and HTTPS: a detailed explanation

1. The difference between HTTP1.0 and HTTP1.1

HTTP/1.0 was standardized in 1996 (RFC 1945), when it was used only for relatively simple web pages and network requests. HTTP/1.1, standardized in 1999 (RFC 2616), was then adopted by all major browsers and remains the most widely used version of HTTP today. The main differences are:

1. Short and long connections

HTTP/1.0 uses short connections by default, meaning a new connection must be established for every request. HTTP runs on top of TCP/IP, and every connection setup and teardown costs a three-way handshake and a four-way close. Paying that overhead on every request adds up quickly, so it is better to maintain a long-lived connection over which multiple requests can be sent.

Since HTTP/1.1, long connections are the default, with Connection: keep-alive enabled. HTTP/1.1 persistent connections come in non-pipelined and pipelined forms. With pipelining, the client may send a new request before the response to the previous one has arrived; without pipelining, the client sends the next request only after receiving the previous response.
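The effect of keep-alive can be sketched with Python's standard library: the snippet below (a minimal local demo, not production code) starts a loopback HTTP/1.1 server and sends two requests over the same `http.client` connection, so the second request reuses the already-established TCP socket instead of paying for another handshake.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 -> keep-alive by default
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # Content-Length lets the client know where the body ends,
        # which is what allows the connection to stay open.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, two requests: the same socket is reused.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/")
first = conn.getresponse().read()
conn.request("GET", "/")        # no new handshake needed here
second = conn.getresponse().read()
conn.close()
server.shutdown()
print(first, second)
```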

2. Error status response code

HTTP/1.1 added 24 new error status codes. For example, 409 (Conflict) indicates that the request conflicts with the current state of the resource, and 410 (Gone) indicates that a resource has been permanently removed from the server.
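Python's standard library already knows these codes, which makes them easy to inspect; the `http.HTTPStatus` enum maps each numeric code to its reason phrase:

```python
from http import HTTPStatus

# The two HTTP/1.1 examples from the text, looked up in the stdlib enum.
conflict = HTTPStatus.CONFLICT
gone = HTTPStatus.GONE
print(conflict.value, conflict.phrase)  # 409 Conflict
print(gone.value, gone.phrase)          # 410 Gone
```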

3. Cache processing

HTTP/1.0 relies mainly on the If-Modified-Since request header and the Expires response header for cache validation. HTTP/1.1 introduces richer cache-control mechanisms, such as entity tags (ETag) and the If-Unmodified-Since, If-Match, and If-None-Match conditional headers, giving finer control over caching behavior.
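The ETag revalidation flow can be sketched as a small function (the name `conditional_get` and its signature are illustrative, not a real API): the server compares the client's If-None-Match value against the resource's current ETag and answers 304 Not Modified when they match.

```python
def conditional_get(request_headers, current_etag, current_body):
    """Toy server-side check: honour If-None-Match the way an
    HTTP/1.1 origin server would (simplified; no If-Match, no lists)."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""            # client's cached copy is still fresh
    return 200, current_body       # send the full body (plus the ETag)

# First request: no validator yet, so the full response is sent.
status, body = conditional_get({}, '"v1"', b"<html>...</html>")
print(status)  # 200

# Revalidation: the cached ETag still matches, so 304 with an empty body.
status, body = conditional_get({"If-None-Match": '"v1"'}, '"v1"', b"...")
print(status)  # 304
```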

4. Bandwidth optimization and use of network connection

HTTP/1.0 can waste bandwidth: if the client needs only part of an object, the server still sends the whole thing, and resuming an interrupted transfer is not supported. HTTP/1.1 introduces the Range request header, which lets a client ask for just a portion of a resource; the server answers with status 206 (Partial Content). This gives developers the freedom to make full use of bandwidth and connections.
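A minimal sketch of byte-range handling (single `bytes=start-end` ranges only; real servers also handle suffix ranges, multiple ranges, and 416 Range Not Satisfiable):

```python
import re

def serve_range(resource: bytes, range_header: str):
    """Return (status, body) for a single-range request, HTTP/1.1 style."""
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", range_header)
    if not m:
        return 200, resource                 # malformed range: full body
    start = int(m.group(1))
    # An open-ended range like "bytes=7-" means "to the end".
    end = int(m.group(2)) if m.group(2) else len(resource) - 1
    return 206, resource[start:end + 1]      # 206 Partial Content

data = b"0123456789"
print(serve_range(data, "bytes=2-5"))  # (206, b'2345')
print(serve_range(data, "bytes=7-"))   # (206, b'789')
```

Resumable downloads work the same way: after an interruption, the client simply asks for `bytes=<bytes_already_received>-`.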

5. Host header processing

HTTP/1.0 assumes that every server is bound to a unique IP address, so the URL in the request message does not carry a hostname. With the rise of virtual hosting, however, a single physical server can host multiple virtual hosts (multi-homed web servers) that share one IP address. HTTP/1.1 therefore requires every request to carry a Host header field; a request without one is rejected with 400 (Bad Request).
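The dispatch logic can be sketched like this (the hostnames and document roots are made up for illustration): several sites share one IP, the Host header selects among them, and its absence yields 400.

```python
def route_by_host(headers, vhosts):
    """Toy virtual-host dispatch: pick a site by the Host header."""
    host = headers.get("Host")
    if host is None:
        return 400, None                     # HTTP/1.1: Host is mandatory
    # Strip an optional :port; several hostnames may share one IP address.
    site = vhosts.get(host.split(":")[0])
    return (200, site) if site else (404, None)

vhosts = {"a.example.com": "/srv/a", "b.example.com": "/srv/b"}
print(route_by_host({"Host": "b.example.com"}, vhosts))  # (200, '/srv/b')
print(route_by_host({}, vhosts))                         # (400, None)
```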
 
 

2. HTTP vs. HTTPS

1. Port

An HTTP URL begins with "http://" and uses port 80 by default, while an HTTPS URL begins with "https://" and uses port 443 by default.
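The default-port rule is easy to verify with the standard library's URL parser, which only reports a port when the URL names one explicitly:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url: str) -> int:
    parts = urlsplit(url)
    # parts.port is None unless the URL contains an explicit :port,
    # so fall back to the scheme's well-known default.
    return parts.port or DEFAULT_PORTS[parts.scheme]

print(effective_port("http://example.com/index.html"))  # 80
print(effective_port("https://example.com/"))           # 443
print(effective_port("https://example.com:8443/"))      # 8443
```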

2. Security and resource consumption

The HTTP protocol runs directly on top of TCP: everything is transmitted in plain text, and neither the client nor the server can verify the other's identity. HTTPS is HTTP running over SSL/TLS, which in turn runs over TCP, so all transmitted content is encrypted. The payload is encrypted with a symmetric cipher, and the symmetric key itself is exchanged using asymmetric encryption with the server's certificate. HTTPS is therefore more secure than HTTP, but it also consumes more server resources.

  • Symmetric encryption: there is only one key, the same for encryption and decryption, and it is fast. Typical symmetric algorithms include DES and AES.
  • Asymmetric encryption: keys come in pairs (the private key cannot be derived from the public key, nor the public key from the private key), and encryption and decryption use different keys (what the public key encrypts, the private key decrypts, and vice versa). It is slower than symmetric encryption. Typical asymmetric algorithms include RSA and DSA.
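The hybrid scheme described above can be illustrated with deliberately toy primitives: textbook RSA with tiny primes (p = 61, q = 53, so n = 3233, e = 17, d = 2753) standing in for the asymmetric step, and a one-byte XOR standing in for the symmetric cipher. This is purely a teaching sketch of the key-exchange idea, never real-world cryptography.

```python
import secrets

# Textbook-RSA key pair with tiny primes: NOT secure, illustration only.
N, E, D = 3233, 17, 2753

def xor_cipher(data: bytes, key: int) -> bytes:
    # Toy symmetric cipher: the same key encrypts and decrypts
    # (like DES/AES in spirit, with none of their security).
    return bytes(b ^ key for b in data)

# TLS-style hybrid scheme: pick a random symmetric session key, protect it
# with the peer's public key, then encrypt the bulk payload symmetrically.
session_key = secrets.randbelow(256)
wrapped_key = pow(session_key, E, N)      # asymmetric step (slow, tiny data)
ciphertext = xor_cipher(b"hello https", session_key)

# Receiver: unwrap the key with the private exponent, then decrypt.
recovered_key = pow(wrapped_key, D, N)
plaintext = xor_cipher(ciphertext, recovered_key)
print(plaintext)  # b'hello https'
```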

Supplement: what is the SSL/TLS protocol?

SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are cryptographic protocols that protect privacy and data integrity during transmission over a network. They ensure that transmitted information cannot be intercepted or tampered with by unauthorized parties, so that only the legitimate sender and receiver can fully access it.

 
 

3. SPDY: an optimization of HTTP/1.x

In 2009, Google proposed the SPDY protocol, which optimizes the request latency of HTTP/1.x and addresses its security shortcomings, as follows:

1. Reduce latency

To tackle HTTP's high latency, SPDY elegantly adopts multiplexing: multiple request streams share a single TCP connection, which solves the HTTP-level head-of-line (HOL) blocking problem, reduces latency, and improves bandwidth utilization.

2. Request priority (request prioritization)

Multiplexing introduces a new problem: with connection sharing, critical requests may end up blocked behind less important ones. SPDY allows a priority to be set on each request so that important requests are answered first. For example, when the browser loads a homepage, the HTML content should be delivered first and the various static resources and script files afterwards, so that the user sees the page content as soon as possible.

3. Header compression

As mentioned earlier, HTTP/1.x headers are often redundant. Choosing an appropriate compression algorithm reduces both the size and the number of packets.

4. HTTPS-based encrypted transport

Running over HTTPS greatly improves the reliability of data transmission.

5. Server push

For example, suppose a page served over SPDY requests style.css. When the client receives the style.css data, the server also pushes the style.js file to the client. When the client later needs style.js, it can read it straight from its cache without issuing another request.

SPDY composition diagram: (figure omitted)

SPDY sits below HTTP and above TCP and SSL, so it can easily stay compatible with older versions of the HTTP protocol (HTTP/1.x content is encapsulated into a new frame format) while reusing existing SSL functionality.
 
 

4. HTTP/2.0: an upgraded version of SPDY

HTTP/2.0 can be regarded as an upgraded version of SPDY (it was in fact originally designed on the basis of SPDY), but the two still differ, as follows:

  1. HTTP/2.0 supports plaintext HTTP transmission, while SPDY mandates the use of HTTPS
  2. HTTP/2.0 compresses message headers with HPACK, whereas SPDY uses DEFLATE
     
     

5. New features of HTTP/2.0 compared with HTTP/1.x

1. New binary format (Binary Format)

HTTP/1.x parsing is text-based, and parsing a text protocol has inherent drawbacks: text can be expressed in many different ways, so many corner cases must be handled to stay robust. Binary is different: only combinations of 0s and 1s need to be recognized. For this reason, HTTP/2.0 adopted a binary framing format, which is both convenient to parse and robust.

2. Multiplexing (MultiPlexing)

That is, connection sharing: each request becomes a stream on a shared connection, identified by a stream id. A single connection can therefore carry many requests at once, their frames freely interleaved, and the receiver reassembles them by stream id and dispatches each to the right request.
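The reassembly step can be sketched in a few lines (the frame tuples below are a simplification of HTTP/2's real binary frames, which also carry type and flag fields): frames from two streams arrive interleaved on one connection, and the receiver groups them back together by stream id.

```python
from collections import defaultdict

# Interleaved "frames" on one connection: (stream_id, payload) pairs.
# Stream 1 carries an HTML response, stream 3 a CSS response.
frames = [
    (1, b"<html>"), (3, b"body {"), (1, b"</html>"), (3, b"}"),
]

def demultiplex(frames):
    streams = defaultdict(bytes)
    for stream_id, payload in frames:
        streams[stream_id] += payload   # reassemble per stream id
    return dict(streams)

print(demultiplex(frames))
# {1: b'<html></html>', 3: b'body {}'}
```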

3. Header compression

HTTP/1.x headers carry a lot of information and must be retransmitted in full with every message. HTTP/2.0 uses an encoder to shrink the headers that need to be sent: both sides of the connection each keep a table of header fields, which avoids retransmitting repeated headers and reduces the size of what is transmitted.
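The indexed-table idea can be sketched with a toy codec (this mimics the spirit of HPACK's dynamic table but not its wire format; real HPACK also has a static table, size limits, and Huffman coding): the first time a header field is sent it goes as a literal and both sides add it to their tables, after which repeats are sent as small integer indices.

```python
class HeaderCodec:
    """Toy dynamic header table: literals on first use, indices after."""
    def __init__(self):
        self.table = []                    # append-only list of (name, value)

    def encode(self, headers):
        out = []
        for field in sorted(headers.items()):
            if field in self.table:
                out.append(self.table.index(field))  # repeat: small index
            else:
                self.table.append(field)
                out.append(field)                    # first time: literal
        return out

    def decode(self, encoded):
        headers = {}
        for item in encoded:
            if isinstance(item, int):
                field = self.table[item]             # look up the index
            else:
                field = item
                self.table.append(field)             # mirror sender's table
            headers[field[0]] = field[1]
        return headers

sender, receiver = HeaderCodec(), HeaderCodec()
headers = {":method": "GET", ":path": "/", "user-agent": "demo"}
first = sender.encode(headers)    # all literals: the table is still empty
second = sender.encode(headers)   # the same headers shrink to bare indices
assert receiver.decode(first) == headers
print(second)  # [0, 1, 2]
```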

4. Server push

Like SPDY, HTTP2.0 also has a server push function.

 
 

6. What is the difference between HTTP/2.0 multiplexing and long-connection reuse in HTTP/1.x?

  • HTTP/1.*: one request, one response, then the connection is closed; every request must establish its own connection.
  • HTTP/1.1 pipelining: several requests are queued and processed serially by a single thread; a later request gets its turn only after the previous response returns. Once one request times out, everything queued behind it is blocked; this is the head-of-line blocking that people often mention.
  • HTTP/2: multiple requests can execute in parallel on one connection at the same time; one time-consuming request does not affect the normal execution of the others.

 
 

7. What is server push?

Server push lets the server send the resources the client will need along with index.html, sparing the client from requesting them separately. Because no extra request or connection setup is involved, pushing static resources this way can greatly improve loading speed.

Ordinary client request process: (figure omitted)

Server push process: (figure omitted)
 
 


Origin blog.csdn.net/weixin_43901865/article/details/112766275