The main differences between HTTP/1.0, HTTP/1.1, and HTTP/2.0


Let me start with the conclusion: these differences come up often in interviews, and they are new mechanisms we must understand when learning the HTTP protocol. We usually learn the basics of HTTP/1.0 and 1.1; HTTP/2.0 is also widely used in enterprises and adds many powerful optimizations. Let's take a brief look at them below!

The differences between HTTP/1.0 and HTTP/1.1

Long vs. short connections: keep-alive and pipelining

HTTP runs on top of TCP/IP. Every time a connection is established or torn down, the overhead of the three-way handshake and four-way teardown is incurred. If a new connection had to be established for every request, the overhead would be significant, so it is best to maintain a long connection and let the same client send multiple requests over that "long connection".

  • In HTTP/1.0, a short connection is used by default, and a new connection must be established for each request:
  1. The two parties establish a connection
  2. The client sends the request
  3. The server returns the response
  4. Both parties close the connection
  • Starting from HTTP/1.1, the keep-alive mechanism ("long connection") is used by default, and the Connection: keep-alive header field is enabled by default (Connection: close turns it off).

The "long connection" mechanism of HTTP/1.1 has two working modes: non-pipelined and pipelined .

  • In the pipelined mode, the client can send a new request before it has received the response to the previous one (requests are pushed continuously, like items on a factory assembly line, and the server processes them). Of course, the server must respond in the order of the client's requests so that the client can match each response to its request.
  • In the non-pipelined mode, the client cannot send the next request until it has received the previous response.

Note:

  • The reason "long connection" is written in quotes is that HTTP is stateless, so at the protocol level there is no real difference between long and short connections; each request/response exchange is conceptually independent. The keep-alive header field introduces a mechanism that reuses the underlying TCP connection, which makes it look like a "long connection" (a minimal sketch follows below).
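As an illustration, here is a minimal Python sketch (using the standard-library http.client; the host name and paths are just placeholders) of reusing one TCP connection for several requests via keep-alive:

```python
import http.client

# Minimal sketch: HTTP/1.1 keep-alive lets several requests reuse one TCP
# connection. "example.com" and the paths are placeholders.
conn = http.client.HTTPConnection("example.com", 80)

for path in ("/", "/about", "/contact"):
    conn.request("GET", path, headers={"Connection": "keep-alive"})
    resp = conn.getresponse()
    body = resp.read()  # drain the body so the connection can be reused
    print(path, resp.status, len(body))

conn.close()  # one explicit teardown instead of one per request
```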

Added the Host header field to identify a host behind a shared IP

  • HTTP/1.0 assumes that each server is bound to a unique IP address, but with the development of virtual hosting, multiple virtual hosts (Multi-homed Web Servers) can live on one physical server and share a single IP address, so we need a way to distinguish these host names.

  • HTTP/1.1 requires request messages to carry the Host header field, which identifies the specific host within the requested domain; if a request message has no Host header, the server returns an error (400 Bad Request).
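As a rough sketch (the IP address and host names below are made up), two virtual hosts behind the same IP are distinguished purely by the Host header:

```python
import http.client

# Sketch: two virtual hosts share one IP address and port; only the Host
# header tells them apart. The IP and host names are placeholders.
conn = http.client.HTTPConnection("203.0.113.10", 80)

for virtual_host in ("site-a.example", "site-b.example"):
    conn.request("GET", "/", headers={"Host": virtual_host})
    resp = conn.getresponse()
    resp.read()  # drain so the next request can reuse the connection
    print(virtual_host, resp.status)

conn.close()
```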

Added response codes 409 and 410 for error states

These give the client more specific information about what went wrong with the request, for example:

  • 409 (Conflict) indicates that the requested resource conflicts with the current state of the resource;

  • 410 (Gone) indicates that a resource on the server has been permanently deleted.
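A toy server-side sketch (using Python's standard-library http.server; the path names are invented) showing where a 410 response might be used:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Toy handler showing where the HTTP/1.1 error codes above fit."""

    def do_GET(self):
        if self.path == "/removed-article":
            # The resource existed once but has been permanently deleted.
            self.send_error(410, "Gone")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```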

Saving bandwidth: the Range header and resumable transfers

  • HTTP/1.0 wastes bandwidth in some situations. For example, even when the client only needs part of an object, the server sends the entire object, and there is no support for resuming interrupted transfers.

  • HTTP/1.1 introduces the Range header field in the request, which (1) allows the client to request only part of a resource, in which case the server returns 206 (Partial Content), letting developers fetch exactly what they need and make full use of the bandwidth and the connection, and (2) makes multi-threaded downloads and resumable transfers possible (see the sketch after this section).

  • With HTTP/1.1, the client can first send only the request headers (including an Expect: 100-continue field); if the server receives this and returns response code 100, the client then continues and sends the complete request with the body. The reason is as follows:

The 100 (Continue) status code lets the client probe the server with just the request headers before sending the message body, so it can find out whether the server is willing to accept the body and only then decide whether to send it.
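Here is a minimal sketch of a range request in Python (standard-library http.client; the host and file path are placeholders):

```python
import http.client

# Sketch: request only the first 1024 bytes of a large resource.
# Host and path are placeholders.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/large-file.bin", headers={"Range": "bytes=0-1023"})
resp = conn.getresponse()

print(resp.status)                      # 206 (Partial Content) if ranges are supported
print(resp.getheader("Content-Range"))  # e.g. "bytes 0-1023/1048576"
chunk = resp.read()                     # only the requested slice of the object

conn.close()
```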

Cache handling: the Cache-Control header

There are many cache directives. A simple one to understand is Cache-Control: max-age=N, where N is the number of seconds the cached copy stays valid.
For example, suppose our client requests the same image or audio resource from the server twice:

  1. If the client's local cache has not expired, the cached copy is used directly;
  2. If it has expired, the client sends a request to the server, and the server checks whether the resource has changed. If it has changed, the server returns status code 200 with the new resource, which the client caches locally; if it has not changed, the server returns 304, telling the client to extend the lifetime of its local copy and keep using it.
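A rough sketch of step 2, a conditional revalidation request (host, path, and the stored date are placeholders):

```python
import http.client

# Sketch of step 2: the cached copy has expired, so revalidate it with a
# conditional request. Host, path, and the stored date are placeholders.
conn = http.client.HTTPConnection("example.com", 80)
conn.request(
    "GET",
    "/images/logo.png",
    headers={"If-Modified-Since": "Wed, 01 Feb 2023 00:00:00 GMT"},
)
resp = conn.getresponse()
body = resp.read()

if resp.status == 304:
    print("Not modified: keep using the local cached copy")
elif resp.status == 200:
    print("Changed: cache the fresh copy;", resp.getheader("Cache-Control"))

conn.close()
```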

The differences between HTTP/1.1 and HTTP/2.0

Multiplexing

HTTP/2.0 uses multiplexing so that a single TCP connection can handle many requests concurrently; the level of concurrency is several orders of magnitude higher than in HTTP/1.1. (HTTP/1.1 can also open several extra TCP connections to handle more concurrent requests, but creating each TCP connection has its own overhead.)
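As a rough illustration, assuming the third-party httpx package with its optional HTTP/2 support is installed (pip install "httpx[http2]"), several requests can reuse one multiplexed HTTP/2 connection:

```python
import httpx

# Sketch: one client, one underlying connection, HTTP/2 negotiated via ALPN.
# Requires the optional extra: pip install "httpx[http2]"
with httpx.Client(http2=True) as client:
    for path in ("/", "/style.css", "/app.js"):
        resp = client.get(f"https://example.com{path}")
        # http_version reports "HTTP/2" when the server negotiated it
        print(path, resp.status_code, resp.http_version)
```

(Truly concurrent streams would use the async client, but even this sequential loop reuses the single multiplexed connection.)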

Header data compression

HTTP/1.1 does not support compressing header data. HTTP/2.0 uses the HPACK algorithm to compress headers, so less data is transmitted and transfers over the network are faster.

In HTTP/1.1, requests and responses consist of three parts: the status/request line (first line), the headers, and the message body (body).
Generally the message body is gzip-compressed, or an already compressed binary file is transmitted as-is, but the first line and the headers are never compressed; they are sent as plain text.
As web functionality becomes more complex, the number of requests per page keeps growing, so more and more traffic is spent on headers. In particular, fields such as User-Agent and Cookie are sent with every request even though they rarely change, which is a complete waste.
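For a feel of what HPACK does, here is a rough sketch assuming the third-party hpack package is installed; it compresses a small header list into a compact binary block and decodes it back:

```python
from hpack import Encoder, Decoder

# Sketch: HPACK-compress a small header block and restore it again.
headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    ("user-agent", "demo-client/1.0"),
    ("cookie", "session=abc123"),
]

encoder = Encoder()
compressed = encoder.encode(headers)
print(len(compressed), "bytes on the wire instead of plain-text headers")

decoder = Decoder()
print(decoder.decode(compressed))  # the original name/value pairs come back
```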

Server push

To reduce latency, HTTP/2.0 introduces server push, which allows the server to proactively push resources to the browser before the browser explicitly asks for them, so the client does not have to open another connection or send another request to fetch them. The client can then load these resources locally without going over the network again.

Server push is a mechanism by which the server proactively sends data before the client requests it (note that this is different from HTTP/1.1 pipelining, where it is the client that keeps pushing requests).
A web page uses many resources: HTML, style sheets, scripts, images, and so on. In HTTP/1.1, for the server to send these resources to the client, each resource must be explicitly requested by the client (for example, an HTML file plus a CSS file require two separate requests before the server can send them in turn); this is a very slow process.


Source: blog.csdn.net/wtl666_6/article/details/128697770