HTTP/1.0, HTTP/1.1, and HTTP/2.0

HTTP/1.0 is a stateless, connectionless application layer protocol.

No persistent connections

No persistent connections: a new connection must be established for every request, and a long-lived connection has to be requested explicitly with the Connection: keep-alive header (in HTTP/1.1, keep-alive is the default). Because connections cannot be reused, every request sets up a TCP connection and then tears it down, which is costly and leads to low network utilization.
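As a minimal sketch of connection reuse (using Python's standard http.client module and example.com as a stand-in host), the snippet below sends two requests over one HTTP/1.1 connection, which HTTP/1.0 could not do without the explicit keep-alive extension:

```python
# Minimal sketch: HTTP/1.1 persistent-connection reuse with http.client.
# example.com is only a placeholder host for illustration.
import http.client

conn = http.client.HTTPConnection("example.com")  # http.client speaks HTTP/1.1

for path in ("/", "/index.html"):
    conn.request("GET", path, headers={"Connection": "keep-alive"})
    resp = conn.getresponse()
    body = resp.read()      # each response must be drained before the next request
    print(path, resp.status, len(body))

conn.close()  # a single TCP connection served both requests
```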

Head-of-line blocking

Head-of-line blocking: HTTP/1.0 stipulates that the next request cannot be sent until the response to the previous request has arrived. If that response is delayed, the next request cannot be issued, and all subsequent requests are blocked behind it.
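A rough sketch of this serialization (again with http.client, and assuming the public test service httpbin.org is reachable): a deliberately slow response at the head of the line delays the fast request queued behind it.

```python
# Rough sketch of HTTP/1.x head-of-line blocking on one connection:
# the second request cannot be sent until the first response is fully read.
import http.client
import time

conn = http.client.HTTPSConnection("httpbin.org")   # assumed public test host

start = time.time()
conn.request("GET", "/delay/2")        # server stalls this response for ~2 seconds
first = conn.getresponse()
first.read()                           # must drain it before issuing the next request

conn.request("GET", "/get")            # this fast request waited behind the slow one
second = conn.getresponse()
second.read()

print(f"both responses after {time.time() - start:.1f}s (~2s spent blocked)")
conn.close()
```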

Caching

In HTTP/1.0, caching relies mainly on the Last-Modified / If-Modified-Since headers for negotiated (conditional) caching, and on the strong-cache Expires header as the criterion for whether a cached copy is still fresh.

Other problems. Host header: HTTP/1.0 assumes that each server is bound to a unique IP address, so the request line carries no host name and there is no Host header. Today a single machine commonly runs multiple virtual hosts that share one IP address, which HTTP/1.0 cannot tell apart.
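The negotiated-cache flow can be sketched as follows (again with http.client and example.com as a placeholder; the example also sets the Host header that HTTP/1.0 lacked):

```python
# Sketch of conditional (negotiated) caching: replay the server's
# Last-Modified value in If-Modified-Since and expect 304 Not Modified.
import http.client

conn = http.client.HTTPConnection("example.com")

conn.request("GET", "/")
resp = conn.getresponse()
resp.read()
last_modified = resp.getheader("Last-Modified")    # validator from the first response

if last_modified:
    conn.request("GET", "/", headers={
        "If-Modified-Since": last_modified,         # conditional revalidation
        "Host": "example.com",                      # virtual-host selection, new in HTTP/1.1
    })
    revalidated = conn.getresponse()
    revalidated.read()
    print(revalidated.status)   # 304 means the locally cached copy can be reused

conn.close()
```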

HTTP/1.0 does not support resuming a transfer from a breakpoint (range requests): the whole page or resource is sent every time, so if only part of the data is needed, the extra bandwidth is wasted.
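Range requests, added in HTTP/1.1, are what make resumable transfers possible; a minimal sketch under the same assumptions as above:

```python
# Sketch of a partial request with the Range header (HTTP/1.1).
# A 206 Partial Content status means the server honoured the range;
# a plain 200 means it ignored it and sent the whole body anyway.
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/", headers={"Range": "bytes=0-99"})   # ask for the first 100 bytes
resp = conn.getresponse()
chunk = resp.read()
print(resp.status, len(chunk), resp.getheader("Content-Range"))
conn.close()
```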

HTTP/1.1


HTTP/2.0

HTTP/2.0 is a secure and efficient next-generation HTTP transport protocol. It is secure because, in practice, HTTP/2.0 runs on top of HTTPS (browsers only support it over TLS), and efficient because it transmits data using binary framing. Thanks to these features, the HTTP/2.0 protocol is being supported by more and more websites.
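To see HTTP/2.0 negotiation in practice, one option is the third-party httpx library (a sketch, not part of the standard library; install with `pip install "httpx[http2]"`, and nghttp2.org is assumed here as a reachable HTTP/2-capable host):

```python
# Sketch: check which protocol version was negotiated over TLS (ALPN).
import httpx

with httpx.Client(http2=True) as client:            # allow HTTP/2 negotiation
    response = client.get("https://nghttp2.org/")
    print(response.http_version)                     # "HTTP/2" if negotiation succeeded
    print(response.status_code)
```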

Because HTTP/2.0 multiplexes all requests and responses over a single TCP connection, a single packet loss triggers a TCP timeout retransmission, and all the data queued behind the lost segment must wait until it is retransmitted; head-of-line blocking therefore moves down from the HTTP layer to the TCP layer.

The differences



Origin blog.csdn.net/u013400314/article/details/131725671