For an HTTP request, which side closes the TCP connection first? Under what circumstances does the client close first, and under what circumstances does the server close first?

We have two internal HTTP services (nginx):

201: this server hosts account.api.91160.com, which is called by the front-end pages;

202: this server hosts hdbs.api.91160.com, which is called by the front-end pages;

 

Recently we noticed that the number of TIME_WAIT connections on these two servers differs dramatically: server 201 has around 20,000+ sockets in TIME_WAIT, while server 202 has only about 1,000, a twenty-fold difference. Since the two services work the same way, why is there such a large gap in connection counts?
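A quick way to see this imbalance on a server is to tally sockets per TCP state, e.g. `netstat -ant | awk '{print $6}' | sort | uniq -c`. As a sketch, the same tally in Python over captured netstat lines (the addresses below are hypothetical sample data, not real output from these servers):

```python
from collections import Counter

# Captured sample lines in `netstat -ant` format (hypothetical addresses);
# on a live box you would read this from the actual netstat/ss output.
sample = """\
tcp 0 0 10.0.0.201:80 10.0.0.5:51234 TIME_WAIT
tcp 0 0 10.0.0.201:80 10.0.0.6:51235 TIME_WAIT
tcp 0 0 10.0.0.202:80 10.0.0.7:51236 ESTABLISHED
"""

# The TCP state is the last whitespace-separated field of each line.
counts = Counter(line.split()[-1] for line in sample.splitlines())
print(counts["TIME_WAIT"], counts["ESTABLISHED"])  # 2 1
```

Running this against real netstat output on servers 201 and 202 is how the 20,000 vs 1,000 gap above would show up.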

 

The reason, found later: the calling programs for these two modules were written by different teams and use different calling styles. As a result, for one service the caller (the client, a PHP program) actively closes the connection, while for the other the callee (server 201/202) actively closes it. Because TIME_WAIT is produced on the side that actively closes the connection, one server ends up with a high TIME_WAIT count and the other with a low one.

 

This raises the underlying question: for an HTTP request, which side closes the TCP connection first? Under what circumstances does the client close first, and under what circumstances does the server close first?

After some searching, I found the answer. It mainly comes down to the differences in connection handling between HTTP/1.0 and HTTP/1.1, and to headers such as Connection, Content-Length, and Transfer-Encoding.

 

The following content is reproduced from: http://blog.csdn.net/wangpengqi/article/details/17245349

Nginx supports long (persistent) connections for both HTTP/1.0 and HTTP/1.1. What is a long connection? HTTP runs on top of TCP, so before the client can send a request it must establish a TCP connection with the server, and each TCP connection requires a three-way handshake. When the network between client and server is slow, those three exchanges cost noticeable time, and they also add network traffic; closing the connection likewise takes four exchanges. None of this matters to the user experience directly, but it adds up. Since HTTP is request-response based, if we can determine the length of each request body and response body, we can issue multiple requests over a single connection. This is the so-called long connection, but the precondition is that the lengths of the request body and the response body can be determined.

For requests: if the current request carries a body, such as a POST request, nginx requires the client to specify Content-Length in the request header to indicate the body size; otherwise it returns a 400 error. In other words, the length of the request body is always known. What about the length of the response body? Let's first look at how the HTTP protocol determines the response body length:
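The Content-Length mechanism can be illustrated with a minimal sketch of how a client frames a response body by that header (a simplified parser on a canned byte string; real clients handle many more cases):

```python
# A canned HTTP response; the bytes after "hello" simulate the start of
# the *next* response arriving on the same reused (keep-alive) connection.
raw = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\nConnection: keep-alive\r\n\r\nhelloEXTRA"

# Split the header block from everything after the blank line.
header_blob, _, rest = raw.partition(b"\r\n\r\n")

# Parse "Name: value" header lines (skipping the status line).
headers = {}
for line in header_blob.split(b"\r\n")[1:]:
    name, _, value = line.partition(b":")
    headers[name.strip().lower()] = value.strip()

# Content-Length tells the client exactly how many body bytes to take;
# it can then stop reading and reuse the connection for the next request.
length = int(headers[b"content-length"])
body = rest[:length]
print(body)  # b'hello'
```

Because the client knows where the body ends, it never has to wait for the server to close the connection to detect the end of the message.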
1. For HTTP/1.0, if the response contains a Content-Length header, the client knows the body length from it and reads exactly that many bytes; once they have arrived, the request is complete. If there is no Content-Length header, the client keeps receiving data until the server actively closes the connection, which signals that the entire body has been received.
2. For HTTP/1.1, if the response header contains Transfer-Encoding: chunked, the body is streamed as a series of chunks, each prefixed with its own length, so no overall length needs to be specified. If the transfer is not chunked and Content-Length is present, the client reads data according to Content-Length. Otherwise (non-chunked with no Content-Length), the client receives data until the server actively closes the connection.
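Chunked framing can be made concrete with a toy decoder (a sketch that assumes well-formed input and ignores chunk extensions and trailers, which the real format allows):

```python
def decode_chunked(data: bytes) -> bytes:
    """Decode a chunked-encoded body: hex size, CRLF, chunk bytes, CRLF,
    repeated until a zero-size chunk marks the end."""
    out = b""
    i = 0
    while True:
        j = data.index(b"\r\n", i)          # end of the hex size line
        size = int(data[i:j], 16)           # chunk size in hexadecimal
        if size == 0:                       # zero-size chunk: body complete
            break
        out += data[j + 2 : j + 2 + size]   # the chunk payload itself
        i = j + 2 + size + 2                # skip payload + trailing CRLF
    return out

decoded = decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
print(decoded)  # b'Wikipedia'
```

The zero-size chunk is what lets the client detect the end of the body without a Content-Length header and without waiting for the server to close the connection.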

From the above we can see that, except for HTTP/1.0 without Content-Length and HTTP/1.1 non-chunked without Content-Length, the body length is knowable. In those knowable cases, once the server finishes writing the body, it can consider keeping the connection alive. Whether a long connection is actually used is also conditional: if the Connection header in the client's request is close, the client wants the connection closed; if it is keep-alive, the client wants it kept open. If the client's request carries no Connection header, the protocol default applies: close for HTTP/1.0 and keep-alive for HTTP/1.1.

If the result is keep-alive, nginx sets the keepalive attribute on the current connection after writing the response body, and then waits for the client's next request. Of course nginx cannot wait forever; if the client never sends data, should the connection stay occupied indefinitely? So when nginx sets keepalive and waits for the next request, it also sets a maximum wait time, configured by the keepalive_timeout option. If it is configured as 0, keepalive is disabled: whether the HTTP version is 1.0 or 1.1, and whether the client's Connection header is close or keep-alive, the connection is forcibly closed.
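For reference, the nginx side of this behaviour is controlled by directives like the following (a minimal config sketch; the values are illustrative, not recommendations):

```nginx
http {
    # How long an idle keep-alive connection is held open waiting for
    # the next request; 0 disables keepalive entirely.
    keepalive_timeout  65s;

    # Optionally cap how many requests one connection may serve.
    keepalive_requests 1000;
}
```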

If the server's final decision is to keep the connection alive, the response will include a Connection header with the value "Keep-Alive"; otherwise the value is "Close". If the Connection value is close, nginx actively closes the connection after sending the response data. Therefore, on an nginx instance handling a large volume of requests, disabling keepalive produces many more sockets in the TIME_WAIT state. Generally, when a client needs to access the same server multiple times, enabling keepalive pays off handsomely, for example on an image server, where a single web page usually references many images. Enabling keepalive also greatly reduces the number of TIME_WAIT sockets.

 

Summary (ignoring keepalive):

HTTP/1.0

With Content-Length: the body length is known; the client reads exactly that many bytes, and once they have arrived the request is complete. The client actively calls close and initiates the four-way teardown.

Without Content-Length: the body length is unknown, so the client keeps receiving data until the server actively closes the connection.
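This read-until-close case can be simulated with a local socket pair, where one end plays the server that actively closes (a sketch using `socket.socketpair()` rather than a real HTTP server):

```python
import socket

client, server = socket.socketpair()

# The "server" writes a body of unknown length, then actively closes;
# EOF is the client's only signal that the body is complete.
server.sendall(b"response body with no Content-Length")
server.close()

chunks = []
while True:
    data = client.recv(4096)
    if not data:  # empty read == peer closed == body finished
        break
    chunks.append(data)
client.close()

body = b"".join(chunks)
print(body)
```

Here the server performs the active close, so the server side, not the client, would accumulate the TIME_WAIT sockets, which is exactly the situation observed on server 201.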

 

HTTP/1.1

With Content-Length: the body length is known, and the client actively closes the connection.

With Transfer-Encoding: chunked: the body is sent as a series of chunks, each prefixed with its own length, so Content-Length is not needed; the end of the body can still be detected, and the client actively closes the connection.

Without Transfer-Encoding: chunked and without Content-Length: the client receives data until the server actively closes the connection.

 

In short: if the client has a way to know the length of what the server sends, the client closes the connection first. If not, the client keeps receiving data until the server closes the connection.

 

 

http://www.cnblogs.com/web21/p/6397525.html
