Fixing nginx "502 no live upstreams while connecting to upstream" errors caused by load testing

While load testing an interface to its limit, nginx began logging "502 no live upstreams while connecting to upstream" errors as concurrency rose. During the period of peak concurrency, calls to the interface consistently returned 502, meaning nginx could not find any live back-end server.

By tracking the ports in use, we found that nginx was creating a large number of connections to the back-end. Clearly it was not reusing HTTP/1.1 long (keep-alive) connections, so we added a keepalive directive to the upstream block.

upstream yyy.xxx.web{
    server 36.10.xx.107:9001;
    server 36.10.xx.108:9001;

    keepalive 256;
}
server {
    ···
    location /zzz/ {
        proxy_pass http://yyy.xxx.web;
        ···   
    }
}

According to the official documentation, keepalive enables a cache of connections to the upstream servers; its value is the maximum number of idle keep-alive connections preserved per nginx worker process. It is unset by default, meaning that nginx, acting as a client toward the upstream, does not keep connections alive.

By default, nginx uses short-lived HTTP/1.0 connections to the back-end: when a request arrives, nginx opens a new local port, establishes a connection to the back-end, and closes it when the request completes. With HTTP/1.1 long connections configured, nginx keeps its back-end connections open. If concurrent requests exceed the maximum number of idle connections that keepalive allows, nginx opens additional connections to forward them; those extra connections are closed once their requests finish rather than being kept alive.
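Per the nginx documentation, the keepalive directive alone is not enough: the proxy must also speak HTTP/1.1 toward the upstream and clear the Connection header. A minimal sketch of the location block (the `···` in the configuration above may already contain these directives):

```nginx
location /zzz/ {
    proxy_pass http://yyy.xxx.web;
    # Upstream keepalive requires HTTP/1.1 toward the back-end...
    proxy_http_version 1.1;
    # ...and clearing the Connection header so that the client's
    # "Connection: close" is not forwarded to the upstream.
    proxy_set_header Connection "";
}
```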

[Figure: implementation principle of nginx upstream keepalive long connections]

First, each worker process has its own connection pool; long connections are per-process, and there is no need to share the pool between processes. Once a connection to a back-end server is established, it is not closed immediately when the current request completes; instead the used connection is saved in the keepalive connection pool. The next time a connection to the back-end is needed, nginx first searches this pool, and if it finds a suitable connection it uses it directly, with no need to create a new socket or call connect(). This saves the time spent on the connection-establishment handshake and also avoids TCP slow start. If no suitable connection is found in the keepalive pool, nginx falls back to establishing a new connection as before. I have not read nginx's code for looking up available connections in the pool, but I have written connection-pooling code for redis and mysqldb, and the logic should be the same: pop a connection to use it, push it back when done, so each operation takes only O(1) time.
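The pop/push pooling logic described above can be sketched in Python (a hypothetical illustration of the idea, not nginx's actual C implementation): a LIFO free list gives O(1) acquire and release, with a fallback to opening a new connection when the pool is empty.

```python
class ConnectionPool:
    """LIFO pool of reusable connections: acquire pops, release pushes."""

    def __init__(self, connect):
        self._connect = connect  # factory that opens a new connection
        self._idle = []          # stack of idle, reusable connections

    def acquire(self):
        # Reuse an idle connection if one exists; otherwise open a new
        # one (the "re-establish the connection" fallback). Both paths
        # are O(1) apart from the cost of connect() itself.
        if self._idle:
            return self._idle.pop()
        return self._connect()

    def release(self, conn):
        # Done with the connection: push it back instead of closing it.
        self._idle.append(conn)


# Usage: the "connection" here is just a plain object so the sketch is
# self-contained; a real pool would hold sockets.
made = []
pool = ConnectionPool(lambda: made.append(object()) or made[-1])
c1 = pool.acquire()          # pool empty -> opens a new connection
pool.release(c1)             # back into the pool
c2 = pool.acquire()          # reuses c1, no new connect()
print(c1 is c2, len(made))   # True 1
```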

Note: after adding this configuration on nginx 1.12.0 and running the load test again, the 502 problem persisted; after upgrading to version 1.16.0, the 502 problem was solved. The reason is that the nginx 1.12.0 version did not support this long-connection configuration.

In addition, if the nginx server and the back-end servers are not on the same network segment (i.e. there is a firewall between the two machines), also pay attention to the firewall's impact on long connections.
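One option in that case (assuming the firewall silently drops idle flows after some timeout) is to have nginx close idle upstream connections before the firewall does. A sketch, using the keepalive_timeout directive that is valid in the upstream context since nginx 1.15.3; the 30s value is an assumption and should be set below your firewall's idle timeout:

```nginx
upstream yyy.xxx.web {
    server 36.10.xx.107:9001;
    server 36.10.xx.108:9001;

    keepalive 256;
    # Close idle keep-alive connections to the upstream before the
    # firewall's idle timeout can drop them silently (nginx >= 1.15.3).
    keepalive_timeout 30s;
}
```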


Reference: http://xiaorui.cc/2016/06/26/%E8%AE%B0%E4%B8%80%E6%AC%A1%E5%8E%8B%E6%B5%8B%E5%BC%95%E8%B5%B7%E7%9A%84nginx%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E6%80%A7%E8%83%BD%E8%B0%83%E4%BC%98/


Origin: www.cnblogs.com/zjfjava/p/10909087.html