nginx upstream

ngx_http_upstream_module

Used to define a group of servers that can then be referenced by the proxy_pass and fastcgi_pass directives.

upstream name

Defines a back-end server group; the servers in the group may listen on different ports. The default scheduling algorithm is weighted round-robin (wrr). The upstream block can only appear in the http context.


upstream websrv {
    server 11.2.2.228:80;
    server 11.2.3.63:80;
}

[root@nginx a]# curl http://www.a.com
11.2.2.228
[root@nginx a]# curl http://www.a.com
11.2.3.63


# After one back-end server is stopped, nginx keeps proxying requests to the remaining healthy host
[root@localhost ~]# systemctl stop httpd
[root@localhost ~]# curl http://www.a.com/index.html
11.2.3.63


The two hosts may also listen on different ports.
upstream websrv {
    server 11.2.3.63:80;
    server 11.2.2.228:8080;
}

[root@nginx a]# curl http://www.a.com
11.2.3.63
[root@nginx a]# curl http://www.a.com
11.2.2.228

server address [parameters]

Defines a server address and its parameters inside an upstream block.

The main parameters are:

| parameter | explanation |
| --- | --- |
| weight=number | Scheduling weight; default is 1 |
| max_conns=number | Maximum number of concurrent connections to the back-end server; the default 0 means unlimited |
| max_fails=number | Number of failed attempts after which the server is marked unavailable; default is 1 |
| fail_timeout=time | How long the server stays marked unavailable (and the window in which failures are counted); default is 10 seconds |
| backup | Marks the server as a standby, used only when all other servers are unavailable |
| down | Marks the server as unavailable; combined with ip_hash it can implement gray release |

PS: max_fails is the number of failed attempts within the fail_timeout window after which the server is considered unavailable; fail_timeout is also how long the server then stays marked unavailable before being retried.

Example:

[root@nginx conf.d]# cat a.com.conf
upstream websrv {
    server 11.2.3.63:80 weight=2 max_fails=3 fail_timeout=20s;
    server 11.2.2.228:8080 weight=1 max_fails=3 fail_timeout=20s;
}
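The backup and down parameters from the table above could be combined like this (a sketch; the 11.2.2.230 standby address is hypothetical):

```nginx
upstream websrv {
    server 11.2.3.63:80 weight=2 max_fails=3 fail_timeout=20s;
    server 11.2.2.228:8080 weight=1 max_fails=3 fail_timeout=20s;
    server 11.2.2.230:80 backup;   # standby: only used when all other servers are unavailable
}
```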


ip_hash

Source-address hash scheduling algorithm: requests from the same client IP are always scheduled to the back-end server chosen on the client's first access.

Used together with down, it can implement gray release.
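A gray-release sketch: mark one server down, and clients hashed to it move to the remaining server while it is upgraded (addresses follow the example below):

```nginx
upstream websrv {
    ip_hash;
    server 192.168.199.103:80;
    server 192.168.199.132:8080 down;  # temporarily out of rotation during the upgrade
}
```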

[root@nginx conf.d]# cat a.com.conf
upstream websrv {
    ip_hash;
    server 192.168.199.103:80 weight=2 max_fails=3 fail_timeout=20s;
    server 192.168.199.132:8080 weight=1 max_fails=3 fail_timeout=20s;
}
[root@localhost ~]# curl http://www.a.com/index.html
192.168.199.103
[root@localhost ~]# curl http://www.a.com/index.html
192.168.199.103

least_conn

Least-connections scheduling algorithm. With different server weights it behaves as weighted least connections (wlc); when all back-end hosts have the same number of connections it falls back to wrr. It is well suited to long-lived connections.
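The directive is enabled inside the upstream block; a minimal sketch using the addresses from the earlier examples:

```nginx
upstream websrv {
    least_conn;
    server 11.2.3.63:80 weight=2;
    server 11.2.2.228:8080 weight=1;
}
```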

Example:

[root@localhost ~]# for i in {1..100};do sleep 1; curl http://www.a.com/index.html;done
192.168.199.103
192.168.199.103
192.168.199.132
192.168.199.103
192.168.199.103
192.168.199.132


hash key [consistent]

Schedules requests based on a hash table keyed on the specified key, which can be literal text, a variable, or a combination of the two. This classifies requests so that requests of the same type are sent to the same upstream server. With the consistent parameter, the ketama consistent-hash algorithm is used, which is suitable when the back ends are cache servers (such as varnish).
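A minimal sketch of the directive, hashing on $request_uri with consistent hashing enabled (the upstream name cachesrv is hypothetical):

```nginx
upstream cachesrv {
    hash $request_uri consistent;  # ketama consistent hashing on the request URI
    server 192.168.199.103:80;
    server 192.168.199.132:8080;
}
```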

hash key algorithm

For example, hashing the request URI can greatly improve the cache hit rate.
The algorithm hashes the requested URI to obtain a value, then takes that value modulo the sum of the back-end server weights; the remainder determines which server's slot the request falls into. For example, with back-end weights of 1 and 2, the hash value is taken modulo 3: a remainder landing in server 1's slot goes to cache server 1, and one landing in server 2's slots goes to cache server 2. The same URI therefore always hits the same cache server, which greatly improves the cache hit rate.
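The modulo scheme above can be sketched in a few lines of Python (md5 stands in for nginx's actual hash function, and the server names are hypothetical):

```python
import hashlib

# Slots proportional to weight, mirroring the text's example:
# cache-a has weight 1, cache-b has weight 2, so hash % 3 picks a slot.
SERVERS = ["cache-a"] * 1 + ["cache-b"] * 2

def pick_server(uri: str) -> str:
    """Hash the request URI and take it modulo the total weight (3 here)."""
    h = int(hashlib.md5(uri.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]

# The same URI always maps to the same cache server,
# which is what raises the cache hit rate.
assert pick_server("/index.html") == pick_server("/index.html")
```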

However, there is a problem with this: if a back-end cache server is added or removed, the modulus changes and the existing cache entries are no longer hit, so requests bypass the cache and all the pressure falls on the back-end origin hosts. This is called hash penetration.

consistent hash algorithm

Take $request_uri as an example.
First imagine a ring labeled with the numbers 0, 1, 2, ... 2^32-1. For each back-end cache server, append a random number to the server IP (according to its weight) to generate virtual node names, hash each name, and take the result modulo 2^32; the remainder places that virtual node at a position on the ring. A request's URI is then hashed and taken modulo 2^32 as well, and the nearest cache server on the ring from that point is the one that serves the request. This can still cause a problem: many request_uri values may land near the same cache server on the ring, putting too much pressure on that server. This is called hash-ring skew.

The solution is to generate a large number of virtual nodes per cache server before taking the modulus, so that each server's points are distributed evenly around the whole ring.
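The ring with virtual nodes can be sketched in Python (md5 is an illustrative stand-in for nginx's actual hash, and the vnodes count is an arbitrary assumption):

```python
import bisect
import hashlib

def h(key: str) -> int:
    # Map a key onto the 0 .. 2**32 - 1 ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, servers, vnodes=160):
        # Each server is hashed many times ("virtual nodes") so its
        # points spread evenly around the ring and avoid hash-ring skew.
        self.points = sorted((h(f"{s}#{i}"), s)
                             for s in servers for i in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def pick(self, uri: str) -> str:
        # Walk clockwise to the first virtual node at or after hash(uri),
        # wrapping around the ring at the end.
        i = bisect.bisect(self.keys, h(uri)) % len(self.points)
        return self.points[i][1]

ring = Ring(["11.2.3.63:80", "11.2.2.228:8080"])
# Same URI -> same cache server; adding or removing a server
# only remaps the keys nearest its virtual nodes.
assert ring.pick("/index.html") == ring.pick("/index.html")
```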

I will add a picture to explain it later.


Origin blog.csdn.net/qq_44564366/article/details/105626088