nginx's upstream currently supports five ways of distributing requests across back-end servers:

1. Round robin (default)

Each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is automatically removed from the rotation.
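
For illustration, a minimal round-robin sketch (addresses are illustrative; no balancing directive is needed because round robin is the default):

upstream backend {
    server 192.168.0.14;
    server 192.168.0.15;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}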

2. Weight
Specifies the polling probability. The weight is proportional to the access ratio, and this method is used when the performance of the back-end servers is uneven.
For example:
upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}
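
Since both servers above carry the same weight, the effect is the same as plain round robin. A sketch with unequal weights (the 3:1 ratio is illustrative) makes the proportion visible:

upstream bakend {
    # roughly three out of every four requests go to .14
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=1;
}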


3. ip_hash
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server. This can solve the session-persistence problem.
For example:
upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}


4. fair (third party)
Allocates requests according to the response time of the back-end servers; servers with shorter response times are served first. The fair directive is provided by a third-party module.
upstream backend {
server server1;
server server2;
fair;
}


5. url_hash (third party)

Allocates requests according to a hash of the requested URL, so that each URL is directed to the same back-end server. This is more effective when the back-end servers are caches.

Example: add a hash statement to the upstream block; other parameters such as weight cannot be written in the server directives. hash_method specifies the hash algorithm to use.

upstream backend {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}
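
Note that hash_method comes from the older third-party upstream hash module; newer nginx versions ship a built-in hash directive that covers the same use case. A sketch of the built-in form, assuming a reasonably recent nginx:

upstream backend {
    # built-in hashing on the request URI; "consistent" enables consistent (ketama-style) hashing
    hash $request_uri consistent;
    server squid1:3128;
    server squid2:3128;
}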


tips:

upstream bakend { # define the IPs and status of the load-balanced devices
ip_hash;
server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}

Then add proxy_pass http://bakend/; to the server block that needs load balancing, as sketched below.
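
A minimal sketch of such a server block (the listen port and location are illustrative):

server {
    listen 80;
    location / {
        # requests for this location are balanced across the bakend group
        proxy_pass http://bakend/;
    }
}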

The status of each device can be set as follows:
1. down: the server is temporarily taken out of rotation and does not participate in load balancing.
2. weight: defaults to 1. The larger the weight, the larger the share of the load.
3. max_fails: the number of allowed request failures, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned (see the sketch after this list).
4. fail_timeout: the time to pause the server after max_fails failures.
5. backup: requests are sent to the backup machine only when all other non-backup machines are down or busy, so this machine carries the least load.
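
A minimal sketch of how max_fails/fail_timeout combine with proxy_next_upstream (addresses and thresholds are illustrative):

upstream bakend {
    # after 3 failures within 30s, skip this server for the next 30s
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:6060;
}

server {
    listen 80;
    location / {
        proxy_pass http://bakend;
        # which conditions count as a failure and cause nginx to try the next server
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}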

Nginx supports defining multiple load-balancing groups at the same time, for use by the different server blocks that need them.
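
A sketch of two upstream groups used by two virtual hosts (all names, ports and addresses are illustrative):

upstream app_pool {
    server 192.168.0.21:8080;
    server 192.168.0.22:8080;
}

upstream static_pool {
    server 192.168.0.31:80;
    server 192.168.0.32:80;
}

server {
    listen 80;
    server_name app.example.com;
    location / { proxy_pass http://app_pool; }
}

server {
    listen 80;
    server_name static.example.com;
    location / { proxy_pass http://static_pool; }
}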

If client_body_in_file_only is set to on, the data from client POST requests is recorded to a file, which is useful for debugging.
client_body_temp_path sets the directory for these record files; up to three levels of subdirectories can be configured.
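
A sketch of these two directives (the path and directory levels are illustrative):

# write each client request body to a file under the temp path
client_body_in_file_only on;
# three levels of subdirectories to spread the files out
client_body_temp_path /var/tmp/nginx/client_body 1 2 2;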

location blocks match request URLs; within them you can redirect or proxy to another load-balanced upstream.
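
A sketch showing both possibilities (paths and upstream name are illustrative):

server {
    listen 80;
    # redirect old paths to their new location
    location /old/ {
        return 301 /new/;
    }
    # proxy everything else to a load-balanced group
    location / {
        proxy_pass http://backend;
    }
}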

To use nginx for load balancing, first define a set of backend servers for load balancing in the configuration file, for example:
upstream backend {
  server 192.168.1.11;
  server 192.168.1.12;
  server 192.168.1.13;
}

The syntax of the server directive is server name [parameters], where name can be a domain name, an IP address (optionally with a port), or a unix socket path, for example:
server 192.168.1.11:8080;
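
A sketch of the three address forms (the names and socket path are illustrative):

upstream backend {
    server backend1.example.com;     # domain name
    server 192.168.1.11:8080;        # IP address with port
    server unix:/var/run/app.sock;   # unix domain socket
}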

The parameters available for the server directive are:

weight - sets the weight of the server. The default value is 1; the larger the weight, the greater the probability of the server being selected. For example: server 192.168.1.11 weight=5;

max_fails and fail_timeout - these two are related: if a server fails max_fails times within fail_timeout, nginx considers that server down and stops querying it for the next fail_timeout. The default fail_timeout is 10s and the default max_fails is 1 (meaning the server is considered down as soon as a single error occurs); setting max_fails to 0 disables this check.
For example: server 192.168.1.11 max_fails=3 fail_timeout=30s; means that if server 192.168.1.11 produces 3 errors within 30 seconds, it is considered to be malfunctioning, and nginx will not access it for the following 30 seconds.

down - indicates that the server is disabled, for example server 192.168.1.11 down;

backup - indicates that the server is a backup; it is only used when all the other back-end servers are down or busy.

For more information about upstream, see http://wiki.nginx.org/NginxHttpUpstreamModule

 

http://blog.chinaunix.net/uid-20662363-id-3049712.html
