NGINX Learning (Five) - nginx load balancing

Nginx load balancing is a commonly used feature. Load balancing means distributing work across multiple operating units, such as web servers, FTP servers, and enterprise mission-critical application servers, so that they complete tasks together. Put simply, when there are two or more servers, requests are distributed to a particular server according to some rule. Configuring load balancing generally also requires configuring a reverse proxy, since requests reach the backend servers through the reverse proxy. Nginx currently supports three built-in load-balancing strategies, plus two popular third-party ones.
1, RR (default)

Each request is assigned to a different backend server, one by one, in order (round robin). If a backend server goes down, it is automatically removed from the rotation.
Simple configuration:
  upstream test {
      server localhost:8080;
      server localhost:8081;
  }
  server {
      listen       81;
      server_name  localhost;
      client_max_body_size 1024M;

      location / {
          proxy_pass http://test;
          proxy_set_header Host $host:$server_port;
      }
  }

Two servers are configured here. In fact they are the same machine on different ports, and no server is running on 8081, so that server cannot be reached. But when we visit http://localhost there is no problem: by default the request goes to http://localhost:8080, because Nginx automatically determines the state of each server. If a server is unreachable (it has hung), requests will not be forwarded to it, so a single dead server does not affect overall availability. Since RR is Nginx's default policy, no extra configuration is needed.
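The automatic removal described above can be tuned per server with the max_fails and fail_timeout parameters of the server directive. A sketch; the thresholds below are illustrative, not nginx's defaults (the defaults are max_fails=1 and fail_timeout=10s):

  upstream test {
      # after 3 failed attempts within fail_timeout, the server is
      # considered unavailable and skipped for 30 seconds
      server localhost:8080 max_fails=3 fail_timeout=30s;
      server localhost:8081 max_fails=3 fail_timeout=30s;
  }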

2, weight
Requests are polled in proportion to each server's weight: the probability of a server being chosen is proportional to its weight. This is used when the backend servers' performance is uneven. For example:

upstream test {
     server localhost:8080 weight=9;
     server localhost:8081 weight=1;
}

With this configuration, out of every 10 requests generally only 1 will go to 8081, while 9 will go to 8080.

3, ip_hash 
The two methods above share a problem: the next request may be distributed to a different server. When our application is not stateless (for example, it uses the session to save data), this is a serious issue. If login information is saved in the session, then jumping to another server forces the user to log in again. So in many cases we need each client to always access the same server, and that is what ip_hash is for: ip_hash assigns each request according to a hash of the client's IP, so every visitor reaches a fixed backend server, which solves the session problem.

upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
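One related note: with ip_hash, a backend that must be temporarily taken out of service should be marked with the down parameter rather than deleted from the upstream block, so that the hashing of client IPs to the remaining servers is preserved. A sketch:

upstream test {
    ip_hash;
    server localhost:8080;
    # temporarily out of rotation; keeps the hash distribution stable
    server localhost:8081 down;
}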

4, fair (third party) 
Requests are assigned according to the response time of each backend server; servers with shorter response times are given priority.

upstream backend { 
    fair; 
    server localhost:8080;
    server localhost:8081;
} 

5, url_hash (third party) 
Requests are distributed according to a hash of the accessed URL, so that each URL is directed to the same backend server. This is effective when the backend servers are caches. The hash statement is added in the upstream block; the server directive must not carry other parameters such as weight. hash_method specifies the hash algorithm to use.

upstream backend { 
    hash $request_uri; 
    hash_method crc32; 
    server localhost:8080;
    server localhost:8081;
}

Each of the five load-balancing methods above is suited to a different situation, so you can choose which strategy to use according to your actual circumstances. Note that fair and url_hash require third-party modules to be installed before they can be used. Since this article only covers what Nginx itself can do, installing those third-party modules is not described here.


Origin www.cnblogs.com/gllegolas/p/11724652.html