Several Nginx load balancing methods

Focusing on ip_hash and weighted round-robin

Nginx acts as a reverse proxy for back-end web servers (Apache, Nginx, Tomcat, WebLogic, etc.).

    When several back-end web servers are used, you need to consider file sharing, database sharing, and session sharing.

    File sharing can be done with NFS, shared storage (FC or IP storage) plus the Red Hat GFS cluster file system, rsync + inotify file synchronization, and so on. In small to medium-sized clusters, NFS is the most common choice. For content management systems, rsync + inotify is a good way to synchronize content to multiple servers.

    For small clusters, a single high-performance database server (e.g. dual quad-core Xeon with 32/64/128 GB of RAM) is enough. Large clusters may need a database cluster: you can use the cluster software provided by MySQL, or build a MySQL cluster with keepalived + LVS and read/write splitting.

    Session sharing is the bigger problem. If nginx uses the ip_hash method, each client IP is pinned to the same back-end server for a period of time, so there is no session sharing problem to solve. If, on the other hand, requests from one IP are distributed across multiple servers, sessions must be shared: you can put sessions on NFS, or write them into MySQL, memcached, or similar. When the cluster is relatively large, sessions are usually written into memcached.


Here we discuss two nginx load balancing methods: weighted round-robin (or unweighted, i.e. 1:1 load) and ip_hash (the same IP is always assigned to a fixed back-end server, which avoids the session problem).

If there is only one web cluster, the configuration can be written directly in nginx.conf; if there are multiple web clusters, it is better to write it in vhosts as virtual hosts. Here I write it in nginx.conf.

The first configuration: weighted round-robin. Weights are assigned according to server performance; in this case a 1:2 split:
 upstream lb {
                server 192.168.196.130 weight=1 fail_timeout=20s;
                server 192.168.196.132 weight=2 fail_timeout=20s;
 }

 server {
                listen 80;
                server_name safexjt.com www.safexjt.com;
                index index.html index.htm index.php;
                location / {
                        proxy_pass http://lb;
                        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
                        include proxy.conf;
                }
 }
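Both configurations include a proxy.conf file whose contents are not shown in the original. A minimal sketch of what such a shared proxy settings file typically contains (the header names and timeout values below are assumptions, not taken from the original):

```nginx
# proxy.conf -- common proxy settings included by the virtual hosts (illustrative)
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 30;
proxy_send_timeout 30;
proxy_read_timeout 30;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
```

Passing the Host and client-address headers through is what lets the back-end servers log real client IPs instead of the proxy's address.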

The second configuration: the ip_hash method. Note that with ip_hash the servers cannot be weighted (in older nginx versions, ip_hash ignores the weight parameter):

 upstream lb {
                ip_hash;
                server 192.168.196.130 fail_timeout=20s;
                server 192.168.196.132 fail_timeout=20s;
 }

 server {
                listen 80;
                server_name safexjt.com www.safexjt.com;
                index index.html index.htm index.php;
                location / {
                        proxy_pass http://lb;
                        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
                        include proxy.conf;
                }
 }

Method 2: nginx load balancing with session persistence (sticky sessions) based on ip_hash


1. Round-robin (default)
Each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is automatically removed from the rotation.

upstream backserver {
server 192.168.0.14;
server 192.168.0.15;
}
2. Weighted (weight)
Specifies the polling probability; weight is proportional to the share of requests a server receives. Used when back-end server performance is uneven.

upstream backserver {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}
3. IP binding (ip_hash)
Each request is assigned according to a hash of the client IP, so each visitor always reaches the same back-end server. This solves the session problem.

upstream backserver {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}
4. fair (third party)
Requests are assigned according to the back-end servers' response times; servers with shorter response times are given priority.

upstream backserver {
server server1;
server server2;
fair;
}
5. url_hash (third party)
Requests are assigned according to a hash of the requested URL, so each URL is directed to the same back-end server. This is most effective when the back-end servers are caches.

upstream backserver {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}
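The hash_method directive above comes from a third-party upstream hash module. Since nginx 1.7.2, the built-in hash directive can achieve the same effect without a third-party module. A sketch, reusing the same hypothetical squid back ends:

```nginx
upstream backserver {
    # built-in hashing on the request URI (nginx >= 1.7.2);
    # "consistent" enables ketama consistent hashing, which minimizes
    # remapping of keys when a server is added or removed
    hash $request_uri consistent;
    server squid1:3128;
    server squid2:3128;
}
```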
Add the following to the server block that needs load balancing:

proxy_pass http://backserver/;
upstream backserver {
ip_hash;
server 127.0.0.1:9090 down;     # down: this server temporarily does not take part in load balancing
server 127.0.0.1:8080 weight=2; # weight defaults to 1; the larger the weight, the larger the share of load
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;   # backup: requests go here only when all non-backup machines are down or busy
}
max_fails: the number of failed requests allowed; defaults to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: the time the server is paused after max_fails failures. Both parameters are placed after the server directive.
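The two parameters can be combined on each server line. A sketch, using the same hypothetical back-end addresses as above:

```nginx
upstream backserver {
    # mark a server as unavailable after 3 failed requests, and take it
    # out of rotation for 30 seconds before trying it again
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:6060 max_fails=3 fail_timeout=30s;
}
```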
