Several methods of Nginx load balancing

Focus on ip_hash and weighting

Nginx can act as a reverse proxy in front of back-end web servers (Apache, Nginx, Tomcat, WebLogic, and so on).

When several back-end web servers are involved, you have to think about file sharing, database sharing, and session sharing. File sharing can be handled with NFS, with shared storage (FC or IP storage) plus the Red Hat GFS cluster file system, with rsync + inotify file synchronization, and so on; in small and medium-sized clusters NFS is the most common choice, while for content management systems pushing the content to multiple servers with rsync + inotify works well. For a small cluster, a single high-performance database server (for example dual quad-core Xeon with 32/64/128 GB of RAM) is usually enough; a large cluster may need a database cluster, built either with the cluster software provided by MySQL or with keepalived + LVS for read/write splitting. Session sharing is the bigger problem. If Nginx uses the ip_hash method, each client IP is pinned to one back-end server for a period of time, so session sharing does not have to be solved separately. If, on the other hand, requests from one IP are distributed across several servers in turn, sessions must be shared: they can be kept on NFS, or written to MySQL or memcache. On larger deployments sessions are usually written to memcache.


Here we discuss two Nginx load-balancing methods: weighted round robin (or unweighted, i.e. a 1:1 split) and ip_hash (the same client IP is always sent to the same back-end server, which sidesteps the session problem).
The configuration can go straight into nginx.conf if there is only one web cluster; with several web clusters it is better to keep it in vhost files, one per virtual host (a sketch of the vhost includes follows the second configuration below).
The first configuration, in nginx.conf: weighted round robin, with weights assigned according to server performance; in this case a 1:2 split.
upstream lb {
    server 192.168.196.130 weight=1 fail_timeout=20s;
    server 192.168.196.132 weight=2 fail_timeout=20s;
}

server {
    listen       80;
    server_name  safexjt.com www.safexjt.com;
    index        index.html index.htm index.php;
    location / {
        proxy_pass http://lb;
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        include proxy.conf;
    }
}
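
The proxy.conf file pulled in by include above is not shown in the original. As a minimal sketch, assuming a typical reverse-proxy setup, it might contain something like the following (all values here are illustrative and should be adjusted to your environment):

# proxy.conf -- illustrative sketch: pass client headers to the back end and set timeouts
proxy_set_header   Host              $host;
proxy_set_header   X-Real-IP         $remote_addr;
proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_connect_timeout  30;
proxy_send_timeout     30;
proxy_read_timeout     60;
proxy_buffer_size      16k;
proxy_buffers          4 32k;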

The second configuration: ip_hash. Requests from the same client IP always go to the same back-end server, and the servers are not given weights.

upstream lb {
    ip_hash;
    server 192.168.196.130 fail_timeout=20s;
    server 192.168.196.132 fail_timeout=20s;
}

server {
    listen       80;
    server_name  safexjt.com www.safexjt.com;
    index        index.html index.htm index.php;
    location / {
        proxy_pass http://lb;
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        include proxy.conf;
    }
}
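
As noted above, when there are several web clusters the upstream and server blocks are better kept in per-cluster vhost files that nginx.conf pulls in. A minimal sketch, assuming the vhost files live in a vhosts/ directory under the Nginx configuration directory (the path and file names are illustrative):

# nginx.conf -- sketch of pulling in per-cluster vhost files
http {
    include  mime.types;
    # one file per web cluster, e.g. vhosts/www.safexjt.com.conf
    include  vhosts/*.conf;
}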

Method 2: Nginx load balancing with session stickiness based on ip_hash


1. Round robin (the default)
Each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is removed from rotation automatically.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
2. weight
Specifies the polling probability: a server's weight is proportional to its share of requests. Use this when back-end server performance is uneven.

upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
3. IP binding: ip_hash
Each request is assigned according to a hash of the client IP, so every visitor always reaches the same back-end server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
4. fair (third party)
Requests are assigned according to the back-end servers' response times, and servers with shorter response times are preferred. This scheduler comes from a third-party module (upstream fair) that has to be compiled into Nginx.

upstream backserver {
    server server1;
    server server2;
    fair;
}
5. url_hash (third party)
Requests are assigned according to a hash of the requested URL, so each URL is always directed to the same back-end server. This is most effective when the back-end servers are caches (such as Squid).

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

In the server block that should use the load balancing, add:

proxy_pass http://backserver/;

upstream backserver {
    ip_hash;
    server 127.0.0.1:9090 down;        # down: this server temporarily does not take part in load balancing
    server 127.0.0.1:8080 weight=2;    # weight defaults to 1; the larger the weight, the larger the share of requests
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;      # backup: used only when all the other non-backup servers are down or busy
}
max_fails: the number of failed requests allowed, 1 by default. When this number is exceeded, the error defined by the proxy_next_upstream directive is returned.
fail_timeout: how long the server is paused after max_fails failures. Both parameters are set on the server line.
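
As a minimal sketch of how these two parameters sit on the server line (the addresses and values are illustrative):

upstream backserver {
    # after 3 failed requests the server is taken out of rotation for 30 seconds
    server 192.168.0.14 max_fails=3 fail_timeout=30s;
    server 192.168.0.15 max_fails=3 fail_timeout=30s;
}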
