Using nginx to achieve load balancing (2)

1. Distribute by client IP
2. Distribute by URL
3. Distribute by weight
4. Distribute by response time

Nginx supports several strategies for distributing requests among back-end servers:

1. Round robin (default)
Each request is assigned to a different back-end server in turn. If a back-end server goes down, it is automatically removed from rotation.
2. weight
Specifies the round-robin probability; the weight is proportional to the share of requests a server receives. Use this when back-end servers have uneven performance.
3. ip_hash
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server. This solves the session-stickiness problem.
4. fair (third party)
Assigns requests according to the back-end servers' response times; servers with shorter response times are preferred.
5. url_hash (third party)
Assigns requests according to a hash of the requested URL, so each URL is always directed to the same back-end server. This is most effective when the back ends are cache servers.

upstream www.test1.com {  
    ip_hash;                               # stick each client to one server by client IP
    server 172.16.125.76:8066 weight=10;   # higher weight = larger share of requests
    server 172.16.125.76:8077 down;        # marked offline; excluded from balancing
    server 172.16.0.18:8066 max_fails=3 fail_timeout=30s;  # after 3 failures, suspend for 30s
    server 172.16.0.18:8077 backup;        # only used when the other servers are unavailable
} 

Different parameters can be set per server to match each machine's capacity and role:

down — the server is marked permanently offline and does not participate in load balancing.
weight — the larger the weight, the larger the share of requests the server receives.
backup — the server only receives requests when all the other servers are busy or down.
max_fails / fail_timeout — after max_fails failed attempts within fail_timeout, the server is suspended for fail_timeout before requests are sent to it again; in the meantime requests go to the other servers.

Note that backup cannot be combined with the ip_hash method; nginx will reject such a configuration, so in the example above you would have to choose one or the other.
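To put the upstream group to use, a server block proxies requests to it by name. A minimal sketch, reusing the upstream name from the example above:

```nginx
server {
    listen 80;
    server_name www.test1.com;

    location / {
        # forward all requests to the upstream group defined above
        proxy_pass http://www.test1.com;
        # pass the original host and client address through to the back ends
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because `proxy_pass` references the upstream by name, nginx applies the configured balancing method and server parameters on every request.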


Origin: blog.csdn.net/weixin_43452467/article/details/109197560