Five methods of load balancing

Load balancing in Nginx is built on the reverse-proxy principle: Nginx receives client requests and forwards them to a group of back-end servers defined in an upstream block.
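For context, here is a minimal sketch of how such an upstream group is referenced from a reverse-proxy server block (the group name backserver matches the examples below; the listen port is only an assumption):

server {
    listen 80;
    location / {
        # forward all requests to the upstream group named backserver
        proxy_pass http://backserver;
    }
}
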
Five common methods of load balancing:

1. Round robin (default)
Each request is distributed to the back-end servers in turn, in the order it arrives. If a back-end server goes down, it is automatically removed from rotation.

upstream backserver {
    server 192.168.12.1;
    server 192.168.12.2;
}

2. Weighted round robin (the higher the weight, the more requests the server receives)
Specifies the selection probability: a server's weight is proportional to its share of requests, which is useful when back-end servers have uneven performance. (In the example below, the servers receive 40% and 60% of requests respectively.)

upstream backserver {
    server 192.168.0.14 weight=4;
    server 192.168.0.15 weight=6;
}

3. ip_hash
The two methods above share a problem: in a load-balanced cluster, a user who logs in on one server may have the next request routed to a different server, and the login (session) information held by the first server is then lost. That is clearly unacceptable.

The ip_hash directive solves this. Once a client has visited a certain server, subsequent requests from that client are routed to the same server by the hash algorithm.

Each request is assigned according to the hash of the client IP, so every visitor consistently reaches the same back-end server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.2:88;
    server 192.168.0.3:80;
}

4. fair (third-party)
Distributes requests according to the back-end servers' response time; servers with shorter response times are given priority.

upstream backserver {
    server server1;
    server server2;
    fair;
}

5. url_hash (third-party)
Distributes requests according to the hash of the requested URL, so that each URL is always directed to the same back-end server. This is most effective when the back ends are cache servers.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

Finally, each server entry in an upstream block can carry status parameters:
1. down: the server it is attached to temporarily does not participate in load balancing.
2. weight: defaults to 1; the larger the weight, the larger the share of requests the server receives.
3. max_fails: the number of allowed failed requests, 1 by default. When the maximum is exceeded, the error defined by proxy_next_upstream is returned.
4. fail_timeout: how long the server is paused after max_fails failures.
5. backup: requests reach this machine only when all other non-backup machines are down or busy, so it carries the lightest load.
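
A minimal sketch combining these parameters (the addresses and values are placeholders, not taken from the examples above):

upstream backserver {
    server 192.168.0.14 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.0.15 weight=1;
    server 192.168.0.16 down;      # temporarily out of rotation
    server 192.168.0.17 backup;    # used only when the others are down or busy
}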
