Several load balancing algorithms and configuration examples of Nginx

Abstract: Nginx's load balancing (operating at layer 7, the application layer) is implemented mainly through the upstream module. By default, Nginx can detect the health of back-end servers, although this detection is limited to port checks, and its load balancing works especially well when the number of back-end servers is small. Nginx supports several load balancing algorithms; the default is round-robin, which distributes each request to a different back-end server in turn. If a back-end server goes down, the faulty machine is removed automatically, so user access is unaffected.

Nginx's load balancing (operating at layer 7, the application layer) is implemented mainly through the upstream module. By default, Nginx can detect the health of back-end servers, although this detection is limited to port checks. Its load balancing capability is particularly strong when the number of back-end servers is small.

Several load balancing algorithms of Nginx:

1. Round-robin (default): Each request is distributed to a different back-end server in turn, in chronological order. If a back-end server goes down, the faulty machine is removed automatically, so user access is unaffected.

2. weight: Specifies a round-robin weight. The larger the weight value, the higher the probability that a server is chosen. This is mainly used when the back-end servers have uneven performance.

3. ip_hash: Each request is assigned according to a hash of the client IP address, so that each visitor consistently reaches the same back-end server. This effectively solves the session-sharing problem of dynamic web applications.

4. fair (third party): A smarter load balancing algorithm that balances load according to page size and load time; that is, requests are distributed according to back-end response time, with servers that respond faster given priority. To use this scheduling algorithm you need Nginx's upstream_fair module.

5. url_hash (third party): Requests are distributed according to a hash of the requested URL, so that each URL is always directed to the same back-end server. This can further improve the efficiency of back-end cache servers. To use this scheduling algorithm you need Nginx's hash package.
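As a sketch of how these algorithms are selected in configuration (the addresses and pool names below are placeholders, not from the example later in this article), a weighted round-robin upstream and an ip_hash upstream might look like this:

```nginx
# Weighted round-robin: backend 192.168.0.10 receives roughly three
# times as many requests as 192.168.0.11 (addresses are examples only).
upstream weighted_pool {
    server 192.168.0.10:80 weight=3;
    server 192.168.0.11:80 weight=1;
}

# ip_hash: requests from the same client IP always go to the same
# back-end server, which keeps sessions sticky.
upstream hashed_pool {
    ip_hash;
    server 192.168.0.10:80;
    server 192.168.0.11:80;
}
```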

In the upstream module, the server directive specifies the IP address and port of each back-end server, and it can also set the state of each back-end server in load balancing scheduling. The commonly used states are as follows:

1. down: Indicates that the server temporarily does not participate in load balancing.

2. backup: A reserved backup machine. The backup machine receives requests only when all other non-backup machines fail or are busy, so it has the lowest access pressure.

3. max_fails: The number of failed requests allowed; the default is 1. Used together with fail_timeout.

4. fail_timeout: The time the server is suspended after max_fails failures; the default is 10s. (If connections to a server fail max_fails times, Nginx considers that server unavailable and will not distribute requests to it for the next fail_timeout period.)
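These states are set as parameters of the server directive. A minimal sketch (addresses are examples only):

```nginx
upstream state_demo {
    # After 3 failures, this server is suspended for 30 seconds.
    server 192.168.0.10:80 max_fails=3 fail_timeout=30s;

    # Temporarily excluded from the rotation.
    server 192.168.0.11:80 down;

    # Used only when all non-backup servers are unavailable.
    server 192.168.0.12:80 backup;
}
```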

The following is an example of a load balancing configuration. Only the http configuration section is listed here, and other parts of the configuration are omitted:

http
{
    upstream whsirserver {
        server 192.168.0.120:80 weight=5 max_fails=3 fail_timeout=20s;
        server 192.168.0.121:80 weight=1 max_fails=3 fail_timeout=20s;
        server 192.168.0.122:80 weight=3 max_fails=3 fail_timeout=20s;
        server 192.168.0.123:80 weight=4 max_fails=3 fail_timeout=20s;
    }

    server
    {
        listen 80;
        server_name blog.whsir.com;
        index index.html index.htm;
        root /data/www;

        location / {
            proxy_pass http://whsirserver;
            proxy_next_upstream http_500 http_502 error timeout invalid_header;
        }
    }
}

The upstream block defines a load balancer pool named whsirserver. You can choose this name yourself; proxy_pass then refers to the pool by that name.

The proxy_next_upstream directive defines the failover strategy: when a back-end node returns a 500 or 502 error, a connection error, a timeout, or an invalid header, the request is automatically passed to another server in the upstream pool, achieving failover.
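A fuller location block might combine failover with explicit proxy timeouts, so that a slow back end is abandoned quickly and the request retried elsewhere. The timeout values below are illustrative, not prescriptive:

```nginx
location / {
    proxy_pass http://whsirserver;

    # Retry on another upstream server for these error conditions.
    proxy_next_upstream error timeout invalid_header http_500 http_502;

    # Bound how long Nginx waits on the back end before failing over.
    proxy_connect_timeout 5s;
    proxy_read_timeout 30s;
    proxy_send_timeout 30s;
}
```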

