[Nginx 4]——Nginx's various strategies for load balancing

Series Article Directory

[Nginx 1]——Introduction to Nginx (forward proxy, reverse proxy, load balancing, dynamic and static separation)
[Nginx 2]——Nginx common commands, configuration file, and how Nginx handles requests
[Nginx 3]——Nginx implements reverse proxy



Foreword

This blog mainly introduces various strategies for Nginx to achieve load balancing, including round robin, least connection, IP hash, weighted round robin, and URL hash.


1. What is Nginx load balancing

Nginx can implement load balancing through its reverse proxy, distributing requests across different back-end servers according to a chosen strategy. This avoids single points of failure, enhances the availability of the whole system, and thereby improves its concurrent processing capability.

Nginx's reverse proxy can load balance HTTP, HTTPS, FastCGI, uwsgi, SCGI, memcached, and gRPC traffic.
To configure load balancing for HTTPS instead of HTTP, simply use "https" as the protocol.
When setting up load balancing for FastCGI, uwsgi, SCGI, memcached, or gRPC, use the fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives, respectively.

2. Multiple strategies for load balancing

1. Round Robin: the default strategy; requests are assigned to the servers in sequence.
2. Least Connections: each request is assigned to the server with the fewest active connections.
3. IP Hash: the client's IP address is used as the hash key, and requests are routed to servers according to the hash value.
4. Weighted Load Balancing: requests are assigned to servers according to preset weights; servers with higher weights receive more requests.
5. URL Hash: the requested URL is used as the hash key, and requests are routed according to the hash value, ensuring that requests for the same URL always go to the same server.

1. Round Robin

In the round-robin algorithm, multiple targets receive requests or tasks sequentially in a fixed order. Each new request is assigned to the next target in the rotation; once every target has received one, the rotation starts again from the beginning.
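The rotation can be sketched in a few lines of Python. This is a toy illustration of the scheduling idea, not nginx's implementation; the backend addresses are the same hypothetical ones used in the configuration example:

```python
from itertools import cycle

# Hypothetical backends, matching the nginx example configuration.
servers = ["192.168.60.1", "192.168.60.2", "192.168.60.3"]
picker = cycle(servers)

def next_server():
    """Return the next backend in fixed rotation, like nginx's default round robin."""
    return next(picker)

# Six requests visit each server twice, in order.
order = [next_server() for _ in range(6)]
print(order)
```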

To use a round-robin strategy to distribute requests to different servers, you can specify a round-robin strategy in the upstream block:

The code is as follows (example):

upstream backend {
  server 192.168.60.1;
  server 192.168.60.2;
  server 192.168.60.3;
}

server {
  listen 80;
  server_name example.com;

  location / {
    # proxy requests to the backend upstream group
    proxy_pass http://backend;
  }
}

In this example, the upstream block defines 3 backend servers, which are load balanced according to the default round-robin strategy. The server block configures request forwarding: every request is proxied to one of the servers in the backend group.

2. Least Connections

The least-connections strategy sends each request to the server with the fewest active connections, balancing the load across servers and avoiding situations where some servers are overloaded while others sit idle.
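As a toy model (not nginx's code), least-connections selection is simply a minimum over per-server active-connection counts; the counts below are made up for illustration:

```python
# Current active connections per backend (illustrative numbers).
active = {"192.168.60.1": 12, "192.168.60.2": 4, "192.168.60.3": 9}

def pick_least_conn(conns):
    """Return the backend with the fewest active connections."""
    return min(conns, key=conns.get)

chosen = pick_least_conn(active)
active[chosen] += 1  # the new request adds a connection to that backend
print(chosen)  # 192.168.60.2
```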

In Nginx, the least_conn directive enables least-connections load balancing. Here is an example using the least_conn strategy:

The code is as follows (example):

upstream backend {
    least_conn;
    server 192.168.60.1;
    server 192.168.60.2;
    server 192.168.60.3;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}

In the above example, we define a server group named backend and use the least_conn policy to send each request to the server with the fewest connections. We can also set parameters such as the weight (weight), maximum number of failures (max_fails), and failure timeout (fail_timeout) on each server entry, for example:

upstream backend {
    least_conn;
    server 192.168.60.1 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.60.2 weight=3 max_fails=2 fail_timeout=20s;
    server 192.168.60.3 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}

In the above example, we set the weight, maximum number of failures, and failure timeout on each server entry. Tune these parameters to the actual capacity and reliability of each server for better load-balancing results.
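How max_fails and fail_timeout interact can be modeled in a few lines of Python. This is a simplified sketch of the documented behavior (failures counted within a time window; the server is skipped once the limit is reached, then retried after the window passes), not nginx's actual bookkeeping:

```python
import time

class Peer:
    """Toy model of nginx's max_fails / fail_timeout accounting (simplified)."""
    def __init__(self, addr, max_fails=1, fail_timeout=10.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.first_fail = 0.0

    def record_failure(self, now=None):
        now = now if now is not None else time.monotonic()
        if now - self.first_fail > self.fail_timeout:
            self.fails = 0           # window expired: start counting again
            self.first_fail = now
        self.fails += 1

    def available(self, now=None):
        now = now if now is not None else time.monotonic()
        if now - self.first_fail > self.fail_timeout:
            return True              # penalty window has passed: retry the server
        return self.fails < self.max_fails

peer = Peer("192.168.60.1", max_fails=3, fail_timeout=30.0)
peer.record_failure(now=100.0)
peer.record_failure(now=105.0)
print(peer.available(now=106.0))  # True: only 2 of the 3 allowed failures so far
peer.record_failure(now=110.0)
print(peer.available(now=111.0))  # False: 3 failures within 30s
print(peer.available(now=150.0))  # True again once the 30s window has passed
```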

3. IP Hash

IP hash is a load balancing strategy that hashes the client's IP address so that requests from the same client are always sent to the same server. This sticky behavior avoids problems caused by session state being out of sync across multiple servers.

It should be noted that when using the IP hash strategy, adding or removing a server in the group changes the hash mapping, so requests previously assigned to one server may be reassigned to another. Plan for this when the server group changes dynamically.
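The idea can be modeled in Python. Note that this toy modulo hash is only an illustration: nginx's ip_hash uses its own hash function (based on the first three octets of an IPv4 address), and the pick_by_ip helper below is hypothetical:

```python
servers = ["192.168.60.1", "192.168.60.2", "192.168.60.3"]

def pick_by_ip(client_ip, backends):
    """Toy IP hash: the same client IP always maps to the same backend.
    (nginx's real ip_hash hashes the first three octets of an IPv4 address
    with its own function; this modulo scheme only illustrates the idea.)"""
    key = sum(int(octet) for octet in client_ip.split("."))
    return backends[key % len(backends)]

# The same client always lands on the same server...
assert pick_by_ip("192.168.0.1", servers) == pick_by_ip("192.168.0.1", servers)
# ...but removing a backend changes the mapping for many clients.
print(pick_by_ip("192.168.0.1", servers))  # 192.168.60.2
```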

The following is an example Nginx configuration using IP Hash (IP Hash) strategy:

upstream backend {
    ip_hash;
    server 192.168.60.1;
    server 192.168.60.2;
    server 192.168.60.3;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}

In the above configuration, we define a server group named backend and use the ip_hash strategy so that requests from the same client always go to the same server. Then, in the virtual host example.com, all requests are forwarded to the backend group through the reverse proxy.
Suppose client A's IP address is 192.168.0.1, client B's is 192.168.0.2, and client C's is 192.168.0.3. Their requests might be distributed as follows: client A's requests go to server 192.168.60.1; client B's requests go to server 192.168.60.2; client C's requests go to server 192.168.60.3.

Note that if a client's request is assigned to a server, all subsequent requests from this client will be assigned to the same server until the server in the server group changes.

4. Weighted Load Balancing

Weighted load balancing distributes the load according to each server's weight value. Every server is assigned a weight, and servers with higher weights receive more requests.

In Nginx, you can use the weight parameter of the server directive in the upstream module to set the weight value of the server. For example:

upstream backend {
    server 192.168.60.1 weight=10;
    server 192.168.60.2 weight=5;
    server 192.168.60.3 weight=3;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}

In the example above, server 192.168.60.1 has a weight of 10, server 192.168.60.2 has a weight of 5, and server 192.168.60.3 has a weight of 3. Nginx distributes requests in proportion to these weights: out of every 18 requests, 10 go to the first server, 5 to the second, and 3 to the third.

It should be noted that when using the weighted strategy, the performance and capacity of each server should be estimated accurately so that appropriate weights can be assigned. Poorly chosen weights can leave some servers overloaded and others underused, degrading the performance of the whole system.
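Modern nginx implements weighting with a "smooth" weighted round robin, which interleaves servers rather than sending bursts to the heaviest one. The sketch below follows the algorithm's public description (simplified, with no failure handling) and shows that over 18 requests the weights 10/5/3 from the example are honored exactly:

```python
def smooth_wrr(weights, rounds):
    """Smooth weighted round robin, sketched from its public description:
    each round, every server's current weight grows by its configured weight;
    the server with the largest current weight is picked and pays back the
    total weight. Simplified: no failure handling, no dynamic reconfiguration."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(rounds):
        for s, w in weights.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# Hypothetical backends with the weights from the example above.
weights = {"192.168.60.1": 10, "192.168.60.2": 5, "192.168.60.3": 3}
picks = smooth_wrr(weights, rounds=18)
# Over 18 requests, each server is chosen exactly in proportion to its weight.
print(picks.count("192.168.60.1"),
      picks.count("192.168.60.2"),
      picks.count("192.168.60.3"))  # 10 5 3
```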

5. URL Hash

URL hash is an Nginx load balancing strategy that hashes the requested URL and maps the result to a backend server. Requests for the same URL are therefore always assigned to the same backend, which helps with cache locality and session persistence.
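A toy model of this mapping is shown below. Note that nginx's hash directive uses its own hash function and also supports a `consistent` parameter for ketama-style consistent hashing; the md5-modulo scheme here is just an illustration, and pick_by_uri is a hypothetical helper:

```python
import hashlib

servers = ["192.168.60.1", "192.168.60.2", "192.168.60.3"]

def pick_by_uri(uri, backends):
    """Toy URL hash: hash the request URI and map it onto a backend.
    (Only illustrates the idea; nginx's `hash` directive uses its own
    hash function and can do consistent hashing with `hash ... consistent`.)"""
    digest = hashlib.md5(uri.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# The same URI is always routed to the same backend.
assert pick_by_uri("/cart", servers) == pick_by_uri("/cart", servers)
print(pick_by_uri("/cart", servers))
```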

Here is an example Nginx configuration using a URL hashing strategy:

upstream backend {
    hash $request_uri;
    server 192.168.60.1;
    server 192.168.60.2;
    server 192.168.60.3;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}

Here the hash directive selects the URL hash strategy, with $request_uri as the hash key, meaning the requested URI determines the backend. Three backend servers are defined: 192.168.60.1, 192.168.60.2, and 192.168.60.3.

It should be noted that with the URL hash strategy, changing the number of backend servers changes the hash mapping, so some requests will be redistributed to other servers, which may affect cache hit rates and performance. Weigh this trade-off case by case when choosing a URL hashing strategy.

Supplement: nginx health check

An Nginx health check means that the Nginx server periodically sends probe requests to the back-end servers to verify that they respond normally, and judges each server's health from the result. Health checks are essential for load balancing and high availability: they detect faulty servers in time and shift traffic from failed servers to healthy ones, keeping the service reliable and stable.

Nginx supports two kinds of health check: health checks based on HTTP requests, and health checks based on TCP connections.
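The verdict logic of an HTTP health check can be sketched as a simple predicate. The is_healthy helper below is hypothetical, not an nginx API; real checkers also track consecutive pass/fail counts before flipping a server's state:

```python
def is_healthy(status_code, elapsed_seconds, timeout=2.0):
    """Toy health-check verdict: a backend passes if its probe response
    arrived within the timeout with a 2xx/3xx status. (Illustration only;
    real health checkers require several consecutive passes or failures
    before marking a server up or down.)"""
    return elapsed_seconds <= timeout and 200 <= status_code < 400

print(is_healthy(200, 0.05))  # True
print(is_healthy(502, 0.05))  # False: bad gateway
print(is_healthy(200, 5.0))   # False: responded, but too slowly
```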


Origin blog.csdn.net/wangwei021933/article/details/129892572