Nginx configuration and deployment research: the Upstream load balancing module

Nginx's HttpUpstreamModule provides simple load balancing across a group of backend servers. One of the simplest upstream blocks is written as follows:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}

1. Backend server

Backend servers are defined inside the upstream block, and each one can be specified by IP address and port, by domain name, or by UNIX socket. If a domain name resolves to multiple addresses, all of those addresses are used as backends. The following example illustrates this:

upstream backend {
    server blog.csdn.net;
    server 145.223.156.89:8090;
    server unix:/tmp/backend3;
}

The first backend is specified by domain name, the second by an IP address and port, and the third by a UNIX socket.

2. Load balancing strategy

Nginx provides three load balancing strategies: round robin (the default), client IP hash (ip_hash), and specified weights.

By default, Nginx uses round robin as its load balancing strategy, but that may not suit you. For example, if a series of requests within a certain period are all initiated by the same user, Michael, his first request might go to backend2, the next to backend3, then backend1, backend2, backend3, and so on. In most application scenarios this is not what you want, for example when session state lives on a single backend. For exactly this reason, Nginx also lets you hash on the client IP of users such as Michael, Jason and David, so that every request from a given client is sent to the same backend server. The usage is as follows:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

With this strategy, the key used for the hash is the first three octets of the client's IPv4 address (the /24 network, traditionally a class C subnet). This ensures that requests from the same client always reach the same backend. If the backend a client hashes to is unavailable, the request is passed to another backend.

Another keyword often used together with ip_hash is down. When a server is temporarily out of service, mark it with down and it will no longer receive requests. For example:

upstream backend {
    server blog.csdn.net down;
    server 145.223.156.89:8090;
    server unix:/tmp/backend3;
}

You can also assign weights to servers, as follows:

upstream backend {
    server backend1.example.com;
    server 123.123.123.123:456 weight=4;
}

By default the weight is 1. In the example above, the first server keeps the default weight of 1 and the second has weight 4, so the first receives roughly 20% of the requests and the second roughly 80%. Note that weight and ip_hash cannot be used at the same time; they are different, conflicting strategies.
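
To make the proportions concrete, here is a minimal sketch (the hostnames and upstream name are illustrative placeholders, not from the original example) in which the weights add up to 10, so each server receives roughly its weight divided by 10 of the traffic:

upstream weighted_backend {
    server backend1.example.com weight=1;   # ~10% of requests
    server backend2.example.com weight=3;   # ~30% of requests
    server backend3.example.com weight=6;   # ~60% of requests
}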

3. Retry strategy

For each backend you can specify the maximum number of failed attempts and the window in which they are counted, using the max_fails and fail_timeout parameters. For example:

upstream backend {
    server backend1.example.com weight=5;
    server 54.244.56.3:8081 max_fails=3 fail_timeout=30s;
}

In the example above, the second backend is allowed at most 3 failed attempts within a 30-second window; after that it is considered unavailable for the remainder of the window. The default value of max_fails is 1 and the default of fail_timeout is 10s. What counts as a failed attempt is determined by proxy_next_upstream (or fastcgi_next_upstream), and proxy_connect_timeout and proxy_read_timeout control how long Nginx waits for the upstream to respond.
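
As a rough sketch of how these directives fit together (the hostnames, timeout values and error conditions below are illustrative assumptions, not values from the article):

upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backend;
        # treat connection errors, timeouts and 5xx responses as failed attempts
        proxy_next_upstream error timeout http_500 http_502 http_503;
        # give up connecting to a backend after 5 seconds
        proxy_connect_timeout 5s;
        # give up waiting for a response after 10 seconds
        proxy_read_timeout 10s;
    }
}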

One thing to note: when there is only one server in an upstream, the max_fails and fail_timeout parameters may have no effect. Nginx will try the upstream request only once, and if it fails the request is dropped. A somewhat hacky workaround is to list your poor lone server several times in the upstream, as follows:

upstream backend {
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
}

4. Standby strategy

Since Nginx version 0.6.7, the backup keyword can be used. A server marked with backup only receives requests when all non-backup servers are down or busy. Note that backup cannot be combined with the ip_hash keyword. An example is as follows:

upstream backend {
    server backend1.example.com;
    server backend2.example.com backup;
    server backend3.example.com;
}
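
Putting the pieces together, here is a hedged sketch of an upstream that combines weights, failure detection and a backup server (all hostnames are placeholders):

upstream backend {
    server backend1.example.com weight=3 max_fails=3 fail_timeout=30s;
    server backend2.example.com weight=1 max_fails=3 fail_timeout=30s;
    # only used when both primary servers are considered unavailable
    server backup1.example.com backup;
}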

Reference URL: http://www.linuxde.net/2012/06/11006.html
