Simple configuration of Nginx load balancing

  • Inside the http block, add an upstream block.

upstream book {
    server 127.0.0.1:8081; # optionally add: max_fails=2 fail_timeout=600s; after 2 failed requests, skip this server for 600 seconds
    server 127.0.0.1:8082; # max_fails=2 fail_timeout=600s;
}

  • In the location block under the server block, set proxy_pass to http:// plus the upstream name, i.e., http://book.
# Serve static resources directly from nginx's html folder instead of proxying to the backend
location ~ .*\.(gif|jpg|jpeg|png|bmp|ico|swf|js|css)$ {
    root html;
    expires 1d;
}

# Reverse proxy all other requests to the book upstream
location / {
    proxy_pass http://book;
    proxy_connect_timeout 1; # if a connection to one server is not established within 1 second, try the next server
    proxy_read_timeout 1;    # maximum wait for reading a response from the server; on timeout, the next server is tried
    proxy_send_timeout 1;    # maximum wait for sending the request to the server; on timeout, the next server is tried
}
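
Taken together, a minimal sketch of the whole http block might look like this (ports and paths are illustrative, matching the snippets above):

```nginx
http {
    # Backend pool; the name "book" is referenced by proxy_pass below
    upstream book {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;

        # Static assets served directly by nginx
        location ~ .*\.(gif|jpg|jpeg|png|bmp|ico|swf|js|css)$ {
            root html;
            expires 1d;
        }

        # Everything else is proxied to the upstream
        location / {
            proxy_pass http://book;
        }
    }
}
```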


  • Basic load balancing is now in place. By default, upstream uses round robin: each request is assigned to a different backend server in turn, and a server that goes down is automatically removed from the rotation. This method is simple and cheap, but its drawbacks are low reliability and uneven load distribution; it suits image-server clusters and purely static page clusters. upstream also supports other distribution strategies, as follows: weight specifies the polling probability, proportional to the access ratio, and is used when backend servers have uneven performance. In the example below, 127.0.0.1:8082 receives twice as many requests as 127.0.0.1:8081.
upstream book {
    server 127.0.0.1:8081 weight=5;
    server 127.0.0.1:8082 weight=10;
}

 

  • ip_hash distributes each request according to the hash of the client IP, so each visitor consistently reaches the same backend server, which solves the session-stickiness problem.
upstream book {
    ip_hash;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

 

  • fair (third-party) distributes requests according to the backend servers' response times, giving priority to the server with the shortest response time. It is similar in spirit to the weight strategy.
upstream book {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    fair;
}

 

  • url_hash (third-party) distributes requests according to the hash of the requested URL, so each URL is always directed to the same backend server; this is most effective when the backend servers do caching. Note: add the hash directive inside the upstream block; parameters such as weight cannot then be written in the server lines. hash_method specifies the hash algorithm to use.
upstream book {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    hash $request_uri;
    hash_method crc32;
}
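
The hash and hash_method directives above come from the third-party upstream_hash module. On nginx 1.7.2 and later, the built-in hash directive of ngx_http_upstream_module covers the same use case and hash_method is no longer needed; a minimal sketch:

```nginx
upstream book {
    hash $request_uri consistent; # built-in URI hashing; "consistent" enables ketama consistent hashing
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}
```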

Each server in an upstream block can also be given status parameters, with the following meanings:

down : the server temporarily does not participate in load balancing.

weight : defaults to 1; the larger the weight, the larger the share of the load.

max_fails : the number of failed requests allowed, defaulting to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout : the time to pause requests to the server after max_fails failures.

backup : receives requests only when all non-backup machines are down or busy, so this machine carries the lightest load.
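
These parameters are combined on the server lines of the upstream block; a sketch (the third server and the specific values are illustrative, not from the text above):

```nginx
upstream book {
    server 127.0.0.1:8081 weight=2 max_fails=3 fail_timeout=30s; # double the default share; skipped for 30s after 3 failures
    server 127.0.0.1:8082 down;   # temporarily removed from the rotation
    server 127.0.0.1:8083 backup; # hypothetical spare: used only when the others are down or busy
}
```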

 
