Several Scheduling Algorithms for Nginx Layer-7 Load Balancing

Nginx is a lightweight, high-performance web server, and it also makes an excellent load balancer and reverse proxy server. Thanks to its powerful regular-expression matching rules, static/dynamic request separation, URL rewriting, simple installation and configuration, and very low dependence on network stability, it is commonly used for layer-7 (application-layer) load balancing. On reasonable hardware it can usually sustain tens of thousands of concurrent connections stably; with good enough hardware, and with the system kernel parameters and the Nginx configuration tuned, it can even exceed 100,000 concurrent connections.

The following are several common scheduling algorithms Nginx offers as a layer-7 load balancer, together with the business scenarios each one suits.

1. Round robin (the default scheduling algorithm)

Features: each request is assigned to a different backend server in turn, in the order the requests arrive.
Applicable business scenarios: backend servers with identical hardware configuration and performance, and no special business requirements.
upstream backendserver {
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
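An upstream group like this is consumed from a server block via proxy_pass. A minimal sketch for reference (the listen port and server_name here are assumptions for illustration, not part of the original configuration):

server {
    listen 80;
    server_name www.example.com;   # hypothetical domain
    location / {
        # forward all requests to the upstream group defined above
        proxy_pass http://backendserver;
    }
}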

2. Weighted round robin (weight)

Features: weighted round robin assigns each server a weight value; the share of requests a server receives is proportional to its weight, so user requests are distributed among the backends according to the configured weight ratio.
Applicable business scenarios: backend servers with uneven hardware processing power.
upstream backendserver {
    server 192.168.0.14:80 weight=5 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 weight=10 max_fails=2 fail_timeout=10s;
}
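With these weights, 192.168.0.15 receives roughly twice as many requests as 192.168.0.14 (about 10 out of every 15 requests versus 5 out of every 15).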

3. ip_hash

Features: each request is assigned according to a hash of the client's IP address, so a given visitor always reaches the same backend server; this solves the problem of session persistence.
Applicable business scenarios: applications that require account login, or any business that needs the session kept on one server.
upstream backendserver {
    ip_hash;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

4. Least connections (least_conn)

Features: requests are assigned to the backend server with the fewest active connections between the Nginx reverse proxy and the backends.

Applicable business scenarios: business where the client and the backend servers need to maintain long-lived connections.
upstream backendserver {
    least_conn;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

5. fair (requires compiling in the third-party module ngx_http_upstream_fair_module)

Features: requests are assigned according to the response time of the backend servers; servers with shorter response times are preferred.
Applicable business scenarios: business with specific requirements on response latency.
upstream backendserver {
    fair;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

6. url_hash (in older Nginx versions this requires compiling in the third-party module ngx_http_upstream_hash_module; in recent versions the hash directive is built into the standard upstream module)

Features: requests are distributed according to a hash of the requested URL, so the same URL is always sent to the same backend server.
Applicable business scenarios: most effective when the backend servers are cache servers.
upstream backendserver {
    hash $request_uri;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}


Source: www.linuxidc.com/Linux/2019-11/161254.htm