If possible, please read the previous article "nginx Scenario Business Summary (Beginning)" first.
(13) Load balancing
- round-robin (default)
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
Round-robin distributes requests on a fair-scheduling principle, similar to RabbitMQ's round-robin dispatching: requests are sent to srv1, srv2, srv3 in turn.
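The rotation above can be sketched in Python (server names taken from the config; this illustrates the scheduling order only, not nginx's internal implementation):

```python
from itertools import cycle

# Round-robin: hand out servers in a fixed, repeating order.
servers = ["srv1.example.com", "srv2.example.com", "srv3.example.com"]
rotation = cycle(servers)

def next_server():
    """Return the server that should receive the next request."""
    return next(rotation)
```

Six consecutive calls yield srv1, srv2, srv3, srv1, srv2, srv3.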
- least connected
upstream myapp1 {
    least_conn;  # prefer the server with the fewest active connections
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
New requests go to the server with the fewest active connections, so less busy servers take on more of the load.
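The selection rule can be sketched as follows (the connection counts here are hypothetical; nginx tracks these internally per upstream server):

```python
# least_conn: pick the upstream with the fewest active connections.
active_connections = {
    "srv1.example.com": 5,  # hypothetical current load
    "srv2.example.com": 2,
    "srv3.example.com": 7,
}

def pick_least_connected():
    """Return the server currently handling the fewest connections."""
    return min(active_connections, key=active_connections.get)
```

With the counts above, the next request would go to srv2.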
- ip_hash (session persistence)
upstream myapp1 {
    ip_hash;  # route each client IP to a fixed server
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
This partially solves the session-sharing problem: requests from the same client IP are always routed to the same server, which keeps the session consistent.
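A simplified sketch of the idea: nginx's ip_hash hashes the first three octets of an IPv4 address, so this sketch does too, but the md5-based hash function used here is purely illustrative.

```python
import hashlib

servers = ["srv1.example.com", "srv2.example.com", "srv3.example.com"]

def pick_by_ip(client_ip):
    """Map a client IP to a fixed server. Like nginx's ip_hash, only
    the first three octets of the IPv4 address are hashed, so a whole
    /24 lands on the same server (the md5 hash itself is illustrative)."""
    key = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

The same client, and any client in the same /24, always maps to the same server, so its session state stays on one machine.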
- weighted
Use weights to make nginx send a larger share of requests to the servers with more capacity.
upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}
With this configuration, out of every five requests, three are dispatched to srv1, one to srv2, and one to srv3.
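nginx spreads the weighted picks out with a "smooth" weighted round-robin, so srv1 does not receive its three requests in a row. A Python sketch of that algorithm:

```python
# Smooth weighted round-robin, the interleaving scheme nginx uses:
# each round, every server's score grows by its weight; the
# highest-scoring server is picked and pays back the total weight.
weights = {"srv1.example.com": 3, "srv2.example.com": 1, "srv3.example.com": 1}
current = {name: 0 for name in weights}

def pick_weighted():
    """Return the next server under smooth weighted round-robin."""
    total = sum(weights.values())
    for name in current:
        current[name] += weights[name]
    best = max(current, key=current.get)
    current[best] -= total
    return best
```

Over five requests this yields srv1, srv2, srv1, srv3, srv1: three for srv1 and one each for the others, but interleaved rather than bunched together.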
Other parameters
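Upstream server entries also accept health and failover parameters. A sketch of the commonly used ones (the values here are illustrative):

```nginx
upstream myapp1 {
    # mark srv1 as failed for 30s after 2 failed attempts within 30s
    server srv1.example.com weight=3 max_fails=2 fail_timeout=30s;
    server srv2.example.com;
    # backup: only receives traffic when the primary servers are unavailable
    server srv3.example.com backup;
}
```

`down` can likewise be appended to a server line to take it out of rotation permanently, e.g. during maintenance.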