1. Introduction
Server: 1
Service: 1
Ports: 3 (one external port (80), two internal service ports (3002, 3022))
nginx: 1 instance
Idea: configure load balancing so that nginx starts normally regardless of whether our back-end service is running. nginx listens on the external port and forwards requests to the two internal ports, 3002 and 3022. If the service on 3002 goes down, nginx forwards requests to 3022 instead. When we update the back-end service, we first start the new version on the other port, and only after it comes up successfully do we stop the old one. The user notices nothing: one project runs in the background, and each update alternates the two ports, stopping one and starting the other.
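One way to sketch this failover idea in nginx is the `backup` parameter (the upstream name `app_backend` here is illustrative, not from the original configuration):

```nginx
# Sketch: 3022 acts as a hot standby for 3002.
upstream app_backend {
    server 127.0.0.1:3002;          # primary service instance
    server 127.0.0.1:3022 backup;   # receives traffic only when 3002 is unavailable
}
```

With `backup`, all traffic goes to 3002 while it is healthy; the article's own configuration below instead keeps both ports active with weights, which also works for this update flow.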
2. Configuring load balancing in nginx
nginx currently supports six load-balancing strategies:
round-robin (the default)
weight (weighted round-robin)
ip_hash (distributes by client IP)
least_conn (fewest active connections)
fair (third-party module; distributes by response time)
url_hash (third-party module; distributes by URL hash)
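As illustrative sketches (the upstream names are made up for this example), the built-in `ip_hash` and `least_conn` strategies are enabled by a single directive inside the upstream block:

```nginx
upstream by_ip_hash {
    ip_hash;        # requests from the same client IP always hit the same server
    server 127.0.0.1:3002;
    server 127.0.0.1:3022;
}

upstream by_least_conn {
    least_conn;     # each request goes to the server with the fewest active connections
    server 127.0.0.1:3002;
    server 127.0.0.1:3022;
}
```

`fair` and `url_hash` require compiling in third-party modules, so they are not shown here.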
server {
    listen 80;
    server_name 111.11.111.21;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://real_server;
    }
}
# Weighted round-robin
upstream real_server {
    server 127.0.0.1:3002 weight=1; # servers are polled in proportion to their weights
    server 127.0.0.1:3022 weight=2;
}

# Least connections: send each request to the back end with the fewest active connections
#upstream real_server {
#    least_conn;
#    server [IP address]:[port] weight=2;
#    server [IP address]:[port];
#}
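For the update flow described in the introduction, it can also help to tune when nginx marks a server as failed. The `max_fails` and `fail_timeout` parameters below are standard nginx options; the specific values are only a suggestion, not from the original configuration:

```nginx
upstream real_server {
    # After 2 failed attempts within 10s, the server is considered
    # unavailable for the next 10s and traffic goes to the other port.
    server 127.0.0.1:3002 weight=1 max_fails=2 fail_timeout=10s;
    server 127.0.0.1:3022 weight=2 max_fails=2 fail_timeout=10s;
}
```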
3. Start nginx
See the next article: deploying nginx with docker.
4. Test log
Start port 3002:
Start port 3022:
Start port 3002, 3022:
5. Remarks
1. I have not studied nginx load balancing in depth; evaluate it carefully before using it in production.
2. The upstream block goes outside the server block.
3. Since this has not been configured for actual production use, please leave a comment if you spot problems when referring to it, and I will make corrections.
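To restate point 2 above as a sketch: upstream is a sibling of server inside the http context, not nested inside server.

```nginx
http {
    upstream real_server {              # defined at http level...
        server 127.0.0.1:3002;
        server 127.0.0.1:3022;
    }

    server {                            # ...and referenced from server/location
        listen 80;
        location / {
            proxy_pass http://real_server;
        }
    }
}
```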
QQ group: 807770565 — everyone is welcome to join and chat.