1. nginx failover
Failover: with nginx load balancing, if a request forwarded to one backend server fails, nginx automatically forwards it to another backend and retries. Whether a request is retried depends on its method: GET requests are retried automatically, while non-idempotent requests such as POST and PUT are not retried once they have been sent to a backend and failed during processing. (If the backend could not be connected to at all, nginx will still forward such requests to another server.) For this reason, do not use GET requests for insert or update operations, or a retried request may execute the same write twice.
http {
    # upstream cluster for load balancing: two tomcat instances,
    # weight = relative share of traffic each server receives
    upstream com.zhujn.hot {
        server 192.168.1.100:8081 weight=3;
        server 192.168.1.101:8081 weight=2;
    }

    server {  # virtual server, default port 80
        proxy_read_timeout 10s;
        proxy_next_upstream error timeout;  # failover conditions
        proxy_next_upstream_tries 3;        # retry limit, prevents endless retries in a large cluster
        proxy_next_upstream_timeout 60s;    # maximum total time spent on retries
        location / {
            # reverse proxy, points at the upstream block by name
            proxy_pass http://com.zhujn.hot;
        }
    }
}
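As noted above, nginx will not retry non-idempotent requests (POST, PUT, and similar) once they have been sent to a backend. Since nginx 1.9.13, the non_idempotent flag on proxy_next_upstream can override this behavior. A minimal sketch (use with caution, since a POST that timed out may already have been processed by the first server, and retrying it can apply the same write twice):

location / {
    # non_idempotent allows POST/PUT to be retried on error or timeout;
    # only enable this if the backend handlers are safe to replay
    proxy_next_upstream error timeout non_idempotent;
    proxy_pass http://com.zhujn.hot;
}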
2. Avalanche
In a high-concurrency scenario with load balancing across several servers, nginx's failover behavior can backfire. Suppose one server fails to process 3,000 of its 10,000 requests: nginx redistributes those 3,000 requests to the remaining servers, pushing their load past capacity and bringing them down as well; their requests are then forwarded to whatever servers are left, and this snowball effect crashes the entire cluster.
Solution: add a circuit-breaker mechanism, configured as follows.
upstream com.zhujn.hot {
    # max_fails / fail_timeout: after 10 failed attempts within 60s,
    # the server is marked unavailable for the next 60s
    server 192.168.1.100:8081 max_fails=10 fail_timeout=60s;
    server 192.168.1.101:8081 max_fails=10 fail_timeout=60s;
}
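Putting the two sections together, a combined configuration might look like the sketch below. The IPs, weights, and upstream name are the ones used above; the 10-failure / 60-second thresholds are example values that should be tuned to your traffic, since a window that is too tight will trip the breaker on transient blips while one that is too loose lets the avalanche build.

upstream com.zhujn.hot {
    # circuit breaker: a server failing 10 times within 60s
    # is taken out of rotation for 60s
    server 192.168.1.100:8081 weight=3 max_fails=10 fail_timeout=60s;
    server 192.168.1.101:8081 weight=2 max_fails=10 fail_timeout=60s;
}

server {
    listen 80;
    location / {
        proxy_next_upstream error timeout;  # failover conditions
        proxy_next_upstream_tries 3;        # cap retries so one failing request
                                            # cannot sweep across the whole cluster
        proxy_next_upstream_timeout 60s;    # give up retrying after 60s total
        proxy_pass http://com.zhujn.hot;
    }
}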