Load Balancing
Instead of concentrating every request on a single server, we add more servers and distribute incoming requests across them. Spreading the load over multiple servers in this way is what we call load balancing.
Goal:
Enter http://localhost/vod/a.html in the browser address bar; with load balancing in place, requests are forwarded evenly to ports 8080 and 8082.
Preparation:
Two Tomcat instances:
- Tomcat 9 on port 8082
- Tomcat 8 on port 8080
In each instance's webapps directory, create a vod folder containing a file a.html. The content can be anything, as long as the two pages are distinguishable from each other.
Configure Nginx (nginx.conf):
Add an upstream block inside the http block, and add a forwarding rule (proxy_pass pointing at that upstream) inside the server block.
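A minimal sketch of what that configuration could look like; the upstream name `tomcat_servers` and the loopback addresses are assumptions, since the original post showed the config as a screenshot:

```nginx
http {
    # Assumed upstream name; both Tomcat instances run on this machine.
    upstream tomcat_servers {
        server 127.0.0.1:8080;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;
        server_name localhost;

        # Forward requests under /vod/ to the upstream group.
        location /vod/ {
            proxy_pass http://tomcat_servers;
        }
    }
}
```

With no strategy specified, Nginx uses round-robin by default, which produces the alternating behavior described below.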
Test:
Enter http://localhost/vod/a.html in the browser.
The first request is served by one Tomcat; refresh, and the second request is served by the other. Keep refreshing and the two servers are accessed in turn.
Load balancing strategies
1. Round-robin (default):
Each request is assigned to a different backend server in turn, in the order the requests arrive. If a backend server goes down, it is removed from rotation automatically.
2. weight:
weight specifies a server's weight; the default is 1, and servers with higher weights are assigned more client requests.
Requests are distributed in proportion to the weights, which is useful when the backend servers have uneven performance.
For example, with weight 2 on port 8082 and weight 1 on port 8080, visiting the page 6 times gives the following result: 8082 is hit 4 times, 8080 is hit 2 times.
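A sketch of the weighted upstream block for that 2:1 example (same assumed upstream name and addresses as above):

```nginx
upstream tomcat_servers {
    # Roughly two out of every three requests go to 8082.
    server 127.0.0.1:8082 weight=2;
    server 127.0.0.1:8080 weight=1;
}
```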
3. ip_hash:
Each request is assigned according to a hash of the client's IP address, so a given visitor always reaches the same backend server. This solves the session-sharing problem.
For example:
Every visit to http://localhost/vod/a.html is served by port 8080 (for a client whose IP happens to hash to that server).
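The ip_hash directive is enabled inside the upstream block; a minimal sketch under the same assumptions:

```nginx
upstream tomcat_servers {
    # Pin each client IP to one backend so its session stays on one server.
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:8082;
}
```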
4. fair (third-party):
Requests are assigned according to the backend servers' response times; servers with shorter response times are given priority. (This requires installing the third-party fair module, as it is not built into Nginx.)
For example:
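Assuming the nginx-upstream-fair module has been compiled in, the upstream block could look like this sketch:

```nginx
upstream tomcat_servers {
    # Provided by the third-party nginx-upstream-fair module;
    # routes each request to the backend responding fastest.
    fair;
    server 127.0.0.1:8080;
    server 127.0.0.1:8082;
}
```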