Nginx (6): An nginx load balancing configuration example

1. Achieve the effect

Enter http://192.168.16.130/edu/a.html in the browser's address bar. With load balancing in effect, requests are distributed evenly between ports 8080 and 8081.

2. Preparation

(1) Prepare two Tomcat servers, one listening on port 8080 and the other on 8081;
(2) In the webapps directory of each Tomcat, create a folder named edu, and inside it create a page a.html for testing.

3. Load balancing configuration

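The screenshots from the original post are not reproduced here. A minimal sketch of the configuration they likely showed, assuming nginx runs on 192.168.16.130 and both Tomcats run on the same host (the upstream group name myserver is illustrative):

```nginx
# inside the http { } block of nginx.conf

# define the pool of back-end servers (name is arbitrary)
upstream myserver {
    server 192.168.16.130:8080;
    server 192.168.16.130:8081;
}

server {
    listen       80;
    server_name  192.168.16.130;

    location / {
        # forward incoming requests to the upstream group defined above
        proxy_pass http://myserver;
    }
}
```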
Save the configuration, restart nginx, and then access the address in a browser to test.

4. Nginx allocation server strategy

Load balancing distributes requests across multiple service units, both to keep the service available and to keep responses fast enough for a good user experience. The rapid growth of traffic and data volume has given rise to many load balancing products. Dedicated load balancing hardware offers rich functionality but is expensive, which has made software load balancers very popular, and nginx is one of them. Under Linux, nginx, LVS, HAProxy, and others can provide load balancing; nginx supports several distribution strategies:

4.1 Round robin (default)

Each request is assigned to the back-end servers one by one in order. If a back-end server goes down, it is removed from rotation automatically.
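With no extra directives, an upstream block uses round robin; a minimal sketch (addresses follow the examples in the later sections):

```nginx
upstream server_pool {
    # no policy directive: requests alternate between the servers in order
    server 192.168.16.130:80;
    server 192.168.16.131:80;
}
```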

4.2 weight

weight sets the server's weight; the default is 1. The higher the weight, the larger the share of requests the server receives. The weight is proportional to the access ratio, which is useful when back-end server performance is uneven. For example:

upstream server_pool {
    server 192.168.16.130 weight=10;  # receives twice as many requests
    server 192.168.16.131 weight=5;
}

4.3 ip_hash

Each request is assigned according to a hash of the client's IP address, so each visitor always reaches the same back-end server; this can solve session-persistence problems. For example:

upstream server_pool {
    ip_hash;
    server 192.168.16.130:80;
    server 192.168.16.131:80;
}

4.4 fair (third party)

Requests are assigned according to the back-end servers' response times, and servers with shorter response times are served first. Note that this strategy requires the third-party nginx-upstream-fair module to be compiled into nginx. For example:

upstream server_pool {
    server 192.168.16.130:80;
    server 192.168.16.131:80;
    fair;
}

Origin: blog.csdn.net/houwanle/article/details/112099276