Nginx Load Balancing Demo

Requirements:

tomcat1:192.168.2.149:8081

tomcat2:192.168.2.149:8082

nginx:192.168.2.111:80

The two Tomcat servers listen on ports 8081 and 8082. Each webapps directory contains a test project with an index.html file, whose contents are <h1>8081</h1> and <h1>8082</h1> respectively.

With Nginx load balancing, when a user visits "http://192.168.2.111:80/test", the requests are distributed evenly between the two Tomcat servers on 8081 and 8082.

Steps:

1. Access both Tomcat instances directly to make sure they start successfully and respond normally.
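
A quick way to check this from the shell (a minimal sketch; it assumes the test project is deployed under /test/ on both instances, as described above):

[root@centos ~]# curl http://192.168.2.149:8081/test/
[root@centos ~]# curl http://192.168.2.149:8082/test/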

2. Modify the nginx.conf file, then reload Nginx so the new configuration takes effect:

[root@centos ~]# vi /usr/local/nginx/conf/nginx.conf
[root@centos ~]# /usr/local/nginx/sbin/nginx -s reload

Requests arriving on Nginx port 80 will be forwarded to the upstream group myserver (the name is custom).
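
A minimal sketch of the relevant nginx.conf pieces, assuming the upstream group is named myserver as above (the original post does not show the full file, so the details below are illustrative):

http {
    # upstream group containing the two Tomcat instances; the default policy is round-robin
    upstream myserver {
        server 192.168.2.149:8081;
        server 192.168.2.149:8082;
    }

    server {
        listen       80;
        server_name  192.168.2.111;

        # forward /test/ requests to the upstream group
        location /test/ {
            proxy_pass http://myserver;
        }
    }
}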

3. Access http://192.168.2.111/test/ in a browser; the requests are distributed evenly across the two Tomcat instances.


The above implements simple load balancing using the default round-robin policy, i.e. requests alternate 8081 -> 8082 -> 8081 -> 8082 ...
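
This can also be verified from the command line; a minimal sketch (the alternating output assumes the round-robin configuration sketched above):

[root@centos ~]# for i in 1 2 3 4; do curl -s http://192.168.2.111/test/; done

The output should alternate between <h1>8081</h1> and <h1>8082</h1>.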

Nginx provides several load-balancing (allocation) strategies; configuration sketches for the non-default ones follow this list.

1. Round-robin (default)

Requests are assigned to the different servers one by one in chronological order; if a server goes down, it is automatically removed.

2. weight

weight specifies the server's weight and defaults to 1; the higher the weight, the more client requests the server is assigned.

3. ip_hash

Each request is assigned according to the hash of the visitor's IP address, so every visitor always hits the same server; this can solve session-sharing problems.

4. fair (third party; a third-party module must be installed)

Requests are assigned according to the servers' response times; servers with shorter response times are served first.
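
Minimal sketches of how each strategy is expressed in the upstream block, reusing the myserver group from above (illustrative only):

# weight: 8082 receives roughly twice as many requests as 8081
upstream myserver {
    server 192.168.2.149:8081 weight=1;
    server 192.168.2.149:8082 weight=2;
}

# ip_hash: requests from the same client IP always go to the same server
upstream myserver {
    ip_hash;
    server 192.168.2.149:8081;
    server 192.168.2.149:8082;
}

# fair: requires the third-party nginx-upstream-fair module
upstream myserver {
    fair;
    server 192.168.2.149:8081;
    server 192.168.2.149:8082;
}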

Origin www.cnblogs.com/ddstudy/p/12560557.html