nginx+tomcat for load balancing

When we deployed our project, we added load balancing to give it high availability and the ability to handle greater concurrency.

The setup uses nginx as a reverse proxy in front of Tomcat. The project is deployed to multiple Tomcat instances separately, so that even if one Tomcat goes down, the others can continue to provide service; this is what makes it highly available.

Furthermore, a single Tomcat can handle only about 1,500 concurrent requests, so multiple Tomcats are needed to handle greater concurrency.

By default, nginx uses the round-robin (polling) strategy. For example, suppose nginx sits in front of two Tomcats, t1 and t2, and four requests arrive: the first goes to t1, the second to t2, the third to t1, and the fourth to t2. That is round-robin.
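The round-robin setup described above could be configured roughly like this (the upstream name, addresses, and ports are illustrative):

```nginx
# Round-robin is nginx's default balancing strategy,
# so no extra directive is needed inside the upstream block.
upstream tomcat_cluster {
    server 127.0.0.1:8080;   # t1
    server 127.0.0.1:8081;   # t2
}

server {
    listen 80;
    location / {
        # Forward requests to the Tomcat group defined above.
        proxy_pass http://tomcat_cluster;
    }
}
```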

Besides round-robin, the most commonly used strategies are weighted round-robin and ip_hash.

Weighted round-robin can be understood like this: the servers in a company have different hardware and different performance, so you can give the higher-spec servers a larger weight and the lower-spec servers a smaller weight, using the `weight` keyword. For example, if t1 has weight 3 and t2 has weight 1, then out of every 4 incoming requests, t1 handles 3 and t2 handles 1.
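The 3:1 split above corresponds to the `weight` parameter on each `server` line (addresses illustrative; `weight` defaults to 1 when omitted):

```nginx
# Weighted round-robin: t1 receives 3 of every 4 requests.
upstream tomcat_cluster {
    server 127.0.0.1:8080 weight=3;   # t1, higher-spec machine
    server 127.0.0.1:8081 weight=1;   # t2, lower-spec machine
}
```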

ip_hash hashes the IP address of the client sending the request and binds it to a fixed Tomcat. This guarantees that all requests from a given client are processed by the same Tomcat, which solves the session drift problem.

Because nginx uses round-robin by default, session drift occurs. It works like this: when the user logs in, the login request is sent to t1; after the username, password, and other information pass verification, a session is created on t1 and the user's information is stored in it. On a successful login the page redirects, which generates a new request, and that request may be sent to t2. The session on t2 contains no user information, so the interceptor finds no logged-in user and redirects to the login page again. The visible symptom is that the username and password are clearly correct, yet after a successful login the user is bounced straight back to the login page.

To solve session drift, the default round-robin strategy can be changed to ip_hash, so that requests from the same client are always sent to one fixed Tomcat and session drift no longer occurs.
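Switching to ip_hash is a one-line change in the upstream block (addresses illustrative):

```nginx
# ip_hash: nginx hashes the client IP, so all requests from the
# same client IP are always routed to the same backend server.
upstream tomcat_cluster {
    ip_hash;
    server 127.0.0.1:8080;   # t1
    server 127.0.0.1:8081;   # t2
}
```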

But a problem remains. If ip_hash binds the client to t1, all subsequent requests from that client go to t1. If t1 goes down, new requests are sent to t2, but t2 has none of the session information for the user who logged in on t1, so the user is again redirected to the login page, which is a poor user experience.

In our project, we ultimately solved the session drift problem with token-based login.
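The idea behind token-based login can be sketched as follows. This is a minimal illustration, not the project's actual code: the class and method names are hypothetical, and the in-memory map stands in for a shared store such as Redis, which is what lets any Tomcat instance validate a token issued by another instance.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of token-based login (names are illustrative).
// In production, the token-to-user mapping would live in a shared
// store (e.g. Redis) visible to every Tomcat, so no single server
// "owns" the login state the way an HttpSession does.
public class TokenStore {
    private final Map<String, String> tokens = new ConcurrentHashMap<>();

    // Issue a random token after the username/password check succeeds;
    // the client sends it back on every later request (e.g. in a header).
    public String issue(String username) {
        String token = UUID.randomUUID().toString();
        tokens.put(token, username);
        return token;
    }

    // Any server instance can resolve the token back to the user,
    // so it no longer matters which Tomcat handles the request.
    public String resolve(String token) {
        return tokens.get(token);
    }

    public static void main(String[] args) {
        TokenStore store = new TokenStore();
        String token = store.issue("alice");
        System.out.println(store.resolve(token)); // prints "alice"
    }
}
```

Because the login state travels with the token instead of living in one server's session, round-robin can stay enabled and a crashed Tomcat no longer logs users out.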

Origin blog.csdn.net/jq1223/article/details/114103957