Article Directory
1. Reasons for the combination of Nginx+Tomcat load balancing
1.1 The principle of Nginx to achieve load balancing
Nginx implements load balancing by acting as a reverse proxy: the Nginx server sits at the front end, the Tomcat servers sit at the back end, and web requests pass through the Nginx service. Not every request is forwarded, however: requests for static pages are handled by the Nginx server itself, while requests for dynamic pages are forwarded to the back-end Tomcat servers for processing. Tomcat is a lightweight application server whose capacity for concurrent visits may be insufficient, which is why multiple Tomcat servers are needed. Because Tomcat's concurrency-handling capability is weak (roughly one-sixth of Nginx's), Nginx should distribute requests across the pool sensibly when it reverse-proxies to the back end.
1.2 Main configuration items for Nginx to achieve load balancing
upstream pool_name { }
Function: defines the pool of back-end servers that will provide the response data
proxy_pass http://pool_name;
Function: forwards matching access requests to the back-end server pool for processing
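Taken together, the two directives look like this in practice. This is a minimal sketch only: the pool name tomcat_pool is illustrative, and the addresses are the back-end servers used later in this walkthrough.

```nginx
# Sketch: pool name is a placeholder; addresses are this article's backends
upstream tomcat_pool {
    server 192.168.81.129:8080;
    server 192.168.81.130:8080;
}
server {
    listen 80;
    location ~ \.jsp$ {
        proxy_pass http://tomcat_pool;   # hand dynamic requests to the pool
    }
}
```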
1.3 Advantages of the combination of Nginx+Tomcat load balancing
Nginx's static-processing advantage: Nginx handles static pages far more efficiently than Tomcat. Where Tomcat can serve about 1,000 requests, Nginx can serve about 6,000; where Tomcat's throughput for static resources is about 0.6 MB per second, Nginx's is about 3.6 MB per second, roughly six times that of Tomcat.
The principle of dynamic and static separation: the server receives client requests for both static and dynamic resources. Static resources are served by Nginx itself, while dynamic resources are forwarded by Nginx to the back end.
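Expressed as nginx configuration, this split comes down to two location blocks; a sketch with an illustrative pool name:

```nginx
location / {
    root /usr/share/nginx/html;      # static files: served by nginx itself
    index index.html index.htm;
}
location ~ \.jsp$ {
    proxy_pass http://tomcat_pool;   # dynamic requests: forwarded to Tomcat
}
```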
1.4 Experimental Design of Nginx+Tomcat Load Balancing
Experimental requirements: a company has one Nginx server and two Tomcat servers and needs to deploy user-facing access services on them. Static resources are processed by Nginx itself, JSP dynamic resources are handed to the Tomcat servers for processing, and the load is balanced across the Tomcat instances.
Experimental deployment diagram:
2. Dynamic and static separation deployment
Nginx server: 192.168.81.131:80
Tomcat server 1: 192.168.81.129:8080
Tomcat server 2: 192.168.81.130:8080 192.168.81.130:8081
2.1 Deploy the Tomcat back-end servers
1. Deploy the two Tomcat application servers (the JDK and Tomcat setup below is performed on both back-end hosts)
systemctl stop firewalld
setenforce 0
tar zxvf jdk-8u91-linux-x64.tar.gz -C /usr/local/
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_91
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH
source /etc/profile
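After source /etc/profile, it is worth confirming that the variables actually took effect before unpacking Tomcat. A quick check, assuming the JDK path used above:

```shell
# Re-create the environment set in /etc/profile and verify it took effect
export JAVA_HOME=/usr/local/jdk1.8.0_91
export JRE_HOME=${JAVA_HOME}/jre
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH

# PATH should now contain the JDK's bin directory
case ":$PATH:" in
  *":${JAVA_HOME}/bin:"*) echo "JAVA_HOME is on PATH" ;;
  *) echo "JAVA_HOME missing from PATH" ;;
esac
```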
cd /opt
tar zxvf apache-tomcat-8.5.16.tar.gz
mv /opt/apache-tomcat-8.5.16/ /usr/local/tomcat
/usr/local/tomcat/bin/shutdown.sh
/usr/local/tomcat/bin/startup.sh
netstat -ntap | grep 8080
2. Dynamic and static separation configuration
(1) Tomcat1 server configuration
mkdir /usr/local/tomcat/webapps/test
vim /usr/local/tomcat/webapps/test/index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
<title>JSP test1 page</title> #marks this as the test1 page
</head>
<body>
<% out.println("Dynamic page 1,http://www.test1.com");%>
</body>
</html>
vim /usr/local/tomcat/conf/server.xml
#Since the Host name is also localhost in the default configuration, delete the existing HOST entry first
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
<Context docBase="/usr/local/tomcat/webapps/test" path="" reloadable="true">
</Context>
</Host>
/usr/local/tomcat/bin/shutdown.sh
/usr/local/tomcat/bin/startup.sh
(2) Tomcat2 server configuration (this host runs two Tomcat instances: /usr/local/tomcat/tomcat1 on port 8080 and /usr/local/tomcat/tomcat2 on port 8081)
mkdir /usr/local/tomcat/tomcat1/webapps/test /usr/local/tomcat/tomcat2/webapps/test
vim /usr/local/tomcat/tomcat1/webapps/test/index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
<title>JSP test2 page</title> #marks this as the test2 page
</head>
<body>
<% out.println("Dynamic page 2,http://www.test2.com");%>
</body>
</html>
vim /usr/local/tomcat/tomcat1/conf/server.xml
#Delete the existing HOST entry first
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
<Context docBase="/usr/local/tomcat/tomcat1/webapps/test" path="" reloadable="true" />
</Host>
/usr/local/tomcat/tomcat1/bin/shutdown.sh
/usr/local/tomcat/tomcat1/bin/startup.sh
vim /usr/local/tomcat/tomcat2/webapps/test/index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
<title>JSP test3 page</title> #marks this as the test3 page
</head>
<body>
<% out.println("Dynamic page 3,http://www.test3.com");%>
</body>
</html>
vim /usr/local/tomcat/tomcat2/conf/server.xml
#Delete the existing HOST entry first
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
<Context docBase="/usr/local/tomcat/tomcat2/webapps/test" path="" reloadable="true" />
</Host>
/usr/local/tomcat/tomcat2/bin/shutdown.sh
/usr/local/tomcat/tomcat2/bin/startup.sh
2.2 Deploy nginx server
1. Install nginx
vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
yum install nginx -y
vim /etc/nginx/conf.d/default.conf
upstream backend_server {
server 192.168.81.129:8080 weight=1;
server 192.168.81.130:8080 weight=1;
server 192.168.81.130:8081 weight=1;
}
server {
listen 80;
server_name localhost;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ~* .*\.jsp$ {
proxy_pass http://backend_server;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
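With all three server entries at weight=1, nginx distributes requests round-robin, handing each backend one request in turn. The rotation can be illustrated with a small shell loop (a simulation only; it does not talk to nginx):

```shell
# Simulate nginx's equal-weight round-robin over the pool defined above
backends="192.168.81.129:8080 192.168.81.130:8080 192.168.81.130:8081"
set -- $backends
n=$#
i=0
while [ $i -lt 6 ]; do
  idx=$(( i % n + 1 ))          # 1-based index of the next backend
  eval "b=\${$idx}"
  echo "request $i -> $b"
  i=$((i + 1))
done
```

Requests 0, 3 land on .129:8080; requests 1, 4 on .130:8080; requests 2, 5 on .130:8081.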
2.3 Install the Nginx dynamic/static separation servers
192.168.81.131:80
192.168.81.132:80
Install nginx with yum
cd /etc/yum.repos.d/
vim nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
yum install nginx -y
Modify the default.conf file
vim /etc/nginx/conf.d/default.conf
upstream backend_server {
server 192.168.81.129:8080 weight=1;
server 192.168.81.130:8080 weight=1;
server 192.168.81.130:8081 weight=1;
}
server {
listen 80;
server_name localhost;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ~* .*\.jsp$ {
proxy_pass http://backend_server;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Copy this file to the other Nginx host, for example: scp /etc/nginx/conf.d/default.conf root@192.168.81.132:/etc/nginx/conf.d/
Install the front-end load balancer (a source-built Nginx acting as a four-layer proxy)
192.168.81.133
vim /usr/local/nginx/conf/nginx.conf
events {
use epoll;
worker_connections 1024;
}
stream {
upstream nginx_server {
server 192.168.81.131:80 weight=1;
server 192.168.81.132:80 weight=1;
}
server {
listen 80;
proxy_pass nginx_server;
}
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 8080;
server_name www.kgc.com;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
}
}
Test the connection: browse to http://192.168.81.133/index.jsp and refresh several times. The response should cycle through dynamic pages 1, 2 and 3, which confirms that the four-layer proxy is balancing across the two Nginx servers and that each Nginx server is balancing across the three Tomcat instances. Requests for static pages are still answered by the Nginx servers themselves.