Nginx (reverse proxy, load balancing, dynamic and static separation, Tomcat cluster session sharing)

Table of contents

1. What is Nginx?

1.1 What is a reverse proxy

1.2 What is load balancing

1.3 What is dynamic and static separation

1.4 Install Nginx

1.5 Startup and shutdown of Nginx

Startup

Shutdown

Reload the configuration dynamically (restart)

1.6 Configuration file introduction (nginx.conf) 

1.7 Nginx proxy tomcat

 Proxy multiple tomcats (load balancing)

1.8 Six load balancing strategies of Nginx

1.9 Session sharing of Tomcat cluster

 2.0 Static and dynamic separation

 2.1 Nginx solves the port problem


1. What is Nginx?

        Nginx is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server.

        Features: reverse proxy, load balancing, dynamic and static separation.

1.1 What is a reverse proxy

        Proxy services can be broadly divided into forward proxies and reverse proxies:

Forward proxy:

        A forward proxy is a proxy server that accesses the target server on behalf of the visitor (the user).

Reverse proxy:

        A reverse proxy accepts requests on behalf of the server, fetches the requested resources from the target server, and then returns them to the user.

1.2 What is load balancing

        Incoming traffic is distributed across multiple servers, reducing the load on each one; the servers work together to handle requests, which increases overall throughput.

 

1.3 What is dynamic and static separation

        Dynamic and static separation means serving static resources directly from the reverse proxy server, which shortens the user's access time.

        Nginx classifies and forwards client requests: requests for static resources are handled by the static resource server (web server), while requests for dynamic resources are handled by Tomcat (web application server). This improves the performance of the service as a whole.

  • web application server, such as:

    • tomcat

    • resin

    • jetty

  • web server, such as:

    • Apache server

    • Nginx

    • IIS

 Distinction: a web server cannot execute pages such as JSP; it can only serve static resources such as JS, CSS, and HTML.

 Concurrency: a web server can handle far more concurrent connections than a web application server.
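
As a concrete illustration, here is a minimal nginx sketch of this split; the static root path and the Tomcat address are assumptions made for the example:

server {
    listen 80;

    # Static resources are served directly by nginx (assumed path)
    location ~* \.(js|css|html|gif|jpg|png)$ {
        root /usr/upload/static;
    }

    # All other requests are forwarded to the web application server (assumed address)
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}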

1.4 Install Nginx

Download the nginx installation package

Install nginx dependencies

yum -y install gcc pcre pcre-devel zlib zlib-devel openssl openssl-devel

Unzip the installation package  

tar -zxvf nginx-1.10.0.tar.gz

Configure nginx installation package

 cd nginx-1.10.0

#Install nginx into the /usr/local/nginx directory

./configure --prefix=/usr/local/nginx

compile and install

 make && make install
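
To confirm the build, you can print the installed binary's version (the path follows the --prefix used above):

/usr/local/nginx/sbin/nginx -v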

1.5 Startup and shutdown of Nginx

Startup

#There is an sbin directory under the nginx install directory, and the nginx executable is inside it
./nginx

Shutdown

 ./nginx -s stop

Reload the configuration dynamically (restart)

 #Reload the configuration file without stopping nginx
 ./nginx -s reload
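
Before reloading, it is worth checking the configuration for syntax errors first; nginx has a built-in test switch:

 #Check the configuration file syntax without affecting the running server
 ./nginx -t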

1.6 Configuration file introduction (nginx.conf) 

#user  nobody;
#Number of worker processes
worker_processes  1;

events {
    #Maximum connections per worker
    worker_connections  1024;
}
#HTTP block
http {
    #Media (MIME) types
    include       mime.types;
    #Default media type: binary stream
    default_type  application/octet-stream;
    #Efficient file transmission
    sendfile        on;
    #Keep-alive timeout
    keepalive_timeout  65;

    #gzip  on;
    #Virtual server configuration
    server {
        #Listening port
        listen       80;
        #Server name (domain)
        server_name  localhost;
        #Headers passed to the upstream
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #Request mapping rule; / matches all request paths
        location / {
             #Forwarding target
             #root html;
             proxy_pass http://manage.powershop.com:8080;
             #Welcome page
             #index  index.html index.htm;
             #Upstream connect timeout
             proxy_connect_timeout 600;
             #Upstream read timeout
             proxy_read_timeout 600;
        }
    }
}

1.7 Nginx proxy tomcat

 Install two Tomcat instances on Linux and change their port numbers so they do not conflict.

 Proxy a single Tomcat (reverse proxy)

 Modify the nginx/conf/nginx.conf file:

 location / {
     #Request forwarding address
     root html;
     proxy_pass http://127.0.0.1:8080;
     #Welcome page
     index index.html index.htm;
 }





 Proxy multiple tomcats (load balancing)

1. Add an upstream block inside the http node

2. Point the proxy_pass under location / at that upstream (a sketch of both steps follows below)
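
A minimal sketch under the assumption of two local Tomcat instances on ports 8080 and 8090 (the upstream name server_list matches the examples in the next section):

upstream server_list {
    server localhost:8080;
    server localhost:8090;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://server_list;
    }
}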

1.8 Six load balancing strategies of Nginx

Load balancing strategy     Description
Round robin                 The default; requests are distributed to the servers in turn
weight                      Distribution by weight
ip_hash                     Distribution by client IP
least_conn                  Distribution by number of active connections
fair                        Distribution by response time
url_hash                    Distribution by requested URL

 Weights

upstream server_list {
        server localhost:8080 weight=5;
        server localhost:8090 weight=1;
}

ip_hash

upstream server_list {
        ip_hash;
        server localhost:8080;
        server localhost:8090;
}
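
Similarly, a minimal least_conn sketch only swaps the directive (least_conn picks the server with the fewest active connections):

upstream server_list {
        least_conn;
        server localhost:8080;
        server localhost:8090;
}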

1.9 Session sharing of Tomcat cluster

 1. Add ip_hash to the upstream in nginx (strictly speaking this is not session sharing; it is more of a workaround, and if one of the servers goes down, the login state held on it is lost).

 2. Tomcat's built-in cluster session replication (when I tried it myself, the Tomcat instances had to be deployed on the same server, so it cannot stay highly available if that server goes down; it is convenient but does not scale well and is only suitable for a small cluster).

 3. Tomcat cluster with Redis for session sharing (highly recommended).

 The first approach (ip_hash, shown above)

 The second approach

 Go to the Tomcat installation directory and edit conf/server.xml.

 Add the following code inside the <Engine name="Catalina" defaultHost="localhost"> element of the configuration file:

 

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/opt/tomcat/tmp/war-temp/"
            deployDir="/opt/tomcat/tmp/war-deploy/"
            watchDir="/opt/tomcat/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

After saving, modify the deployed project by adding the following tag to its web.xml:

<distributable/>

Restart the Tomcat cluster and session sharing takes effect. Note: with this approach the Tomcat instances need to be on the same server, otherwise session sharing cannot be achieved.

 The third approach

1. Redis configuration (192.168.159.131:16300) (v2.8.3)

2. Tomcat configuration

tomcat1 (192.168.159.130:8081)

tomcat2 (192.168.159.130:8082)

3. Nginx is installed on 192.168.159.131.

       First, configure Tomcat to store its sessions in Redis. There are two ways, both configured in server.xml or context.xml; the difference from memcached is that memcached only needs a Manager tag, while Redis requires the following additions (note: the Valve tag must come before the Manager tag):

<Valve className="com.radiadesign.catalina.session.RedisSessionHandlerValve" />
<Manager className="com.radiadesign.catalina.session.RedisSessionManager"
         host="192.168.159.131"
         port="16300" 
         database="0" 
         maxInactiveInterval="60"/>
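
Assuming the com.radiadesign classes come from the tomcat-redis-session-manager library, its jar and the accompanying Jedis and commons-pool dependencies generally also need to be copied into Tomcat's lib directory; the jar names and the path below are placeholders that depend on the versions actually used:

#Placeholder jar names; match them to the versions you downloaded
cp tomcat-redis-session-manager-*.jar jedis-*.jar commons-pool-*.jar /opt/tomcat/lib/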
Next, configure nginx so that session sharing can be tested:

upstream  redis.xxy.com  {
      server   192.168.159.130:8081;
      server   192.168.159.130:8082;
}

log_format  www_xy_com  '$remote_addr - $remote_user [$time_local] $request '
               '"$status" $body_bytes_sent "$http_referer"' 
               '"$http_user_agent" "$http_x_forwarded_for"';

server
{
      listen  80;
      server_name redis.xxy.com; 

      location / {
               proxy_pass        http://redis.xxy.com;
               proxy_set_header   Host             $host;
               proxy_set_header   X-Real-IP        $remote_addr;
               proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
      }

      access_log  /data/base_files/logs/redis.xxy.log  www_xy_com;
}
Finally, deploy your application to both Tomcats and start Redis, the Tomcats, and nginx in that order. When you access nginx, you will see that the session is shared between the two Tomcats.
If the Manager is placed in server.xml, hot deployment with Maven will fail, so placing it in context.xml is recommended.
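
A minimal sketch of placing the same configuration in conf/context.xml instead (the same Valve and Manager as above, inside the existing <Context> element):

<Context>
    <Valve className="com.radiadesign.catalina.session.RedisSessionHandlerValve" />
    <Manager className="com.radiadesign.catalina.session.RedisSessionManager"
             host="192.168.159.131"
             port="16300"
             database="0"
             maxInactiveInterval="60"/>
</Context>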

 2.0 Static and dynamic separation

Create static resources

Create a new images directory on the virtual machine and upload some pictures into it.
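
A shell sketch of this step; the path is an assumption that matches the nginx configuration below:

#Assumed path; it must match the root used in the nginx configuration below
mkdir -p /usr/upload/images
#then copy your pictures (e.g. atm.jpg) into this directory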

 

 Configure nginx's nginx.conf:

location ~* \.(gif|jpg|png|jpeg)$ {
        root /usr/upload/images;
    }

 Test:

http://192.168.202.129/atm.jpg
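
The same check from the command line, using the IP and file name from this example:

curl -I http://192.168.202.129/atm.jpg    #expect an HTTP 200 response with an image content type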

 2.1 Nginx solves the port problem

Example:

        Suppose one of our local projects is accessed at 127.0.0.1:8080.

        With the following settings we can access it via www.powershop.com instead.

        1. Modify the local hosts file (an example entry is sketched below)

        C:/Windows/System32/drivers/etc/hosts
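
A sample entry, assuming nginx runs locally and the domain used in this example:

127.0.0.1    www.powershop.com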

         2. Set the proxy IP and port number in nginx's nginx.conf file

server {
        listen       80;
        server_name  localhost;
        
        location / {

            #If you are using nginx on linux, here the ip is set to the local ip
            proxy_pass http://127.0.0.1:8080;
            proxy_connect_timeout 600;
            proxy_read_timeout 600;
        }
    }

         3. Process flow

        1. The browser prepares to request http://powershop.com, which first requires domain name resolution.

        2. Because we added an entry to the hosts file, local name resolution takes priority and returns the IP address 127.0.0.1.

        3. The request is sent to the resolved IP on the default port 80: http://127.0.0.1:80. The local nginx is listening on port 80, so it picks up this request.

        4. The reverse proxy rule configured in nginx proxies powershop.com to 127.0.0.1:8080, so the request is forwarded there.

        5. The backend management system receives the request on 127.0.0.1:8080, processes it, and returns the response to nginx.

        6. nginx returns the result to the browser.

 
