Building a Tomcat cluster with Nginx

What is a Tomcat cluster?
Nginx distributes incoming requests across several Tomcat instances, reducing the load on each instance and improving the server's response time.
Purpose
To build a high-performance, load-balanced Tomcat cluster.
Tools
Nginx and Tomcat
Implementation steps
1. Download Nginx and Tomcat (unpack two copies of Tomcat)
2. Change each Tomcat's startup port numbers
3. Edit the default index.jsp in each Tomcat so the two instances can be told apart
4. Configure nginx.conf
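For step 2, a minimal sketch of the port changes in the second Tomcat's conf/server.xml (the first copy keeps the defaults; the exact port values here are assumptions, chosen to match the upstream block below):

```xml
<!-- second Tomcat: conf/server.xml (illustrative port values) -->
<Server port="8006" shutdown="SHUTDOWN">   <!-- default is 8005 -->
  <!-- ... -->
  <Connector port="8181" protocol="HTTP/1.1"
             connectionTimeout="20000"
             redirectPort="8443" />      <!-- HTTP port, default is 8080 -->
  <!-- ... -->
</Server>
```

Every port the two instances open (shutdown, HTTP, and AJP if enabled) must differ, or the second Tomcat will fail to start.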


worker_processes  1;   # number of worker processes, usually equal to the number of CPU cores

events {
    worker_connections  1024;   # maximum connections per worker (total = connections * processes)
}


http {
    include       mime.types;   # map of file extensions to MIME types
    default_type  application/octet-stream;   # default MIME type

    sendfile        on;   # efficient file transfer; leave on for typical apps, consider off for disk-I/O-heavy workloads such as large downloads

    keepalive_timeout  65;   # keep-alive timeout, in seconds

    gzip  on;   # enable gzip compression

    # Tomcat cluster
    upstream  myapp {   # name of the Tomcat upstream group
        server    localhost:8080;   # tomcat1
        server    localhost:8181;   # tomcat2
    }

    # Nginx virtual server
    server {
        listen       9090;   # listening port (default is 80)
        server_name  localhost;   # server name for this Nginx instance

        location / {
            proxy_pass http://myapp;
            proxy_redirect default;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

The core of this configuration is the upstream block together with the proxy_pass directive shown above.
5. Start Nginx
6. Visit http://localhost:9090 and refresh several times; the responses alternate between the two Tomcat instances, which confirms that the load-balanced cluster is working.
Nginx load-balancing strategies:
1. Round robin (default)
Requests are distributed to the back-end servers one by one in order. If a back-end server goes down, it is automatically removed from the rotation.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}

2. weight
Specifies the round-robin probability; a server's weight is proportional to the share of requests it receives. Use this when the back-end servers have uneven performance.

upstream backserver {
    server 192.168.0.14 weight=3;   # receives 3 out of every 10 requests
    server 192.168.0.15 weight=7;   # receives 7 out of every 10 requests
}
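The effect of weights can be sketched in a few lines of Python. This is a simulation of the idea only (Nginx actually uses a smooth weighted round-robin algorithm internally; the addresses and weights are illustrative):

```python
from itertools import cycle

# Illustrative backends: a server with weight w appears w times in the rotation.
servers = {"192.168.0.14": 3, "192.168.0.15": 7}

rotation = [s for s, w in servers.items() for _ in range(w)]
picker = cycle(rotation)

# Over 10 requests, the 3:7 weights produce a 3:7 split.
hits = {}
for _ in range(10):
    s = next(picker)
    hits[s] = hits.get(s, 0) + 1
print(hits)  # {'192.168.0.14': 3, '192.168.0.15': 7}
```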

3. IP binding (ip_hash)
Requests are distributed according to a hash of the client IP, so each visitor consistently reaches the same back-end server. This provides session stickiness without shared session storage.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
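The idea behind ip_hash can be sketched as follows (illustrative only; Nginx actually hashes the first three octets of an IPv4 address, not the full string as done here):

```python
import hashlib

backends = ["192.168.0.14:88", "192.168.0.15:80"]

def pick_backend(client_ip: str) -> str:
    # A stable hash of the client address always selects the same backend.
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]

# The same client always lands on the same server:
assert pick_backend("10.0.0.7") == pick_backend("10.0.0.7")
```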

4. fair (third party)
Distributes requests according to the back-end servers' response times; servers that respond faster receive priority. This requires a third-party module.

upstream backserver {
    server server1;
    server server2;
    fair;
}
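The "fair" idea reduces to choosing the backend with the shortest observed response time. A minimal sketch (the real module measures live request latencies; the timings below are assumed values):

```python
# Assumed recent response times per backend, in seconds.
response_times = {"server1": 0.120, "server2": 0.045}

def pick_fastest(times: dict) -> str:
    # Prefer the backend with the minimum response time.
    return min(times, key=times.get)

print(pick_fastest(response_times))  # server2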

5. url_hash (third party)
Distributes requests according to a hash of the requested URL, so each URL is always directed to the same server. This is useful when the back-end servers are caches such as Squid.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
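The url_hash selection with crc32 can be sketched like this (an illustration of the idea, not the module's actual implementation):

```python
from binascii import crc32

backends = ["squid1:3128", "squid2:3128"]

def pick_by_uri(request_uri: str) -> str:
    # The crc32 of the URI deterministically selects one cache server.
    return backends[crc32(request_uri.encode()) % len(backends)]

# Repeated requests for the same URL always hit the same cache:
assert pick_by_uri("/static/logo.png") == pick_by_uri("/static/logo.png")
```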
