Nginx + Tomcat load balancing

Step 1: Install and configure Nginx. For the installation steps, refer to: http://www.cnblogs.com/klslb/p/8962379.html

Step 2: Prepare three Tomcats.

I have prepared three Tomcat instances here and renamed their directories so they are easy to tell apart.

Modify the startup ports of each Tomcat so they do not conflict, and keep the port numbers in mind!

To tell which Tomcat served a request, go into webapps\ROOT of each instance and change the title in its index.jsp.
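
For reference, here is a sketch of the port changes in conf/server.xml for the first instance (only the elements whose ports change are shown; 8006, 8081 and 8010 are example values I chose so that the HTTP port matches the upstream configuration in Step 3, and the second and third instances just need different, non-conflicting values such as 8082 and 8083 for HTTP):

<!-- conf/server.xml of the first Tomcat: only the ports that must be changed -->

<!-- Shutdown port: default is 8005, must be unique per instance -->
<Server port="8006" shutdown="SHUTDOWN">

  <!-- HTTP connector: the port the Nginx upstream will point at -->
  <Connector port="8081" protocol="HTTP/1.1"
             connectionTimeout="20000"
             redirectPort="8443" />

  <!-- AJP connector: default is 8009, must also be unique per instance -->
  <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />

</Server>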

 

Step 3: Open the Nginx installation directory, find nginx.conf in the conf directory, and replace its contents with the following configuration.

#Users and groups used by Nginx; not specified on Windows
#user nobody;

#Number of worker processes (usually equal to the number of CPUs, or twice that)
worker_processes 4;

#Error log path
#error_log logs/error.log;
#error_log logs/error.log notice;
error_log logs/error.log info;

#File that stores the Nginx PID
#pid logs/nginx.pid;


events {
    #Maximum number of connections allowed per worker
    worker_connections 1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    access_log  logs/access.log;
    client_header_timeout  3m;
    client_body_timeout    3m;
    send_timeout           3m;

    client_header_buffer_size    1k;
    large_client_header_buffers  4 4k;

    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;

    # Configure load balancing
    upstream localhost {
        #How does Nginx achieve load balancing? upstream currently supports the following distribution methods:
        #1. round robin (default)
        #   Each request is assigned to a different backend server in turn; if a backend goes down, it is removed automatically.
        #2. weight
        #   Specifies the polling probability; the weight is proportional to the access ratio. Used when backend performance is uneven.
        #3. ip_hash
        #   Each request is assigned according to the hash of the client IP, so a given visitor always hits the same backend, which solves the session problem.
        #4. fair (third party)
        #   Requests are assigned according to backend response time; backends with shorter response times are served first.
        #5. url_hash (third party)
        #   Requests are assigned according to the hash of the requested URL, so each URL goes to the same backend; this is more effective when the backend is cached.
        ip_hash;

        #Local IP + the three Tomcat ports
        server 192.168.12.209:8081 weight=1;
        server 192.168.12.209:8082 weight=2;
        server 192.168.12.209:8083 weight=3;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
            proxy_connect_timeout 3;    #timeout for Nginx to connect to the backend server (proxy connect timeout)
            proxy_send_timeout    30;   #time for the backend server to return data (proxy send timeout)
            proxy_read_timeout    30;   #time to wait for the backend response after the connection is established (proxy read timeout)
            proxy_pass http://localhost;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

 

Step 4: Start the three Tomcats, then reload Nginx and open http://localhost in a browser. (Reload command: nginx -s reload)
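
On Windows this might look roughly like the following; the Tomcat directory names below are just placeholders for the renamed instances from Step 2, so substitute your own paths:

REM Start each Tomcat instance (assumed directory names)
cd /d D:\tomcat-8081\bin
call startup.bat
cd /d D:\tomcat-8082\bin
call startup.bat
cd /d D:\tomcat-8083\bin
call startup.bat

REM From the Nginx installation directory: test the configuration, then reload it
cd /d D:\nginx
nginx -t
nginx -s reload

The nginx -t check is optional, but it catches configuration syntax errors before the reload.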

You can see that requests are spread across the three Tomcats, and the weight option can be used to set the polling probability: the weight is proportional to the share of requests a backend receives, which is useful when the backend servers have uneven performance, so the stronger machines are accessed preferentially. Note that with ip_hash enabled, requests from the same client IP always stick to the same Tomcat; comment out ip_hash if you want to observe the weighted round-robin distribution, for example with the quick test below.
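
For a quick check from the command line (this assumes curl is available, as it is on recent Windows 10/11, and that ip_hash is commented out so the weighted round robin is visible), repeat the request a few times and look at the returned title:

REM Run a few times; the title shows which Tomcat answered
curl -s http://localhost/ | findstr /i "title"

With weights 1, 2 and 3, the three titles should appear in roughly a 1:2:3 ratio over many requests; with ip_hash left on, the same title comes back every time for a given client IP.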

 
