Nginx proxy + load balancing across multiple Tomcat servers

Earlier I wrote articles about installing the nginx service on CentOS and about installing and deploying multiple Tomcat application servers on CentOS. Those two articles treat each deployment separately; there is no linkage between nginx and Tomcat. Below I record how to configure nginx to proxy and load-balance multiple Tomcats.

 

Preparation:

  1. The nginx server runs normally on its own

  2. Multiple Tomcat instances run independently (each Tomcat instance here acts as one node of the same application); the server.xml port changes needed to run two instances on one host are sketched right after this list
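
    When both Tomcat instances run on the same host, their ports must not collide. A minimal sketch of the conf/server.xml changes for the second instance, assuming default Tomcat port numbers (the first instance keeps 8005/8080/8009):

    <!-- conf/server.xml of the second Tomcat instance -->
    <!-- shutdown port: 8005 -> 8006 -->
    <Server port="8006" shutdown="SHUTDOWN">
      ...
      <!-- HTTP connector: 8080 -> 8081, matching the upstream block configured below -->
      <Connector port="8081" protocol="HTTP/1.1"
                 connectionTimeout="20000" redirectPort="8443" />
      <!-- AJP connector: 8009 -> 8010 -->
      <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
      ...
    </Server>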

     

    Simulated Scenario:

    In production we often face high user concurrency. If only a single application server is deployed, the load may overwhelm it and even bring it down. To avoid depending on one server that can go down for any number of reasons, we turn to clustering: deploy the same application to several servers and distribute user requests across them through a single proxy server. This has three advantages:

      1. It directly reduces the concurrency pressure on any single server.

      2. It improves the reliability of the application's external service, because the failure of any one of the servers no longer affects users.

      3. It simplifies application upgrades: the servers can be upgraded one at a time, making the upgrade effectively zero-downtime.

     

    Nginx + Tomcat load balancing configuration:

    The main task is to edit the nginx.conf file in nginx's conf directory. The full configuration is as follows:

    user root root;
    worker_processes 2;
    pid /opt/nginx/logs/nginx.pid;
    error_log /opt/nginx/logs/error.log;
    worker_rlimit_nofile 102400;

    events {
        use epoll;
        worker_connections 102400;
    }

    http {
        include       mime.types;
        default_type  application/octet-stream;
        fastcgi_intercept_errors on;
        access_log    /opt/nginx/logs/access.log;
        charset  utf-8;
        server_names_hash_bucket_size 128;
        client_header_buffer_size 4k;
        large_client_header_buffers 4 32k;
        client_max_body_size 300m;
        sendfile on;
        tcp_nopush on;
        keepalive_timeout 60;
        tcp_nodelay on;
        client_body_buffer_size 512k;

        # timeouts and buffers for proxied requests
        proxy_connect_timeout 5;
        proxy_read_timeout 60;
        proxy_send_timeout 5;
        proxy_buffer_size 16k;
        proxy_buffers 4 64k;
        proxy_busy_buffers_size 128k;
        proxy_temp_file_write_size 128k;

        # gzip compression
        gzip on;
        gzip_min_length 1k;
        gzip_buffers 4 16k;
        gzip_http_version 1.1;
        gzip_comp_level 2;
        gzip_types text/plain application/x-javascript text/css application/xml;
        gzip_vary on;

        log_format main '$http_x_forwarded_for - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" $request_time $remote_addr';

        # backend pool: the two Tomcat instances
        upstream web_app {
            server 192.168.32.128:8080 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.32.128:8081 weight=1 max_fails=2 fail_timeout=30s;
        }

        server {
            listen 80;
            server_name chenfeng.test.com;
            index index.jsp index.html index.htm;
            root /jeff/zx/;

            location / {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://web_app;
                expires 3d;
            }
        }
    }

    Notice that in the upstream web_app block I point nginx at the Tomcat servers that were already installed and configured.
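
    For reference, round-robin (used here) is nginx's default balancing strategy; nginx also ships alternatives such as ip_hash, which pins each client IP to one backend and is useful when sessions are not replicated between the Tomcats. A sketch of that variant:

    upstream web_app {
        ip_hash;   # route each client IP to the same backend (session stickiness)
        server 192.168.32.128:8080 max_fails=2 fail_timeout=30s;
        server 192.168.32.128:8081 max_fails=2 fail_timeout=30s;
    }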

    With that, nginx and Tomcat are configured for load balancing; the next step is to restart the nginx server.
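
    A quick way to do this from the shell (assuming nginx is installed under /opt/nginx, as the paths in the nginx.conf above suggest):

    # check the configuration for syntax errors first
    /opt/nginx/sbin/nginx -t
    # then reload nginx so the new configuration takes effect without dropping connections
    /opt/nginx/sbin/nginx -s reload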

    Now when we open the original localhost address, the page returned is one of the two Tomcats rather than the old nginx welcome page.

    [Screenshot: before integration, localhost returns the nginx welcome page]



      

    [Screenshot: after integration, localhost returns the tomcat1 page]




     
      

    If I then refresh the browser, the response now comes from tomcat2. [Screenshot: the tomcat2 page]



      

    This verifies that the configuration is working: nginx reverse-proxies the tomcat1 and tomcat2 application services and switches between them automatically on each refresh, which means nginx is load balancing exactly as configured.
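
    Refreshing a browser works, but the alternation can also be checked from the command line. A sketch, assuming each Tomcat's test page contains the text "tomcat1" or "tomcat2":

    # issue several requests; with round-robin the backends should alternate
    for i in 1 2 3 4; do
        curl -s http://localhost/ | grep -o 'tomcat[12]'
    done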

     

    Next I want to see how the nginx server behaves when one of the Tomcat application servers goes down. To simulate the outage I simply stop the tomcat1 server.
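
    A sketch of the shutdown step, assuming the two instances live under the hypothetical directories /opt/tomcat1 and /opt/tomcat2:

    # stop the first Tomcat instance to simulate an outage
    /opt/tomcat1/bin/shutdown.sh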

     

     

    After stopping tomcat1, its own service address localhost:8080 can no longer be reached directly. [Screenshot: the direct request to tomcat1 fails]
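
    The same check from the command line (the request should now fail at the TCP level, since nothing is listening on 8080):

    # with tomcat1 stopped, a direct request to its port should be refused
    curl -I http://localhost:8080/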



      

    If I then open the nginx proxy address localhost, it still responds normally; the only difference is that every refresh now returns the tomcat2 page, because nginx has automatically removed the downed tomcat1 from the load-balancing pool. [Screenshot: only the tomcat2 page is returned]
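
    This automatic removal is governed by the max_fails and fail_timeout parameters in the upstream block: once a server fails max_fails times within fail_timeout, nginx marks it unavailable for fail_timeout and only then probes it again. With the values used above:

    # after 2 failures within 30s, nginx stops sending requests to this server
    # for 30s, then automatically tries it again
    server 192.168.32.128:8080 weight=1 max_fails=2 fail_timeout=30s;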



      

    What happens if I now start tomcat1 again?
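
    Again as a sketch, using the same hypothetical directory layout as before:

    # bring the first Tomcat instance back up
    /opt/tomcat1/bin/startup.sh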



      

    Once tomcat1 is back up, its address can be reached directly again. [Screenshot: the tomcat1 page]



      

    If I then open the nginx address and refresh a few more times, I can see that tomcat1 has been automatically added back into nginx's load-balancing pool, and the responses alternate between tomcat1 and tomcat2 again. This verifies all the scenarios and covers the three main benefits described at the beginning.

     

    Postscript: I have not explained the meaning of each configuration item in nginx.conf here; instead I'll simply refer to other people's articles. If you are interested, the two links below explain them quite clearly:

    http://blog.csdn.net/tjcyjd/article/details/50695922

    http://www.cnblogs.com/sayou/p/3319635.html

     

     

 
