Tomcat can't handle the load? Nginx + Tomcat + Redis: dynamic/static separation, load balancing, and session sharing

The setup described in this article is fairly complex and involves many technical points, so rather than covering every detail it walks through the main workflow, hopefully giving you some ideas.

The server most familiar to Java programmers is Tomcat.

As we all know, no matter how much you tune it, a single Tomcat instance has limited load and concurrency capacity.

But that does not mean the host machine running Tomcat is equally limited. In other words, if your project is built on Tomcat, there are techniques you can use to raise the server's access capacity.

A single Tomcat can only hold a limited number of simultaneous connections, yet the host machine may still have plenty of idle capacity. To maximize utilization of the host,

the distributed server architecture came into being.

Here is a brief introduction to the distributed structure; first, look at the diagram.


In the simplest terms: one server with very high load capacity accepts all requests, but it does not process them itself. Its job is to divide those requests among n destinations, forwarding each one to another server that performs the actual work. It is like airport security: passengers (the requests from various computers and phones) all want to reach the departure hall, but must pass through security first. With only one security lane, everyone queues one by one and things move slowly; with n lanes screening in parallel, far more people get through at the same time. The security checkpoint is the load-balancing server: it distributes requests across multiple Tomcats to increase the system's overall access capacity.
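As an illustration only, the round-robin dispatch that the load server performs (nginx's default strategy when all weights are equal) can be sketched in a few lines of Java; the backend addresses are just placeholders:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of round-robin dispatch: each incoming request is handed
// to the next backend in the list, cycling back to the start.
public class RoundRobin {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobin(List<String> backends) {
        this.backends = backends;
    }

    // Pick the backend that will handle the next request.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        // Placeholder Tomcat addresses, mirroring the upstream block shown later.
        RoundRobin lb = new RoundRobin(List.of("127.0.0.1:9080", "127.0.0.1:9081"));
        for (int n = 0; n < 4; n++) {
            System.out.println("request " + n + " -> " + lb.pick());
        }
    }
}
```

With two equally weighted backends this simply alternates, which is exactly why, later in the article, consecutive requests from one browser land on different Tomcats.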

So what does this architecture need?

First, an nginx server (acting as the load balancer, i.e. the security checkpoint).

Then Tomcat and the project itself.

With these two pieces in place, let's analyze.

The structure works like this:

For example, nginx listens on port 80.

Then we start two Tomcats on two further ports; in the configuration below they run on 9080 and 9081. Normally this means two copies of the same project.
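For the second Tomcat instance, the ports in conf/server.xml must be changed so the two instances do not conflict. A minimal sketch, assuming the HTTP ports used by the upstream block below; the shutdown and AJP port numbers are placeholders:

```xml
<!-- conf/server.xml of the second Tomcat instance: every listening port
     must differ from the first instance's, not only the HTTP connector. -->
<Server port="8006" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- HTTP connector: first instance uses 9080, this one 9081 -->
    <Connector port="9081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- The AJP connector (if enabled) also needs its own port -->
    <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
    <!-- Engine/Host definitions stay the same as the first instance -->
  </Service>
</Server>
```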

A useful property of nginx is that the request path is preserved after nginx forwards the request.

So, with multiple Tomcats, we can optimize further by hosting the project's images, CSS, JS and other static resources on nginx itself.

Since the multiple Tomcats exist only to share the load, serving a single set of static resources saves a lot of space.

First, look at the nginx configuration.

(Installation steps are skipped.)

http {
    #Extension and file type mapping table
    include       mime.types;
    #default type
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #log
    access_log /usr/local/var/log/nginx/access.log;
    error_log /usr/local/var/log/nginx/error.log;
    #gzip Compressed Transmission
    gzip on;
    gzip_min_length 1k; #Minimum 1K
    gzip_buffers 16 64K;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain application/x-javascript text/css application/xml application/javascript;
    gzip_vary on;
    # load balancing group
    upstream web_app {   
 	server 127.0.0.1:9080 weight=1 max_fails=2 fail_timeout=30s;   
 	server 127.0.0.1:9081 weight=1 max_fails=2 fail_timeout=30s;   
    }
    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
           proxy_next_upstream http_502 http_504 error timeout invalid_header;   
	    proxy_set_header Host  $host:80;   
	    proxy_set_header X-Real-IP $remote_addr;   
	    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   
 	    proxy_pass http://web_app;   
  
        }
	#Static resource storage directory (all static resources of the project can be placed here)
	location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$   
	{   
   
		root /usr/local/etc/nginx/html;  
		expires      3d;   
	}
        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }
    server{
        server_name localhost;
        listen 443 ssl;
        root html;
        location / {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;   
	    proxy_set_header Host  $host:80;   
	    proxy_set_header X-Real-IP $remote_addr;   
	    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   
 	    proxy_pass http://web_app;  
        }
        ssl on;
        ssl_certificate conf/ssl/client.pem;
        ssl_certificate_key conf/ssl/client.key.unsecure;
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
    include servers/*;
}

There is also a step here that configures the HTTPS protocol. I won't go into detail; if you are interested, search elsewhere for a full explanation.

The next step is to create two Tomcat servers on the corresponding ports.

The project is an SSM-based back-office permission system.





The specific details of these preparations will not be covered; the main process remains the focus.

At this point, start nginx and the Tomcats.

You can now access localhost.


We can try to log in, and we will find that we cannot.

That is because the project's default access control is session-based.

Since the server is now distributed, the sessions of the individual Tomcats do not interoperate.

Because we always go through the same nginx, we cannot feel the difference; in fact, each request is forwarded in turn to one of the Tomcats.

That is, when we log in we may hit tomcat1, the login verification may then go to tomcat2, and the next request may go back to tomcat1. The session therefore cannot be preserved, and the session ID changes on every visit.

This makes logging in impossible.

So there is one more task to do here:

find a way to synchronize the sessions, so that the login state is maintained across servers.
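The problem and the fix can be illustrated with a toy sketch; a ConcurrentHashMap stands in for Redis here, so this is only an analogy, not the real Spring Session mechanism:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model: a Tomcat with its own session map loses the login state when
// nginx forwards the next request to the other instance; a single shared
// store (Redis in the real setup) fixes this.
class TomcatInstance {
    // Each instance's private session map (the default Tomcat behaviour).
    private final Map<String, String> localSessions = new ConcurrentHashMap<>();
    // One store shared by all instances (the role Redis plays).
    static final Map<String, String> sharedStore = new ConcurrentHashMap<>();

    void loginLocal(String sessionId, String user)  { localSessions.put(sessionId, user); }
    String checkLocal(String sessionId)             { return localSessions.get(sessionId); }

    void loginShared(String sessionId, String user) { sharedStore.put(sessionId, user); }
    String checkShared(String sessionId)            { return sharedStore.get(sessionId); }
}

public class SessionSharingDemo {
    public static void main(String[] args) {
        TomcatInstance tomcat1 = new TomcatInstance();
        TomcatInstance tomcat2 = new TomcatInstance();

        // Local sessions: login on tomcat1, next request lands on tomcat2 -> user is gone.
        tomcat1.loginLocal("abc123", "admin");
        System.out.println("local lookup on tomcat2: " + tomcat2.checkLocal("abc123"));   // null

        // Shared store: the login performed on tomcat1 is visible from tomcat2.
        tomcat1.loginShared("abc123", "admin");
        System.out.println("shared lookup on tomcat2: " + tomcat2.checkShared("abc123")); // admin
    }
}
```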

All that is needed here is Redis and Spring Session.

First, the dependencies:

	<dependency>
	    <groupId>redis.clients</groupId>
	    <artifactId>jedis</artifactId>
	    <version>2.8.1</version>
	    <type>jar</type>
	    <scope>compile</scope>
	</dependency>
	<dependency>
	    <groupId>org.springframework.session</groupId>
	    <artifactId>spring-session-data-redis</artifactId>
	    <version>1.2.1.RELEASE</version>
	</dependency>
	<dependency>
	    <groupId>org.springframework.data</groupId>
	    <artifactId>spring-data-redis</artifactId>
	    <version>1.5.2.RELEASE</version>
	</dependency>
	<dependency>
	    <groupId>org.springframework.session</groupId>
	    <artifactId>spring-session</artifactId>
	    <version>1.3.1.RELEASE</version>
	</dependency>

Then all we need to do is install Redis and configure Spring Session.

(The Redis installation step is omitted.)

It is configured in the applicationContext as follows:

	<bean id="redisHttpSessionConfiguration"
	      class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration">
	    <property name="maxInactiveIntervalInSeconds" value="600"/>
	</bean>
	
	<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
	    <property name="maxTotal" value="100" />
	    <property name="maxIdle" value="10" />
	</bean>
	
	<bean id="jedisConnectionFactory"
	      class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" destroy-method="destroy">
	    <property name="hostName" value="192.168.50.168"/>
	    <property name="port" value="6379"/>
	    <property name="password" value="" />
	    <property name="timeout" value="3000"/>
	    <property name="usePool" value="true"/>
	    <property name="poolConfig" ref="jedisPoolConfig"/>
	</bean>


The configuration in web.xml is as follows

  <filter>
	  <filter-name>springSessionRepositoryFilter</filter-name>
	  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
  </filter>
  <filter-mapping>
	  <filter-name>springSessionRepositoryFilter</filter-name>
	  <url-pattern>/*</url-pattern>
  </filter-mapping>

Once these two places are configured (the DelegatingFilterProxy delegates to the springSessionRepositoryFilter bean that RedisHttpSessionConfiguration registers), Spring persists the session to Redis for us.

That way, no matter which server handles the request, the current login information can be read from Redis.

Session sharing is achieved.


Once these two configurations are complete, login verification and the other steps work exactly as before.

The springSessionRepositoryFilter automatically intercepts our requests and manages the session inside its processing.

So we can operate on the session exactly as on a single server.

With this, the load problem is solved and development proceeds just as on a single server, maximizing the use of the host machine's resources and improving the project's load capacity.

I will put the specific code on GitHub when I have time: https://github.com/keaderzyp
