Tomcat cluster introduction

1. Tomcat cluster introduction

In a typical production environment, a single Tomcat server can handle on the order of four to five
hundred concurrent requests. In most cases, as the business grows and traffic increases beyond that
level, a single Tomcat server can no longer bear the load, and multiple Tomcat servers must be
organized to share it.

In production, stand-alone deployment is therefore seen mainly in relatively small companies; most
deployments use multiple machines. The common Tomcat deployment architectures are as follows:

  1. Stand-alone deployment: a single Tomcat server runs independently and accepts user
    requests directly.

  2. A single Tomcat server with Nginx or Httpd in front as a reverse proxy: Nginx serves
    static content itself and proxies dynamic JSP requests to Tomcat, as in the LNMT or
    LAMT stacks.

    • LNMT: Linux + Nginx + MySQL + Tomcat
    • LAMT: Linux + Apache (Httpd) + MySQL + Tomcat
  3. Nginx reverse-proxying multiple Tomcat servers: one Nginx in front performs reverse
    proxying and load-balancing scheduling across multiple Tomcat instances. This is best
    suited to purely dynamic pages deployed on Tomcat.

    • LNMT: Linux + Nginx + MySQL + Tomcat
  4. Nginx reverse-proxying multiple Tomcat servers, with another Nginx in front of each
    Tomcat to receive requests, as in LNNMT.

    • LNNMT: Linux + Nginx + Nginx + MySQL + Tomcat

2. Load balancing strategy

  1. Round Robin
    Weighted round robin is the default load-balancing strategy of the upstream module.
    Requests are distributed to the backend servers one by one in order. By default each
    server's weight is 1; if server hardware performance differs significantly, assign
    different weights to different servers.
upstream serverpool {
   server localhost:8000;  # the server directive declares a backend server
   server localhost:9000;
   server www.suosuoli.cn weight=1 fail_timeout=5s max_fails=3;
}

server {
    listen 88;
    server_name localhost;
    location / {
        proxy_pass http://serverpool/;
    }
}

Description of the server directive parameters in the upstream module:

parameter     description
weight        server weight for weighted round robin; default 1
fail_timeout  used together with max_fails: the window within which failures are counted,
              and also how long the server is then considered down (default 10s)
max_fails     maximum number of failed requests within fail_timeout; once all attempts in
              that window fail, the server is considered down
backup        marks the server as a backup; it receives requests only when the primary
              servers are stopped
down          marks the server as permanently down
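
The backup and down parameters are not shown in the example above; a minimal sketch of their use
might look like this (the addresses are placeholders, not part of the example environment):

upstream serverpool {
    server 192.168.0.11:8080 weight=2 max_fails=3 fail_timeout=5s;
    server 192.168.0.12:8080 down;     # taken out of rotation permanently
    server 192.168.0.13:8080 backup;   # used only when the active servers are unavailable
}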
  2. ip_hash
    Distributes requests based on the client IP. This method ensures that requests from
    the same client are always sent to the same server, keeping the session between that
    client and the server. Each visitor is thus pinned to a fixed backend server, which
    works around the fact that sessions cannot cross servers.
upstream serverpool {
   ip_hash;
   server 192.168.192.122:8080 weight=3;
   server 192.168.192.133:8080 weight=1;
}
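
Conceptually, ip_hash hashes the first 24 bits of the client's IPv4 address (as noted in section
3.4.3 below) and maps the result onto the backends. The following is a rough Python illustration
of the idea, not nginx's actual algorithm, and it ignores weights:

import hashlib

backends = ['192.168.192.122:8080', '192.168.192.133:8080']

def pick_backend(client_ip: str) -> str:
    prefix = '.'.join(client_ip.split('.')[:3])        # first 24 bits of the IPv4 address
    digest = hashlib.md5(prefix.encode()).hexdigest()  # any stable hash would do
    return backends[int(digest, 16) % len(backends)]

print(pick_backend('10.0.5.7'))     # every 10.0.5.x client gets the same backend
print(pick_backend('10.0.5.200'))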

3. Tomcat session sharing

When single-machine Tomcat evolves into a multi-machine, multi-tier deployment, a problem
emerges: the Session. The problem stems from the fact that HTTP was designed without
anticipating how the protocol would later come to be used.

The stateless, connection-oriented, and short-connection characteristics of HTTP

  • Stateless: the server cannot relate two requests to each other; even if both come
    from the same browser, nothing in the protocol marks them as coming from the same one.
    Cookies and the session mechanism were introduced later to make that link (see the
    schematic exchange after this list).
    • When a browser first requests a server over HTTP and the server uses session
      technology, the server generates a random value, the SessionID, and sends it to the
      browser; the browser keeps the SessionID in a cookie. This cookie is generally not
      persisted and disappears when the browser is closed.
      On every subsequent HTTP request the browser sends the SessionID back, and the
      server knows by comparison who is currently visiting.
    • The Session is usually kept in server-side memory; without persistence it is easily lost.
    • Sessions expire periodically. If the browser visits again after expiry, the server
      no longer finds the ID and issues a new SessionID to the browser.
    • Switching browsers also results in a new SessionID.
  • Connection-oriented: HTTP/1.x runs on top of TCP, which is connection-oriented, so a
    connection takes a 3-way handshake to open and a 4-way handshake to close.
  • Short connections: before HTTP/1.1 each request used its own connection, and creating
    and destroying TCP connections is costly, which puts heavy load on the server. Since
    HTTP/1.1, keep-alive is supported and enabled by default: an opened connection is kept
    for a (configurable) period and reused by the browser, reducing server load and
    improving efficiency.
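
A schematic exchange showing the cookie mechanism described above (headers abbreviated;
Tomcat's session cookie is named JSESSIONID):

GET /index.jsp HTTP/1.1
Host: t1.suosuoli.cn

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=9C949FA4AFCBE9337F5F0669548BD4DF; Path=/; HttpOnly

GET /index.jsp HTTP/1.1
Host: t1.suosuoli.cn
Cookie: JSESSIONID=9C949FA4AFCBE9337F5F0669548BD4DF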

If the application requires users to log in, and Nginx acts as a reverse proxy in front of
the backend Tomcats, Session-related problems will appear: behind the reverse proxy, a
client's requests may not reach the server that holds its Session. To solve this, a
Session-sharing mechanism can be used. There are several options:

3.1 ip_hash strategy

With nginx's ip_hash policy, the requests initiated by one user are dispatched only to a
specific server, say Tomcat A, while another user's requests are handled only by Tomcat B:
requests from the same user are always forwarded to the same Tomcat.

In Nginx's reverse proxy, the ip_hash policy is also called source-IP, i.e. source-address
hashing. If HAProxy is used as the proxy server, cookies can be used instead to keep the
session.

3.2 Session replication cluster

The principle of session replication is that the Tomcat cluster's internal multicast
mechanism replicates every session to all Tomcat hosts in the cluster, i.e. every Tomcat
server stores all of the current session information.

Shortcomings

  • The number of Tomcat nodes should not be too large, because the constant messaging and
    mutual session synchronization consume too much bandwidth
  • Every node holds all sessions, so memory usage is excessive

3.3 Session Server

A shared Session server uses memcached or redis as storage for the session information,
which the Tomcat servers then query.

3.4 Simple Nginx Scheduling and Session Sharing Example

3.4.1 Example usage environment

Environment planning

IP               hostname  role       software
192.168.142.151  t0        scheduler  Nginx, HTTPD
192.168.142.152  t1        tomcat1    JDK8, Tomcat8
192.168.142.153  t2        tomcat2    JDK8, Tomcat8

Each host resolves the names through its hosts file:

192.168.142.151 t0.suosuoli.cn t0
192.168.142.152 t1.suosuoli.cn t1
192.168.142.153 t2.suosuoli.cn t2

3.4.2 Tomcat configuration

First write the JSP used for the test, located at /data/webapps/ROOT/index.jsp on the t1 and t2 nodes:

<%@ page import="java.util.*" %>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>lbjsptest</title>
</head>
<body>
<div>On  <%=request.getServerName() %></div>
<div><%=request.getLocalAddr() + ":" + request.getLocalPort() %></div>
<div>SessionID = <span style="color:blue"><%=session.getId() %></span></div>
<%=new Date()%>
</body>
</html>

t1 virtual host configuration

<Engine name="Catalina" defaultHost="t1.suosuoli.cn">
    <Host name="t1.suosuoli.cn" appBase="/data/webapps" autoDeploy="true" />
</Engine>

t2 virtual host configuration

<Engine name="Catalina" defaultHost="t2.suosuoli.cn">
    <Host name="t2.suosuoli.cn" appBase="/data/webapps" autoDeploy="true" />
</Engine>

The configuration of each Tomcat is similar:

# Configure the environment variables
vim /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH

# Create the application path
mkdir -pv /data/webapps/ROOT

# Write the test JSP file shown above
vim /data/webapps/ROOT/index.jsp

# Copy the configuration to the other node (server.xml lives in conf/)
scp -r server.xml 192.168.142.153:/usr/local/tomcat/conf

# Start the Tomcat service
startup.sh

3.4.3 Nginx configuration

upstream backendpool {
    #ip_hash;  # disabled at first to watch the SessionID change under round robin; enable later for session stickiness
    server t1.suosuoli.cn:8080;
    server t2.suosuoli.cn:8080;
}

server {
    location ~* \.(jsp|do)$ {
        proxy_pass http://backendpool;
    }
}

Access http://t0.suosuoli.cn/index.jsp to test; you can see the effect of round-robin
scheduling. Use the ip_hash directive in the upstream to hash the client IP address; this
hash uses the first 24 bits of an IPv4 address or the entire IPv6 address. After changing
the configuration, reload the nginx service and test again. Stop the Tomcat holding the
current Session, restart it, and observe how the Session changes.
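
One quick way to observe the scheduling from a shell is to request the test page repeatedly
and extract the backend and SessionID lines; this assumes the host names from the planning
table above:

# Without ip_hash the backend alternates between t1 and t2;
# with ip_hash the same client IP always reaches the same backend.
for i in 1 2 3 4; do
    curl -s http://t0.suosuoli.cn/index.jsp | grep -E 'On|SessionID'
done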

3.5 Simple Httpd Scheduling

The httpd -M command shows the proxy_balancer_module, the module httpd uses for load
balancing. When integrating Tomcat with Httpd, two protocols can be used for request load
balancing:

method               required modules
http load balancing  mod_proxy, mod_proxy_http, mod_proxy_balancer
ajp load balancing   mod_proxy, mod_proxy_ajp, mod_proxy_balancer

3.5.1 Httpd configuration instructions

# Disable the default httpd virtual host
~$ cd /etc/httpd/conf
~$ vim httpd.conf
# comment out: DocumentRoot "/var/www/html"
~$ cd ../conf.d
~$ vim vhosts.conf  # edit the virtual host
    ...
    # Proxy to a balancer
    ProxyPass [path] !|url [key=value [key=value ...]]
    # Balancer members
    BalancerMember [balancerurl] url [key=value [key=value ...]]
    # Set balancer or member parameters
    ProxySet url key=value [key=value ...]
    ...
~$ httpd -t
~$ systemctl start httpd

ProxyPass and BalancerMember directive parameters:

parameter  default  description
min        0        minimum connection-pool capacity
max        1 ~ n    maximum connection-pool capacity
retry      60       seconds Apache waits after a request to the backend server fails;
                    0 means retry immediately

Balancer parameters:

parameter      default     description
loadfactor     -           weight of the backend server, in the range 1 - 100
lbmethod       byrequests  scheduling method: byrequests schedules by weighted request
                           count; bytraffic by weighted traffic count; bybusyness by the
                           current load of each backend server
maxattempts    1           number of failover attempts before giving up the request;
                           should not exceed the total number of nodes
nofailover     Off         set to On to forbid failover when the backend server holds no
                           copy of the Session; Off allows failover
stickysession  -           name of the scheduler's sticky-session cookie, e.g. JSESSIONID
                           or PHPSESSIONID depending on the backend language

The ProxySet directive can also take the above parameters, as in the configuration examples below:

<Proxy "balancer://hotcluster">
    BalancerMember "http://www2.example.com:8080" loadfactor=1
    BalancerMember "http://www3.example.com:8080" loadfactor=2
    ProxySet lbmethod=bytraffic
</Proxy>
<Proxy "http://backend">
    ProxySet keepalive=On
</Proxy>
ProxySet "balancer://foo" lbmethod=bytraffic timeout=15
ProxySet "ajp://backend:7001" timeout=15

The content of conf.d/vhosts.conf is as follows:

<VirtualHost *:80>
    ProxyRequests     Off
    ProxyVia          On
    ProxyPreserveHost On
    ProxyPass        / balancer://lbtomcats/
    ProxyPassReverse / balancer://lbtomcats/
</VirtualHost>
<Proxy balancer://lbtomcats>
    BalancerMember http://t1.suosuoli.cn:8080 loadfactor=1
    BalancerMember http://t2.suosuoli.cn:8080 loadfactor=2
</Proxy>

The loadfactor values are set to 1:2 for easy observation. The observed scheduling is weighted round robin.

Using session stickiness: modify conf.d/vhosts.conf

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<VirtualHost *:80>
    ProxyRequests     Off
    ProxyVia          On
    ProxyPreserveHost On
    ProxyPass        / balancer://lbtomcats/
    ProxyPassReverse / balancer://lbtomcats/
</VirtualHost>
<Proxy balancer://lbtomcats>
    BalancerMember http://t1.suosuoli.cn:8080 loadfactor=1 route=Tomcat1
    BalancerMember http://t2.suosuoli.cn:8080 loadfactor=2 route=Tomcat2
    ProxySet stickysession=ROUTEID
</Proxy>

Observation shows that the Session stays the same: requests always reach the same Tomcat server.

AJP scheduling: modify conf.d/vhosts.conf

<VirtualHost *:80>
    ProxyRequests     Off
    ProxyVia          On
    ProxyPreserveHost On
    ProxyPass        / balancer://lbtomcats/
    ProxyPassReverse / balancer://lbtomcats/
</VirtualHost>
<Proxy balancer://lbtomcats>
    BalancerMember ajp://t1.suosuoli.cn:8009 loadfactor=1 route=Tomcat1
    BalancerMember ajp://t2.suosuoli.cn:8009 loadfactor=2 route=Tomcat2
    #ProxySet stickysession=ROUTEID
</Proxy>

First disable ProxySet stickysession=ROUTEID to see requests switch between backends, then
enable it to see the stickiness: once enabled, the Session no longer changes and requests
keep reaching the same Tomcat server.

Although the approach above lets a client keep reaching the same Tomcat for a period of
time, avoiding session loss on switching, the Session is still lost if the Tomcat node
goes down. Suppose there are two nodes A and B, both persisting their sessions. If Tomcat
A goes offline and the user is switched to Tomcat B, the user obtains a Tomcat B Session;
even when Tomcat A comes back online with its persisted Session, that Session is no longer
of any use.

3.5.2 Tomcat Configuration Instructions

In the Tomcat configuration, use the jvmRoute attribute on the Engine element. Add a
jvmRoute to the t1 and t2 configurations respectively:

<Engine name="Catalina" defaultHost="t1.suosuoli.cn" jvmRoute="Tomcat1">
<Engine name="Catalina" defaultHost="t2.suosuoli.cn" jvmRoute="Tomcat2">

With this, the SessionID generated for a session looks like this:
SessionID = 9C949FA4AFCBE9337F5F0669548BD4DF.Tomcat2

4. Example of Tomcat cluster using multicast replication session

For configuration details, refer to the official Tomcat clustering documentation.

Configuration example:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="230.100.100.8"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

Explanation of the configuration above:

  • Cluster: the cluster configuration
  • Manager: the Session manager configuration
  • Channel: the channel configuration
    • Membership: membership detection; which multicast address and port to use, the
      heartbeat interval in ms, the drop timeout in ms, and so on. Nodes using the same
      multicast address and port are considered members of the same group. Change the
      multicast address in your deployment to prevent conflicts.
    • Receiver: a multi-threaded receiver for the heartbeats and session information of
      the other nodes. By default, ports from 4000 to 4100 are tried in order until an
      available one is found.
      • address="auto" may bind to 127.0.0.1, so it must be changed to a usable IP
    • Sender: a multi-threaded sender that internally uses a TCP connection pool.
    • Interceptor: interceptors
  • Valve
    • ReplicationValve: determines which requests need Session inspection, whether the
      Session data has changed, and whether replication needs to start
  • ClusterListener
    • ClusterSessionListener: the cluster session listener

If <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> is added in the
<Engine> context, all virtual hosts can enable Session replication; added in the <Host>
context, only that virtual host can enable it. Finally, Session replication is only used
if the application itself enables it.
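
A minimal placement sketch, using the virtual host from this example (the cluster body is
the configuration shown above, elided here):

<Engine name="Catalina" defaultHost="t1.suosuoli.cn">
    <Host name="t1.suosuoli.cn" appBase="/data/webapps" autoDeploy="true">
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
            <!-- Manager, Channel, Valve, Deployer, ClusterListener as in the example above -->
        </Cluster>
    </Host>
</Engine>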

The prerequisites for this example are:

  • Time is synchronized; make sure the NTP or Chrony service runs properly. # systemctl status chronyd
  • Firewall rules are disabled. # systemctl stop firewalld

This time, put the multicast replication configuration into the default virtual host, that
is, under Host. Pay special attention to changing the address attribute of Receiver to an
IP address through which this machine can be reached.

In the server.xml of t1, as follows

<Host name="t1.magedu.com" appBase="/data/webapps" autoDeploy="true" >
    <!-- 其他略去 -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="192.168.142.152"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>

In the server.xml of t2, as follows

<Host name="t2.magedu.com" appBase="/data/webapps" autoDeploy="true" >
    <!-- 其他略去 -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="192.168.142.153"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>

After Tomcat restarts, the ss command (e.g. ss -tnl) shows Tomcat listening on port 4000.

After the above preparation, web.xml must be configured. In the application's WEB-INF
directory, copy a web.xml from the global configuration and add the <distributable/>
sub-tag to its <web-app> tag to make the application distributable:

# cp /usr/local/tomcat/conf/web.xml /data/webapps/ROOT/WEB-INF/

Restart all Tomcats; even when load balancing dispatches requests to different nodes, the
returned SessionID now stays the same.
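
The edit to the copied web.xml is minimal; a sketch (attributes on <web-app> abbreviated,
only the added tag matters):

<web-app>
    <distributable/>
    <!-- the rest of the copied web.xml remains unchanged -->
</web-app>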

The example above is simple session replication and sharing, implemented with Tomcat's
built-in clustering and multicast mechanism. This approach is only usable for small
deployments with a few Tomcat instances. With too many Tomcat servers, say N of them,
every user login on one Tomcat must be copied to the other N-1 servers in the cluster, and
all this mutual session copying consumes a great deal of bandwidth. According to the
official documentation, this solution is not suitable for more than 4 Tomcat servers.
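
A back-of-envelope sketch of that replication traffic; every number below is a made-up
assumption, purely to show the (N-1) scaling:

# Hypothetical figures: 8 nodes, 10 KB per session write, 500 writes/s cluster-wide.
nodes = 8
session_kb = 10
writes_per_sec = 500

# Each session write must be copied to the other N-1 nodes.
replication_kb_per_sec = writes_per_sec * session_kb * (nodes - 1)
print(replication_kb_per_sec)  # 35000 KB/s of replication traffic alone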

5. Tomcat cluster uses session server to share session example

5.1 NoSQL concepts

NoSQL is an umbrella term for non-SQL, non-traditional relational databases. The term
NoSQL was coined in 1998; in 2009 it was revived to refer to non-relational, distributed
database design patterns that forgo ACID guarantees.

With the Internet era came explosive data growth, and database technology has to evolve
rapidly to meet new business needs. With the rise of the mobile Internet and the Internet
of Things, NoSQL is equally important in big-data technology.

The importance of NoSQL databases can be seen in the database ranking at db-engines.com.

Classification of NoSQL databases

storage type     representative products
Key-value Store  redis, memcached
Document Store   mongodb, CouchDB
Column Store     HBase, Cassandra
Graph DB         Neo4j
Time Series      InfluxDB

5.2 Memcached

Memcached is a memory caching system based on key-value storage. It supports only
serializable data types and does not support persistence.

5.2.1 Memcached memory allocation mechanism

Applications need memory to store data, but in a cache system memory is allocated and
released very frequently, which easily produces heavy memory fragmentation until no
contiguous memory is left. Memcached therefore uses the Slab Allocator mechanism to
allocate and manage memory.

Page: the memory space allocated to a slab, 1MB by default; after allocation the page is
divided into chunks of a fixed byte size.
Chunk: the memory space that caches a key-value record. Memcached chooses which chunk to
store the data in according to the data's size: if the available chunks are 128 bytes and
64 bytes, a 100-byte value is stored in a 128-byte chunk, wasting a little space.
The maximum size of a chunk is the size of a page, i.e. a page containing only one chunk.
Slab Class: slabs are grouped by chunk size into different slab classes.

If 100 bytes must be stored, Memcached picks the slab class with the smallest chunk that
fits, for example one with 120-byte chunks.
The size difference between slab classes is controlled with the Growth Factor parameter,
default 1.25.
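
The chunk-size progression is easy to compute; a small sketch, assuming a 96-byte base
chunk (the real base depends on item overhead and alignment, which memcached -vv prints
at startup):

# Approximate slab-class chunk sizes for the default growth factor of 1.25.
base_chunk = 96   # assumed starting chunk size in bytes
factor = 1.25

size = base_chunk
for slab_class in range(1, 8):
    print(f"slab class {slab_class}: chunk size ~{int(size)} bytes")
    size *= factor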

Lazy expiration: memcached does not actively monitor whether data has expired; it only
checks on fetch. Expired data has its validity marked as 0 and is not cleared; the slot
can later be overwritten by other data.

When memory is insufficient, memcached uses an LRU (Least Recently Used) mechanism to find
reusable space to allocate for new records.
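
Lazy expiration can be observed from a client; a sketch using the python-memcached package
that also appears later in this article (the server address is an assumption):

import time
import memcache  # pip install python-memcached

mc = memcache.Client(['127.0.0.1:11211'])
mc.set('mykey', 'test', time=2)   # expire after 2 seconds
print(mc.get('mykey'))            # 'test'
time.sleep(3)
print(mc.get('mykey'))            # None: the expiry is only detected at fetch time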

5.2.2 Memcached Cluster

Memcached clusters are called client-side distributed clusters: the Memcached nodes do not
communicate with each other. The client connects to the Memcached servers, organizes the
nodes itself, and decides which node each piece of data is stored on.
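
A minimal sketch of that client-side placement decision, using plain modulo hashing (real
clients such as libmemcached usually use consistent hashing so that adding a node remaps
fewer keys):

import hashlib

nodes = ['192.168.142.152:11211', '192.168.142.153:11211']

def node_for(key: str) -> str:
    # The client alone decides where a key lives; the servers never talk to each other.
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

print(node_for('session:abc'))
print(node_for('session:xyz'))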

Install memcached

# yum install memcached
# rpm -ql memcached
/etc/sysconfig/memcached
/usr/bin/memcached
/usr/bin/memcached-tool
/usr/lib/systemd/system/memcached.service
/usr/share/doc/memcached-1.4.15
/usr/share/doc/memcached-1.4.15/AUTHORS
/usr/share/doc/memcached-1.4.15/CONTRIBUTORS
/usr/share/doc/memcached-1.4.15/COPYING
/usr/share/doc/memcached-1.4.15/ChangeLog
/usr/share/doc/memcached-1.4.15/NEWS
/usr/share/doc/memcached-1.4.15/README.md
/usr/share/doc/memcached-1.4.15/protocol.txt
/usr/share/doc/memcached-1.4.15/readme.txt
/usr/share/doc/memcached-1.4.15/threads.txt
/usr/share/man/man1/memcached-tool.1.gz
/usr/share/man/man1/memcached.1.gz
# cat /usr/lib/systemd/system/memcached.service
[Service]
Type=simple
EnvironmentFile=-/etc/sysconfig/memcached
ExecStart=/usr/bin/memcached -u $USER -p $PORT -m $CACHESIZE -c $MAXCONN $OPTIONS
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""

# Run in the foreground with verbose output
# memcached -u memcached -p 11211 -f 1.25 -vv
# systemctl start memcached

To modify memcached's runtime parameters, edit the /etc/sysconfig/memcached file using the
following options:

  • -u username: the user memcached runs as; must be an ordinary user
  • -p: the port to bind, default 11211
  • -m num: maximum memory in MB, default 64MB
  • -c num: maximum number of connections, default 1024
  • -d: run as a daemon
  • -f: the Growth Factor, default 1.25
  • -v: verbose output; -vv prints even more detail
  • -M: return an error when memory is exhausted instead of evicting via LRU
  • -U: the UDP listening port; 0 disables UDP

Client connectors for the various languages that talk to memcached also need to be
installed; libmemcached provides the C library and command-line tools.

# yum list all | grep memcached
memcached.x86_64                        1.4.15-10.el7_3.1              @base
libmemcached.i686                       1.0.16-5.el7                   base
libmemcached.x86_64                     1.0.16-5.el7                   base
libmemcached-devel.i686                 1.0.16-5.el7                   base
libmemcached-devel.x86_64               1.0.16-5.el7                   base
memcached-devel.i686                    1.4.15-10.el7_3.1              base
memcached-devel.x86_64                  1.4.15-10.el7_3.1              base
opensips-memcached.x86_64               1.10.5-4.el7                   epel
php-ZendFramework-Cache-Backend-Libmemcached.noarch
php-pecl-memcached.x86_64               2.2.0-1.el7                    epel
python-memcached.noarch                 1.48-4.el7                     base
uwsgi-router-memcached.x86_64           2.0.17.1-2.el7                 epel

5.2.3 Protocol for accessing memcached

The protocols supported by memcached are documented in
/usr/share/doc/memcached-1.4.15/protocol.txt; telnet can be used to exercise them:

# yum install telnet
# telnet localhost 11211
stats
add mykey 1 60 4
test
STORED
get mykey
VALUE mykey 1 4
test
END
set mykey 1 60 5
test1
STORED
get mykey
VALUE mykey 1 5
test1
END

add KEY FLAGS exptime bytes: this command adds the key KEY to memcached, where FLAGS is
the flag value, exptime the expiration time in seconds, and bytes the number of bytes of
the stored data.

5.3 Using MSM to realize the session shared server of tomcat cluster

5.3.1 MSM


MSM (memcached session manager) is a program that keeps Tomcat sessions in memcached or
redis, enabling high availability. The project is currently hosted on GitHub and supports
Tomcat versions 6.x, 7.x, 8.x, and 9.x.

Tomcat session-management jar packages; different Tomcat versions use different packages:

  • memcached-session-manager-2.3.2.jar
  • memcached-session-manager-tc8-2.3.2.jar

Serialization and deserialization of Session data

  • Officially recommended: kryo
  • Placed under WEB-INF/lib/ in the webapp

Driver classes

  • memcached(spymemcached.jar)
  • Redis(jedis.jar)

5.3.2 MSM installation in Tomcat

For MSM installation, see the official reference.

Put spymemcached.jar, the memcached-session-manager jars, and the kryo-related jar files
into Tomcat's lib directory, $CATALINA_HOME/lib/, generally /usr/local/tomcat/lib:

asm-5.2.jar
kryo-3.0.3.jar
kryo-serializers-0.45.jar
memcached-session-manager-2.3.2.jar
memcached-session-manager-tc8-2.3.2.jar
minlog-1.3.1.jar
msm-kryo-serializer-2.3.2.jar
objenesis-2.6.jar
reflectasm-1.11.9.jar
spymemcached-2.12.3.jar

5.3.3 session service in sticky mode

Principle

When a request ends, Tomcat's session is sent to memcached as a backup: the Tomcat session
is the primary session and the memcached session the backup, so using memcached amounts to
keeping a backup copy of every session.

When looking up a Session, Tomcat uses the one in its own memory first. If Tomcat sees via
jvmRoute that a Session is not its own, it fetches the Session from memcached, updates its
local copy, and updates memcached again after the request completes.

environment

<t1: Tomcat server 1>   <t2: Tomcat server 2>
        .        \ /       .
        .         X        .
        .        / \       .
<m1: memcached node 1>  <m2: memcached node 2>

t1 and m1 are deployed on one host, t2 and m2 on the other.

configuration

Put the configuration $CATALINA_HOME/conf/context.xmlin
Special attention, t1 configuration is failoverNodes="nm1", t2 configuration is
failoverNodes="m2"

The following is the configuration of sticky

<Context>
  ...
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="m1:192.168.142.152:11211,m2:192.168.142.153:11211"
    failoverNodes="m1"
    requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
    transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

The key setting is memcachedNodes="n1:host1.yourdomain.com:11211,n2:host2.yourdomain.com:11211",
the group of memcached nodes. m1 and m2 are just aliases and can be renamed. failoverNodes
lists the failover nodes: here, m1 is the backup node and m2 the primary storage node. On
the other Tomcat, change m1 to m2, so that its primary node is m1 and its backup node m2.

If the configuration succeeds, then after starting Tomcat you can see the following in
logs/catalina.out:

INFO [t1.suosuoli.cn-startStop-1]
de.javakaffee.web.msm.MemcachedSessionService.startInternal --------
-  finished initialization:
- sticky: true
- operation timeout: 1000
- node ids: [m2]
- failover node ids: [m1]
- storage key prefix: null
- locking mode: null (expiration: 5s)

After the configuration succeeds, visit the web page and note the Session shown. Then run
the following Python program to check whether it is stored in memcached.

import memcache  # pip install python-memcached

mc = memcache.Client(['192.168.142.152:11211', '192.168.142.153:11211'], debug=True)

stats = mc.get_stats()[0]  # (server, stats-dict) tuple for the first node
print(stats)
for k, v in stats[1].items():
    print(k, v)

print('-' * 30)
# list all keys
print(mc.get_stats('items'))  # "stats items" returns entries such as items:5:number 1
print('-' * 30)

# "stats cachedump 5 0": the 5 matches the slab id from the items output above; 0 means all
for x in mc.get_stats('cachedump 5 0'):
    print(x)

Start t1, t2, m1, and m2 in sequence, then observe via http://t1.suosuoli.cn:8080/ and
http://t2.suosuoli.cn:8080/ respectively.

Start the load-balancing scheduler (Nginx) and visit http://t0.suosuoli.cn to see the effect:

On tomcats
192.168.142.153:8080
SessionID = 2A19B1EB6D9649C9FED3E7277FDFD470-n2.Tomcat1
Wed Jun 26 16:32:11 CST 2019
On tomcats
192.168.142.152:8080
SessionID = 2A19B1EB6D9649C9FED3E7277FDFD470-n1.Tomcat2
Wed Jun 26 16:32:36 CST 2019

You can see that the browser is scheduled to different Tomcats, but it always gets the
same SessionID.

Stop t2 and its memcached node to see the effect, then restore them and observe again.

5.3.4 session service in none-sticky mode

Principle

Non-sticky mode is supported since MSM 1.4.0.

In non-sticky mode the Tomcat session is only a transit session: if m1 is the primary
session store, m2 is the backup. A newly generated Session is sent to the primary and
backup memcached nodes, and the local Session is cleared.

If m1 goes offline, m2 becomes the primary. When m1 comes back online, m2 remains the
primary session storage node.

memcached configuration

Put the configuration in $CATALINA_HOME/conf/context.xml:

<Context>
  ...
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="m1:192.168.142.152:11211,m2:192.168.142.153:11211"
    sticky="false"
    sessionBackupAsync="false"
    lockingMode="uriPattern:/path1|/path2"
    requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
    transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

redis configuration

Download jedis.jar and put it in $CATALINA_HOME/lib/, which for this installation is
/usr/local/tomcat/lib.

# yum install redis
# vim /etc/redis.conf
bind 0.0.0.0
# systemctl start redis

The following configuration goes in $CATALINA_HOME/conf/context.xml:

<Context>
  ...
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="redis://192.168.142.152:6379"
    sticky="false"
    sessionBackupAsync="false"
    lockingMode="uriPattern:/path1|/path2"
    requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
    transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

Summary
Through several sets of experiments, session persistence was implemented with different
techniques:

  1. Session binding, based on source IP or the session cookie. Deployment is simple;
    cookie-based session stickiness in particular is fine-grained and has little impact on
    load balancing. But once a backend server fails, the sessions on it are lost.
  2. Session replication cluster, based on Tomcat, shares and synchronizes all sessions
    across multiple servers. Any backend server can fail without affecting the business,
    because all sessions are also stored on the other servers. But it uses multicast for
    heartbeats and TCP unicast for replication; with too many nodes this replication
    mechanism is not a good solution, and under heavy concurrency the memory occupied by
    all sessions on each single machine is enormous and may even be exhausted.
  3. Session server: all sessions are stored in a shared storage space, with redundant
    nodes to keep the session store highly available, and little session memory is used on
    the business servers. It is the better solution for session persistence.

Each method has its applicable scenarios; in a production environment, choose a reasonable
option based on actual needs.

All of the methods above keep sessions in memory. A database or file system can also be
used to store session data persistently, so that sessions survive a server restart.
However, session data is time-sensitive, and whether that is necessary depends on the
situation.


Origin blog.csdn.net/wang11876/article/details/132597063