Foreword
For today's Java developers, MyBatis is undoubtedly an excellent persistence framework. It uses simple XML or annotations to configure statements and mappings, connecting interfaces and Java POJOs (Plain Old Java Objects) to records in the database, which makes it a powerful tool for database access. In this section, I want to share the caching mechanism of MyBatis, which provides a first-level and a second-level cache to improve query performance.
First, the first-level cache
MyBatis enables the first-level cache by default. The first-level cache is a SqlSession-level cache: when the same SqlSession calls the same method of the same Mapper multiple times (that is, executes the same SQL statement with the same parameters), only one database query is performed. After the first execution, the result queried from the database is written to the cache, and subsequent calls fetch the data directly from the cache instead of querying again. When the SqlSession performs a write operation (insert, update, or delete) or commits, the cache is emptied.
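This behavior can be illustrated with a simplified model (plain Java, not MyBatis internals; all names here are hypothetical): a per-session map keyed by the statement and its parameters, emptied by any write operation.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Simplified illustration of first-level-cache behavior: one private map
 * per "session", keyed by statement name plus parameter value.
 */
public class FirstLevelCacheSketch {
    private final Map<String, Object> localCache = new HashMap<>();
    private int dbQueries = 0; // counts simulated database round trips

    public Object selectStuById(int id) {
        // A repeated call with the same key is served from the cache
        return localCache.computeIfAbsent("selectStuById:" + id,
                k -> queryDatabase(id));
    }

    private Object queryDatabase(int id) {
        dbQueries++; // in real MyBatis this would be a JDBC query
        return "Student{id=" + id + "}";
    }

    public void update() {
        // Any insert/update/delete clears the whole session cache
        localCache.clear();
    }

    public int getDbQueries() { return dbQueries; }

    public static void main(String[] args) {
        FirstLevelCacheSketch session = new FirstLevelCacheSketch();
        session.selectStuById(1);
        session.selectStuById(1);                   // cache hit, no extra query
        System.out.println(session.getDbQueries()); // 1
        session.update();                           // write empties the cache
        session.selectStuById(1);                   // database queried again
        System.out.println(session.getDbQueries()); // 2
    }
}
```
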
Second, the second-level cache
MyBatis does not enable the second-level cache by default. To turn it on, add the following to the MyBatis configuration file:

<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>
Next, enable the second-level cache for the current namespace in the Mapper.xml mapping file:

<cache eviction="LRU" flushInterval="60000" size="512" readOnly="true"/>

Here eviction is the eviction policy (LRU, FIFO, SOFT, or WEAK), flushInterval is the flush interval in milliseconds, size is the maximum number of cached references, and readOnly="true" means cached objects are returned directly rather than as copies.
The second-level cache is a mapper (namespace)-level cache that multiple SqlSessions can share: when different SqlSessions execute the same SQL statement under the same namespace, the first query goes to the database and its result is cached, and the second query reads the data from the cache without touching the database.
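Conceptually, the second-level cache behaves like one bounded, LRU-evicted map shared by all sessions of a mapper namespace. A minimal sketch under that simplification (plain Java, not MyBatis source; names are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Simplified illustration of a namespace-level second-level cache:
 * shared by all "sessions", LRU-evicted, bounded in size.
 */
public class SecondLevelCacheSketch {
    private static final int SIZE = 512;
    // One shared, access-ordered map per mapper namespace with LRU eviction
    private static final Map<String, Object> namespaceCache =
            new LinkedHashMap<String, Object>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                    return size() > SIZE; // evict least recently used entry
                }
            };
    private static int dbQueries = 0;

    /** Each call stands in for a query issued by a different SqlSession. */
    public static Object selectStuById(int id) {
        return namespaceCache.computeIfAbsent(
                "com.stone.dao.StudentMapper.selectStuById:" + id, k -> {
                    dbQueries++; // simulated database round trip
                    return "Student{id=" + id + "}";
                });
    }

    public static int getDbQueries() { return dbQueries; }

    public static void main(String[] args) {
        selectStuById(1);                 // "session 1": queries the database
        selectStuById(1);                 // "session 2": served from shared cache
        System.out.println(getDbQueries()); // 1
    }
}
```
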
Third, first-level cache test
1. Operating environment
JDK 1.8
MyBatis 3.1.1
MySQL 5.7
Maven
IDE: IntelliJ IDEA
2. Project structure
3. Steps
(1) Maven dependencies
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.9</version>
</dependency>
<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
    <version>3.1.1</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
(2) MySQL data table
mysql> desc student;
+-------+--------------+------+-----+---------+----------------+
| Field | Type         | Null | Key | Default | Extra          |
+-------+--------------+------+-----+---------+----------------+
| id    | int(11)      | NO   | PRI | NULL    | auto_increment |
| name  | varchar(255) | YES  |     | NULL    |                |
| age   | int(11)      | YES  |     | NULL    |                |
| sex   | varchar(255) | YES  |     | NULL    |                |
+-------+--------------+------+-----+---------+----------------+
4 rows in set
(3) Entity class
public class Student {
    private Integer id;
    private String name;
    private Integer age;
    private String sex;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
    public String getSex() { return sex; }
    public void setSex(String sex) { this.sex = sex; }

    @Override
    public String toString() {
        return "Student{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", age=" + age +
                ", sex='" + sex + '\'' +
                '}';
    }
}
(4) Create the DAO-layer interface
public interface StudentMapper {
    Student selectStuById(Integer id) throws Exception;
}
(5) Create mapper mapping file
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.stone.dao.StudentMapper">
    <select id="selectStuById" parameterType="int" resultType="com.stone.model.Student">
        select * from student where id=#{id}
    </select>
</mapper>
(6) Create mybatis-config.xml configuration file
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <typeAliases>
        <package name="com.stone.model" />
    </typeAliases>
    <environments default="development">
        <environment id="development">
            <transactionManager type="JDBC" />
            <dataSource type="POOLED">
                <property name="driver" value="com.mysql.jdbc.Driver" />
                <property name="url" value="jdbc:mysql://localhost:3306/bjsxt" />
                <property name="username" value="root" />
                <property name="password" value="xiaokai960201" />
            </dataSource>
        </environment>
    </environments>
    <mappers>
        <mapper resource="Mapper/StudentMapper.xml" />
    </mappers>
</configuration>
(7) log4j log configuration
# Set the log level and the output destinations of the log messages
log4j.rootLogger=DEBUG, A1, R
# A1 outputs to the console
log4j.appender.A1=org.apache.log4j.ConsoleAppender
# PatternLayout allows the layout to be specified flexibly
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
# Set the output format
log4j.appender.A1.layout.ConversionPattern=%-d{yyyy-MM-dd HH\:mm\:ss} [%c]-[%p] %m%n
# R outputs to a file; a new file is started when the file reaches the specified size
log4j.appender.R=org.apache.log4j.RollingFileAppender
# Set the output file path
log4j.appender.R.File=D:\\Test_Log4j.log
# Roll over to a new file when the current file reaches 100 KB;
# MaxBackupIndex=1 keeps at most one backup file and deletes older ones
log4j.appender.R.MaxFileSize=100KB
log4j.appender.R.MaxBackupIndex=1
# Layout settings for R, same as above
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
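The test classes below call SqlSessionFac.getSqlSession(), but that helper is never shown in the original. A minimal sketch of what it presumably looks like, assuming mybatis-config.xml is on the classpath; it needs the MyBatis jar and a reachable database, so no standalone test is given:

```java
import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

/**
 * Hypothetical reconstruction of the SqlSessionFac helper: builds one
 * SqlSessionFactory from mybatis-config.xml and hands out sessions.
 */
public class SqlSessionFac {
    private static SqlSessionFactory factory;

    static {
        try (InputStream in = Resources.getResourceAsStream("mybatis-config.xml")) {
            factory = new SqlSessionFactoryBuilder().build(in);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static SqlSession getSqlSession() {
        return factory.openSession();
    }
}
```
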
(8) OneCacheTest test class (same SqlSession)
public class OneCacheTest {
    public static void main(String[] args) throws Exception {
        OneCacheTest oneCacheTest = new OneCacheTest();
        oneCacheTest.test1();
    }

    public void test1() throws Exception {
        SqlSession session = SqlSessionFac.getSqlSession();
        StudentMapper studentMapper = session.getMapper(StudentMapper.class);
        Student student1 = studentMapper.selectStuById(1);
        System.out.println(student1.toString());
        Student student2 = studentMapper.selectStuById(1);
        System.out.println(student2.toString());
        Student student3 = studentMapper.selectStuById(1);
        System.out.println(student3.toString());
    }
}
result:
2018-04-28 21:11:49 [org.apache.ibatis.transaction.jdbc.JdbcTransaction]-[DEBUG] Openning JDBC Connection
2018-04-28 21:11:51 [org.apache.ibatis.datasource.pooled.PooledDataSource]-[DEBUG] Created connection 1268650975.
2018-04-28 21:11:51 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ooo Using Connection [com.mysql.jdbc.JDBC4Connection@4b9e13df]
2018-04-28 21:11:51 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Preparing: select * from student where id=?
2018-04-28 21:11:51 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Parameters: 1(Integer)
Student{id=1, name='asd', age=11, sex='男'}
Student{id=1, name='asd', age=11, sex='男'}
Student{id=1, name='asd', age=11, sex='男'}
(9) OneCacheTest test class (different SqlSessions)
public class OneCacheTest {
    public static void main(String[] args) throws Exception {
        OneCacheTest oneCacheTest = new OneCacheTest();
        oneCacheTest.test1();
    }

    public void test1() throws Exception {
        SqlSession session1 = SqlSessionFac.getSqlSession();
        StudentMapper studentMapper1 = session1.getMapper(StudentMapper.class);
        Student student1 = studentMapper1.selectStuById(1);
        System.out.println(student1.toString());
        session1.close();

        SqlSession session2 = SqlSessionFac.getSqlSession();
        StudentMapper studentMapper2 = session2.getMapper(StudentMapper.class);
        Student student2 = studentMapper2.selectStuById(1);
        System.out.println(student2.toString());
        session2.close();
    }
}
result:
2018-04-28 21:20:13 [org.apache.ibatis.transaction.jdbc.JdbcTransaction]-[DEBUG] Openning JDBC Connection
2018-04-28 21:20:13 [org.apache.ibatis.datasource.pooled.PooledDataSource]-[DEBUG] Created connection 1268650975.
2018-04-28 21:20:13 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ooo Using Connection [com.mysql.jdbc.JDBC4Connection@4b9e13df]
2018-04-28 21:20:13 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Preparing: select * from student where id=?
2018-04-28 21:20:13 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Parameters: 1(Integer)
Student{id=1, name='asd', age=11, sex='男'}
2018-04-28 21:20:14 [org.apache.ibatis.transaction.jdbc.JdbcTransaction]-[DEBUG] Openning JDBC Connection
2018-04-28 21:20:14 [org.apache.ibatis.datasource.pooled.PooledDataSource]-[DEBUG] Created connection 1620303253.
2018-04-28 21:20:14 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ooo Using Connection [com.mysql.jdbc.JDBC4Connection@6093dd95]
2018-04-28 21:20:14 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Preparing: select * from student where id=?
2018-04-28 21:20:14 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Parameters: 1(Integer)
Student{id=1, name='asd', age=11, sex='男'}
(10) Summary
As the results above show, when the same SqlSession calls the same method of the same Mapper multiple times (that is, executes the same SQL statement), only one database query is performed: after the first execution the result is written to the cache, and subsequent calls fetch the data directly from the cache without a second database query. Different SqlSessions, however, each perform their own database query, which confirms that the first-level cache is scoped to the SqlSession.
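The contrast described above can be reproduced in miniature with the simplified per-session-map model (illustrative Java, not MyBatis internals; names are hypothetical): two sessions never share entries, so each pays its own database round trip.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative model only: each "session" owns a private result cache. */
public class SessionScopeDemo {
    static int dbQueries = 0; // simulated database round trips, all sessions

    static class Session {
        private final Map<String, Object> localCache = new HashMap<>();

        Object selectStuById(int id) {
            return localCache.computeIfAbsent("selectStuById:" + id, k -> {
                dbQueries++; // cache miss: go to the database
                return "Student{id=" + id + "}";
            });
        }
    }

    public static void main(String[] args) {
        Session session1 = new Session();
        session1.selectStuById(1);       // first query: database
        session1.selectStuById(1);       // same session: cache hit
        Session session2 = new Session();
        session2.selectStuById(1);       // new session: empty cache, database again
        System.out.println(dbQueries);   // 2
    }
}
```
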
Fourth, second-level cache test
(1) Enable the second-level cache in the settings of mybatis-config.xml
<settings>
    <!-- Enable the second-level cache -->
    <setting name="cacheEnabled" value="true"/>
</settings>
(2) Enable the second-level cache under the current mapper's namespace in the Mapper mapping file
<cache eviction="LRU" flushInterval="60000" size="512" readOnly="true"/>
(3) Create the TwoCacheTest test class
public class TwoCacheTest {
    public static void main(String[] args) throws Exception {
        TwoCacheTest twoCacheTest = new TwoCacheTest();
        twoCacheTest.test1();
    }

    public void test1() throws Exception {
        // Get a SqlSession object
        SqlSession session1 = SqlSessionFac.getSqlSession();
        StudentMapper studentMapper1 = session1.getMapper(StudentMapper.class);
        Student student1 = studentMapper1.selectStuById(1);
        System.out.println(student1.toString());
        // Close the session (this writes its results to the second-level cache)
        session1.close();

        // Get a SqlSession object again
        SqlSession session2 = SqlSessionFac.getSqlSession();
        StudentMapper studentMapper2 = session2.getMapper(StudentMapper.class);
        Student student2 = studentMapper2.selectStuById(1);
        System.out.println(student2.toString());
        session2.close();
    }
}
result:
2018-04-28 22:15:38 [org.apache.ibatis.transaction.jdbc.JdbcTransaction]-[DEBUG] Openning JDBC Connection
2018-04-28 22:15:38 [org.apache.ibatis.datasource.pooled.PooledDataSource]-[DEBUG] Created connection 94345706.
2018-04-28 22:15:38 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ooo Using Connection [com.mysql.jdbc.JDBC4Connection@59f99ea]
2018-04-28 22:15:38 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Preparing: select * from student where id=?
2018-04-28 22:15:38 [com.stone.dao.StudentMapper.selectStuById]-[DEBUG] ==> Parameters: 1(Integer)
Student{id=1, name='asd', age=11, sex='男'}
2018-04-28 22:15:38 [org.apache.ibatis.transaction.jdbc.JdbcTransaction]-[DEBUG] Resetting autocommit to true on JDBC Connection [com.mysql.jdbc.JDBC4Connection@59f99ea]
2018-04-28 22:15:38 [org.apache.ibatis.transaction.jdbc.JdbcTransaction]-[DEBUG] Closing JDBC Connection [com.mysql.jdbc.JDBC4Connection@59f99ea]
2018-04-28 22:15:38 [org.apache.ibatis.datasource.pooled.PooledDataSource]-[DEBUG] Returned connection 94345706 to pool.
2018-04-28 22:15:38 [org.apache.ibatis.cache.decorators.LoggingCache]-[DEBUG] Cache Hit Ratio [com.stone.dao.StudentMapper]: 0.5
Student{id=1, name='asd', age=11, sex='男'}
(4) Summary
As the results show, once the second-level cache is enabled, different SqlSessions that execute the same SQL with the same parameter share one cache: the first query hits the database, the result is written to the second-level cache when session1 closes, and the second query is served straight from the cache (note the "Cache Hit Ratio" line in the log) without touching the database.