Mybatis core knowledge review

Table of Contents

One. Mybatis architecture principle

1. Mybatis getting started: the website below summarizes the basics in detail and is good for a quick review

2. Mybatis architecture principle

2.1 Architecture design

2.2 Main components and their relationships

2.3 Overall process

Two. Mybatis cache

(1) First-level cache principle and source code analysis

First-level cache concept

First-level cache source code analysis

(2) Second-level cache

Basics:

Source code analysis:

Three. Mybatis lazy loading

(1) Lazy loading principle

Four. Mybatis plug-in

Introduction to the plug-in

Mybatis plugin introduction

Mybatis plugin principle

Plug-in interface

Custom plugin

Five. General mapper

What is a general mapper

How to use it




One. Mybatis architecture principle

1. Mybatis getting started: the website below summarizes the basics in detail and is good for a quick review

http://c.biancheng.net/view/4309.html

2. Mybatis architecture principle

2.1 Architecture design

We divide the functional architecture of Mybatis into three layers:

(1) API interface layer: the interface APIs provided for external use; developers manipulate the database through these local APIs. As soon as the interface layer receives a call request, it invokes the data processing layer to complete the specific data processing. There are two ways to interact with MyBatis: a. use the traditional API provided by MyBatis; b. use the Mapper proxy interface.

(2) Data processing layer: responsible for the concrete SQL lookup, SQL parsing, SQL execution and mapping of execution results. Its main purpose is to complete one database operation per call request.

(3) Basic support layer: responsible for the most fundamental supporting functions, including connection management, transaction management, configuration loading and cache handling. These are common concerns, extracted as the most basic components to provide support for the upper data processing layer.

2.2 Main components and their relationships

Component and description:

SqlSession: the main top-level API of MyBatis. It represents a session with the database and provides the necessary insert, delete, update and query functions.

Executor: the MyBatis executor, the scheduling core of MyBatis, responsible for generating SQL statements and maintaining the query cache.

StatementHandler: encapsulates the JDBC Statement operations, such as setting parameters and converting the Statement result set into a List collection.

ParameterHandler: responsible for converting the parameters passed by the user into the parameters required by the JDBC Statement.

ResultSetHandler: responsible for converting the ResultSet object returned by JDBC into a List collection.

TypeHandler: responsible for mapping and converting between Java data types and JDBC data types.

MappedStatement: maintains the encapsulation of one <select|update|insert|delete> node.

SqlSource: responsible for dynamically generating SQL according to the parameterObject passed by the user and encapsulating the SQL information into a BoundSql.

BoundSql: represents the dynamically generated SQL statement and the corresponding parameter information.

Diagram:

2.3 Overall process

  1. Load the configuration and initialize. Trigger: the framework starts. The configuration comes from two places: configuration files (the main configuration file conf.xml and the mapper files *.xml) and annotations in Java code. The content of the main configuration file is parsed and encapsulated into a Configuration object, and the configuration of each SQL statement is loaded into a MappedStatement object and stored in memory.
  2. Receive the call request. Trigger: the API provided by Mybatis is called. Input: the SQL id and the parameter object. Processing: pass the request down to the request processing layer.
  3. Process the operation request. Trigger: the API interface layer passes the request down. Input: the SQL id and the parameter object. Processing: (A) find the corresponding MappedStatement object by the SQL id; (B) parse the MappedStatement against the parameter object to obtain the final SQL to execute and the execution parameters; (C) obtain a database connection, execute the final SQL with the execution parameters, and obtain the execution result; (D) transform the execution result according to the result mapping configured in the MappedStatement to get the final result; (E) release the connection resources.
  4. Return the processing result. Return the final result to the caller.
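The four steps above correspond to a very small calling sketch (a minimal example; the configuration file name and statement id are illustrative, not from the original text):

import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;
import com.lagou.pojo.User;

public class QuickStart {
    public static void main(String[] args) throws Exception {
        // step 1: load sqlMapConfig.xml and the mapper files, build the Configuration
        InputStream in = Resources.getResourceAsStream("sqlMapConfig.xml");
        SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(in);

        try (SqlSession sqlSession = factory.openSession()) {
            // steps 2-4: pass the SQL id and the parameter; MyBatis finds the MappedStatement,
            // executes the SQL and maps the ResultSet back to a POJO
            User user = sqlSession.selectOne("com.lagou.mapper.IUserMapper.findUserById", 1);
            System.out.println(user);
        }
    }
}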

Two. Mybatis cache

(1) First-level cache principle and source code analysis

First-level cache concept

For example, a user table User contains id and name fields:

1. The first time a query for the user with id 1 is issued, the cache is checked first for user information with id 1. If it is not there, the user is queried from the database, and the result is stored in the first-level cache.

2. If the SqlSession performs a commit in between (after an insert, update or delete), the first-level cache in that SqlSession is cleared. The purpose is to keep the latest data in the cache and avoid dirty reads.

3. The second time a query for the user with id 1 is issued, the cache is checked first; since the data is in the cache, the user information is returned directly from the cache.
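A minimal usage sketch of the three steps above (mapper and method names are illustrative, and sqlSessionFactory is assumed to be built as elsewhere in this article):

SqlSession sqlSession = sqlSessionFactory.openSession();
IUserMapper mapper = sqlSession.getMapper(IUserMapper.class);

User first = mapper.findUserById(1);   // step 1: cache miss, the database is queried, the result goes into the first-level cache
User second = mapper.findUserById(1);  // step 3: cache hit, no SQL is issued
System.out.println(first == second);   // true: the very same cached object is returned

mapper.updateUser(first);
sqlSession.commit();                   // step 2: insert/update/delete + commit clears this session's first-level cache

User third = mapper.findUserById(1);   // queries the database again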

First-level cache source code analysis

The first-level cache cannot be separated from SqlSession, so the analysis starts there, as shown below:

Among all the methods above, only clearCache() seems related to the cache, so let's start from this method. When analyzing source code, we have to look at what type the object is, and what its parent class and subclasses are; after understanding these relationships you will have a deeper understanding of the class. After a round of analysis, you may get the following flow chart:

Analyzing further, after the flow reaches the clear() method in PerpetualCache, its cache.clear() method is called. So what is this cache? Clicking in, we find that the cache is actually private Map cache = new HashMap(); that is, a Map, so cache.clear() is actually map.clear(). In other words, the cache is a Map object stored locally, and every SqlSession holds a reference to such a Map object. So when was this cache created?

Where is the cache most likely to be created? I think it is Executor. Why? Because Executor is the executor used to run SQL requests, and the method that clears the cache is also executed in Executor, so it is very likely that the cache is created there too. Looking around, there is a createCacheKey method in Executor which looks very much like a cache-creation method. Following it, we find that the createCacheKey method is executed by BaseExecutor. The code is as follows:

CacheKey cacheKey = new CacheKey();
// MappedStatement id: the location of the SQL statement, i.e. package name + mapper name + statement id
cacheKey.update(ms.getId());
// offset, 0 by default
cacheKey.update(rowBounds.getOffset());
// limit, Integer.MAX_VALUE by default
cacheKey.update(rowBounds.getLimit());
// the concrete SQL statement
cacheKey.update(boundSql.getSql());
// update() is then called with each parameter value of the SQL
cacheKey.update(value);
...
if (configuration.getEnvironment() != null) {
    // issue #176
    cacheKey.update(configuration.getEnvironment().getId());
}

The creation of the cache key goes through a series of update methods. The update method is executed by a CacheKey object, and it finally stores these five values in the updateList. Comparing the code above with the illustration below, you should be able to understand what these five values are.

Note the last value: configuration.getEnvironment().getId(). This is the id of the <environment> tag defined in mybatis-config.xml, see below:

<environments default="development">
    <environment id="development">
        <transactionManager type="JDBC"/>
        <dataSource type="POOLED">
            <property name="driver" value="${jdbc.driver}"/>
            <property name="url" value="${jdbc.url}"/>
            <property name="username" value="${jdbc.username}"/>
            <property name="password" value="${jdbc.password}"/>
        </dataSource>
    </environment>
</environments>
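Putting the pieces together, here is a small sketch (values are illustrative, not from the original text) of how two identical queries end up with equal cache keys and therefore share one entry in the local cache:

CacheKey a = new CacheKey();
a.update("com.lagou.mapper.IUserMapper.findUserById"); // MappedStatement id
a.update(0);                                           // RowBounds offset
a.update(Integer.MAX_VALUE);                           // RowBounds limit
a.update("select * from user where id = ?");           // the bound SQL
a.update(1);                                           // the runtime parameter value
a.update("development");                               // the environment id, when configured

CacheKey b = new CacheKey();
b.update("com.lagou.mapper.IUserMapper.findUserById");
b.update(0);
b.update(Integer.MAX_VALUE);
b.update("select * from user where id = ?");
b.update(1);
b.update("development");

System.out.println(a.equals(b)); // true: the second query finds the first one's entry in localCache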

Back to the topic: where is the cache used after it is created? It would never be created out of thin air and not used. Exploring the first-level cache, we find it is mostly used for query operations; after all, the first-level cache is also called the query cache. Let's look at where this cache is used by tracing the query method, as follows:

@Override
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds,
        ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameter);
    // create the cache key
    CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
    return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
}

@SuppressWarnings("unchecked")
@Override
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds,
        ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    ...
    list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
    if (list != null) {
        // this branch is mainly used for handling stored procedures
        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
    } else {
        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
    }
    ...
}

// the queryFromDatabase method
private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds,
        ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    List<E> list;
    localCache.putObject(key, EXECUTION_PLACEHOLDER);
    try {
        list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
    } finally {
        localCache.removeObject(key);
    }
    localCache.putObject(key, list);
    if (ms.getStatementType() == StatementType.CALLABLE) {
        localOutputParameterCache.putObject(key, parameter);
    }
    return list;
}

If the result is not found in the cache, it is queried from the database. In queryFromDatabase the result is written into localCache; the put method of the localCache object finally hands the storage over to a Map.

private Map<Object, Object> cache = new HashMap<Object, Object>();

@Override
public void putObject(Object key, Object value) {
    cache.put(key, value);
}

(2) Second-level cache

The principle of the second-level cache is the same as that of the first-level cache: the first query puts the data into the cache, and the second query fetches it directly from the cache. But the first-level cache is scoped to a SqlSession, while the second-level cache is scoped to the namespace of a mapper file. This means multiple SqlSessions can share the second-level cache area of one mapper, and if two mappers have the same namespace, the data queried by the SQL in those two mappers is stored in the same second-level cache area.
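A minimal usage sketch of this behaviour (assuming cacheEnabled and a <cache/> element are configured as described in the next section; mapper and method names are illustrative):

SqlSession sqlSession1 = sqlSessionFactory.openSession();
User u1 = sqlSession1.getMapper(IUserMapper.class).findUserById(1); // hits the database
sqlSession1.close(); // closing/committing the session flushes its results into the namespace's second-level cache

SqlSession sqlSession2 = sqlSessionFactory.openSession();
User u2 = sqlSession2.getMapper(IUserMapper.class).findUserById(1); // served from the second-level cache, no SQL issued
sqlSession2.close();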

How to use the second-level cache

1. Enable the second-level cache

Unlike the first-level cache, which is enabled by default, the second-level cache needs to be enabled manually.

First, add the following code to the global configuration file sqlMapConfig.xml file

<!-- Enable the second-level cache -->
<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>

Secondly, enable the cache in the UserMapper.xml file, for example:

<!--Enable secondary cache-->

<cache></cache>

We can see that there is just an empty tag in the mapper.xml file. It can actually be configured further: PerpetualCache is the class mybatis uses to implement the cache by default. If we do not specify a type, the default mybatis cache is used; alternatively we can implement the Cache interface to customize the cache.

public class PerpetualCache implements Cache {

    private final String id;

    private Map<Object, Object> cache = new HashMap<Object, Object>();

    public PerpetualCache(String id) {
        this.id = id;
    }
}

We can see that the bottom layer of the second-level cache is still a HashMap structure.
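As noted above, a custom cache only needs to implement the Cache interface and be referenced with <cache type="..."/> in the mapper file. Below is a minimal sketch of such an implementation (illustrative only, mirroring PerpetualCache's Map-backed storage; the class name is hypothetical):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.ibatis.cache.Cache;

public class MyCustomCache implements Cache {

    private final String id; // MyBatis passes the mapper namespace here
    private final Map<Object, Object> store = new HashMap<Object, Object>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public MyCustomCache(String id) { // a constructor taking the String id is required
        this.id = id;
    }

    @Override public String getId() { return id; }
    @Override public void putObject(Object key, Object value) { store.put(key, value); }
    @Override public Object getObject(Object key) { return store.get(key); }
    @Override public Object removeObject(Object key) { return store.remove(key); }
    @Override public void clear() { store.clear(); }
    @Override public int getSize() { return store.size(); }
    @Override public ReadWriteLock getReadWriteLock() { return lock; }
}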

public class User implements Serializable {
    // user id
    private int id;
    // username
    private String username;
    // user gender
    private String sex;
}

After enabling the second-level cache, the POJOs to be cached also need to implement the Serializable interface. The reason is that cached data must be deserialized when it is taken out: the storage media of the second-level cache are diverse, not necessarily only memory but possibly disk, and to read such a cache back the data has to be deserialized. That is why the POJOs cached by mybatis implement the Serializable interface.

2. useCache and flushCache

mybatis also provides the useCache and flushCache configuration items. useCache sets whether to disable the second-level cache for a statement: setting useCache=false on a select statement disables the second-level cache for that statement, i.e. every query issues SQL. The default is true, i.e. the statement uses the second-level cache.

<select id="selectUserByUserId" useCache="false"

resultType="com.lagou.pojo.User" parameterType="int">

select * from user where id=#{id}

</select>

For statements that must return the latest data on every query, set useCache=false so the second-level cache is disabled and the data is fetched directly from the database.

Within the same mapper namespace, if there are insert, update or delete operations, the cache needs to be refreshed; if it is not refreshed, dirty reads occur.

This is controlled by the flushCache attribute on the statement, which defaults to true, i.e. the cache is refreshed; if set to false, the cache is not refreshed. In that case, if the data in the table is modified while the cache is in use, dirty reads occur.

<select id="selectUserByUserId" flushCache="true" useCache="false"

resultType="com.lagou.pojo.User" parameterType="int">

select * from user where id=#{id}

</select>

Generally, the cache needs to be refreshed after a commit. flushCache=true refreshes the cache and avoids dirty reads from the database, so there is usually no need to change it; the default is fine.

  3. Redis integration for the second-level cache

Basics:

Above we introduced the second-level cache that comes with mybatis, but that cache works on a single server and cannot serve as a distributed cache. So what is a distributed cache? Suppose there are two servers, 1 and 2. When a user visits server 1, the query result is cached on server 1. If that user then accesses server 2, the cache just created on server 1 cannot be reached.

As shown below:

In order to solve this problem, it is necessary to find a distributed cache, which is specially used to store cached data, so that the cached data of different servers are stored there, and the cached data is also fetched from it, as shown in the following figure:

As shown in the figure above, between several different servers, we use a third-party caching framework, put the cache in this third-party framework, and then no matter how many servers there are, we can get data from the cache.

Here we introduce the integration of mybatis and redis. As mentioned earlier, mybatis provides a Cache interface; if you want to implement your own caching logic, implement this interface. Mybatis itself provides a default implementation, but that implementation cannot serve as a distributed cache, so we have to provide our own.

Redis works well as a distributed cache, and mybatis provides a redis implementation of the Cache interface in the mybatis-redis package.

Implementation:

  1. pom file

<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-redis</artifactId>
    <version>1.0.0-beta2</version>
</dependency>

2. Configuration file

Mapper.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.lagou.mapper.IUserMapper">

    <cache type="org.mybatis.caches.redis.RedisCache" />

    <select id="findAll" resultType="com.lagou.pojo.User" useCache="true">
        select * from user
    </select>

</mapper>

3. redis.properties

redis.host=localhost
redis.port=6379
redis.connectionTimeout=5000
redis.password=
redis.database=0

4. Test

@Test
public void SecondLevelCache() {
    SqlSession sqlSession1 = sqlSessionFactory.openSession();
    SqlSession sqlSession2 = sqlSessionFactory.openSession();
    SqlSession sqlSession3 = sqlSessionFactory.openSession();

    IUserMapper mapper1 = sqlSession1.getMapper(IUserMapper.class);
    IUserMapper mapper2 = sqlSession2.getMapper(IUserMapper.class);
    IUserMapper mapper3 = sqlSession3.getMapper(IUserMapper.class);

    User user1 = mapper1.findUserById(1);
    sqlSession1.close(); // clears the first-level cache and flushes the result into the second-level cache

    User user = new User();
    user.setId(1);
    user.setUsername("lisi");
    mapper3.updateUser(user);
    sqlSession3.commit();

    User user2 = mapper2.findUserById(1);
    System.out.println(user1 == user2);
}

Source code analysis:

RedisCache is similar to a common custom Mybatis cache implementation: it implements the Cache interface and uses jedis to operate the cache. However, there are some differences in the design details of the project.

public final class RedisCache implements Cache {

    public RedisCache(final String id) {
        if (id == null) {
            throw new IllegalArgumentException("Cache instances require an ID");
        }
        this.id = id;
        RedisConfig redisConfig = RedisConfigurationBuilder.getInstance().parseConfiguration();
        pool = new JedisPool(redisConfig, redisConfig.getHost(), redisConfig.getPort(),
                redisConfig.getConnectionTimeout(), redisConfig.getSoTimeout(),
                redisConfig.getPassword(), redisConfig.getDatabase(),
                redisConfig.getClientName());
    }

RedisCache is created by MyBatis's CacheBuilder when mybatis starts. The creation is very simple: the String-argument constructor RedisCache(String id) is called; in that constructor, RedisConfigurationBuilder is used to create a RedisConfig object, and the RedisConfig is then used to create a JedisPool.

The RedisConfig class extends JedisPoolConfig and adds host, port and other properties. A brief look at the attributes of RedisConfig:

public class RedisConfig extends JedisPoolConfig {

    private String host = Protocol.DEFAULT_HOST;
    private int port = Protocol.DEFAULT_PORT;
    private int connectionTimeout = Protocol.DEFAULT_TIMEOUT;
    private int soTimeout = Protocol.DEFAULT_TIMEOUT;
    private String password;
    private int database = Protocol.DEFAULT_DATABASE;
    private String clientName;

The RedisConfig object is created by RedisConfigurationBuilder. Let's briefly look at the main methods of this class:

public RedisConfig parseConfiguration(ClassLoader classLoader) {
    Properties config = new Properties();
    InputStream input = classLoader.getResourceAsStream(redisPropertiesFilename);
    if (input != null) {
        try {
            config.load(input);
        } catch (IOException e) {
            throw new RuntimeException(
                    "An error occurred while reading classpath property '"
                    + redisPropertiesFilename
                    + "', see nested exceptions", e);
        } finally {
            try {
                input.close();
            } catch (IOException e) {
                // close quietly
            }
        }
    }
    RedisConfig jedisConfig = new RedisConfig();
    setConfigProperties(config, jedisConfig);
    return jedisConfig;
}

The core method is the parseConfiguration method, which reads a redis.properties file from the classpath:

host=localhost
port=6379
connectionTimeout=5000
soTimeout=5000
password=
database=0
clientName=

The content of the configuration file is set into the RedisConfig object, which is returned. Next, RedisCache uses the RedisConfig to create the JedisPool. RedisCache also implements a simple template method for operating Redis:

private Object execute(RedisCallback callback) {
    Jedis jedis = pool.getResource();
    try {
        return callback.doWithRedis(jedis);
    } finally {
        jedis.close();
    }
}

The template interface is RedisCallback, and only one doWithRedis method needs to be implemented in this interface:

public interface RedisCallback {
    Object doWithRedis(Jedis jedis);
}

Next, look at the two most important methods in Cache: putObject and getObject. These two methods show the format in which mybatis-redis stores data:

@Override
public void putObject(final Object key, final Object value) {
    execute(new RedisCallback() {
        @Override
        public Object doWithRedis(Jedis jedis) {
            jedis.hset(id.toString().getBytes(), key.toString().getBytes(),
                    SerializeUtil.serialize(value));
            return null;
        }
    });
}

@Override
public Object getObject(final Object key) {
    return execute(new RedisCallback() {
        @Override
        public Object doWithRedis(Jedis jedis) {
            return SerializeUtil.unserialize(jedis.hget(id.toString().getBytes(),
                    key.toString().getBytes()));
        }
    });
}

It can be clearly seen that mybatis-redis stores data in a hash structure, using the id of the cache as the key of the hash (in mybatis, the id of the cache is the namespace of the mapper). The cached query data of this mapper is stored as hash fields, and the content to be cached is serialized with SerializeUtil and stored directly. SerializeUtil, like other serialization utilities, is responsible for serializing and deserializing objects.
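To see this layout, here is a small sketch (illustrative; it assumes the Jedis client and the com.lagou.mapper.IUserMapper namespace used above) that reads back the hash written by mybatis-redis:

import java.util.Map;
import redis.clients.jedis.Jedis;

public class InspectRedisCache {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // one hash per mapper namespace: the cache id is the hash key, each CacheKey is a field
            byte[] hashKey = "com.lagou.mapper.IUserMapper".getBytes();
            Map<byte[], byte[]> entries = jedis.hgetAll(hashKey);
            System.out.println("cached entries for this namespace: " + entries.size());
        }
    }
}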

Three. Mybatis lazy loading

(1) Lazy loading principle

Lazy loading: load data when it is needed, do not load it when it is not needed. Lazy loading is also called deferred loading.

1. Mybatis only supports lazy loading of association and collection objects: association refers to one-to-one mapping, collection to one-to-many queries. In the Mybatis configuration file you can configure whether to enable lazy loading with lazyLoadingEnabled=true|false.

2. The principle is to use CGLIB to create a proxy object of the target object. When a target method is called, the interceptor method is entered. For example, when a.getB().getName() is called, the interceptor's invoke() method finds that a.getB() is null; it then separately issues the SQL saved in advance to query the associated B object, queries B out, calls a.setB(b), so the b property of a now has a value, and then the a.getB().getName() call completes. That is the basic principle of lazy loading. And not only Mybatis: almost all frameworks, including Hibernate, implement lazy loading on the same principle.
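A minimal usage sketch of the behaviour described above (names are illustrative; it assumes lazyLoadingEnabled=true in the Mybatis configuration and an association on A mapped with a lazy select for B):

SqlSession sqlSession = sqlSessionFactory.openSession();
AMapper aMapper = sqlSession.getMapper(AMapper.class);

A a = aMapper.findById(1);          // only the SQL for A is issued; a is a CGLIB proxy of A
System.out.println(a.getName());    // reading plain columns still issues no extra SQL

String name = a.getB().getName();   // first access to the association: the proxy's invoke() sees
                                    // that b is null, runs the prepared "select B" statement,
                                    // calls a.setB(b), then a.getB().getName() completes normally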

Four. Mybatis plug-in

  1. Plug-in principle

Introduction to the plug-in

Under normal circumstances, open-source frameworks provide plug-ins or other extension points so developers can extend them themselves. The benefits are obvious: first, it increases the flexibility of the framework; second, developers can extend the framework according to actual needs so that it works better. Taking MyBatis as an example, we can implement paging, table sharding, monitoring and other functions based on the MyBatis plug-in mechanism. Since plug-ins have nothing to do with the business, the business cannot perceive their existence; plug-ins can therefore be plugged in transparently and enhance functionality invisibly.

Mybatis plugin introduction

Mybatis is an excellent and widely used ORM open-source framework. It is highly flexible and provides a simple and easy-to-use plug-in extension mechanism on its four major components (Executor, StatementHandler, ParameterHandler and ResultSetHandler). Mybatis's operations on the persistence layer are all based on these four core objects, and MyBatis supports using plug-ins to intercept them; for mybatis, plug-ins are interceptors used to enhance the functions of the core objects.

The enhancement is essentially implemented with dynamic proxies. In other words, the four major objects in MyBatis are all proxy objects.

The methods that MyBatis allows to intercept are as follows:

Executor (update, query, commit, rollback and other methods);

SQL syntax builder StatementHandler (prepare, parameterize, batch, update, query and other methods);

Parameter handler ParameterHandler (getParameterObject, setParameters methods);

Result set handler ResultSetHandler (handleResultSets, handleOutputParameters and other methods).

Mybatis plugin principle

When the four major objects are created:

1. Each created object is not returned directly, but goes through interceptorChain.pluginAll(parameterHandler);

2. All Interceptors (the interface a plug-in must implement) are obtained; interceptor.plugin(target) is called; the wrapped target object is returned.

3. This is the plug-in mechanism: we can use plug-ins to create proxy objects for the target objects, in AOP (aspect-oriented) style. Our plug-ins can create proxies for the four major objects, and the proxy objects can intercept every execution of the four major objects.

Intercept

How does a plug-in intercept and attach extra functionality? Take ParameterHandler as an example.

public ParameterHandler newParameterHandler(MappedStatement mappedStatement,
        Object object, BoundSql sql, InterceptorChain interceptorChain) {
    ParameterHandler parameterHandler =
            mappedStatement.getLang().createParameterHandler(mappedStatement, object, sql);
    parameterHandler = (ParameterHandler) interceptorChain.pluginAll(parameterHandler);
    return parameterHandler;
}

public Object pluginAll(Object target) {
    for (Interceptor interceptor : interceptors) {
        target = interceptor.plugin(target);
    }
    return target;
}

interceptorChain holds all the interceptors; it is created when mybatis is initialized. The interceptors in the chain are called in turn to intercept or enhance the target. The target in interceptor.plugin(target) can be understood as one of the four major objects of mybatis; the returned target is the object after being proxied layer by layer.

If we want to intercept the query method of Executor, we can define the plugin like this:

@Intercepts({
    @Signature(
        type = Executor.class,
        method = "query",
        args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}
    )
})
public class ExamplePlugin implements Interceptor {
    // logic omitted
}

In addition, we also need to register the plug-in in sqlMapConfig.xml:

<plugins>
    <plugin interceptor="com.lagou.plugin.ExamplePlugin">
    </plugin>
</plugins>

In this way, MyBatis loads the plug-in at startup and saves the plug-in instance in the relevant object (InterceptorChain, the interceptor chain). After this preparation, MyBatis is in a ready state. When we execute SQL, a SqlSession is first created through DefaultSqlSessionFactory; an Executor instance is created while creating the SqlSession, and after it is created MyBatis generates a proxy class for it through the JDK dynamic proxy. In this way, the plug-in logic can be executed before the Executor's methods are called. This is the basic principle of the MyBatis plug-in mechanism.

  2. Writing a Mybatis plug-in

Plug-in interface

Mybatis plug-in interface: Interceptor

• The intercept method, the core method of the plug-in

• The plugin method, which generates a proxy object for the target

• The setProperties method, which passes in the parameters required by the plug-in
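These three methods correspond to Mybatis's Interceptor interface (org.apache.ibatis.plugin.Interceptor), shown roughly below; in MyBatis 3.5+, plugin() and setProperties() have default implementations:

public interface Interceptor {
    Object intercept(Invocation invocation) throws Throwable; // the core enhancement logic
    Object plugin(Object target);               // wrap the target in a proxy, usually Plugin.wrap(target, this)
    void setProperties(Properties properties);  // receives the <property> values from the plug-in configuration
}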

Custom plugin

Design and implement a custom plug-in

@Intercepts({ // note the braces: several @Signature entries can be defined here so that one interceptor intercepts several places
    @Signature(type = StatementHandler.class,      // which interface to intercept
        method = "prepare",                        // which method name in that interface; do not misspell it
        args = {Connection.class, Integer.class})  // the parameter types of the intercepted method, in order, no more and no less;
                                                   // if the method is overloaded, the name plus the parameter types identify it uniquely
})

public class MyPlugin implements Interceptor {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    // this method of the interceptor is executed every time the intercepted operation runs
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        // enhancement logic
        System.out.println("The method has been enhanced...");
        return invocation.proceed(); // execute the original method
    }

    /**
     * Wraps the target object, i.e. creates a proxy for it and puts it into the interceptor chain.
     * @param target the object to be intercepted
     * @return the proxy object
     */
    @Override
    public Object plugin(Object target) {
        System.out.println("The target object to be wrapped: " + target);
        return Plugin.wrap(target, this);
    }

    // Reads the properties from the configuration file.
    // Called once when the plug-in is initialized; the properties configured for the plug-in are set from here.
    @Override
    public void setProperties(Properties properties) {
        System.out.println("Initialization parameters of the plug-in configuration: " + properties);
    }
}

sqlMapConfig.xml

<plugins>
    <plugin interceptor="com.lagou.plugin.MyPlugin">
        <!-- configuration parameters -->
        <property name="name" value="Bob"/>
    </plugin>
</plugins>

mapper interface

public interface UserMapper {
    List<User> selectUser();
}

Mapper.xml

<mapper namespace="com.lagou.mapper.UserMapper">
    <select id="selectUser" resultType="com.lagou.pojo.User">
        SELECT id, username FROM user
    </select>
</mapper>

Test class

public class PluginTest {

    @Test
    public void test() throws IOException {
        InputStream resourceAsStream = Resources.getResourceAsStream("sqlMapConfig.xml");
        SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
        SqlSession sqlSession = sqlSessionFactory.openSession();
        UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
        List<User> byPaging = userMapper.selectUser();
        for (User user : byPaging) {
            System.out.println(user);
        }
    }
}

  3. PageHelper paging plug-in

MyBatis can be extended with third-party plug-ins. The paging assistant PageHelper encapsulates the complex operations of paging, so that paging-related data can be obtained in a simple way.

Development steps:

① Import the coordinates of the general PageHelper

② Configure the PageHelper plug-in in the mybatis core configuration file

③ Test paging data acquisition

① Import the general PageHelper coordinates

<dependency>
    <groupId>com.github.pagehelper</groupId>
    <artifactId>pagehelper</artifactId>
    <version>3.7.5</version>
</dependency>
<dependency>
    <groupId>com.github.jsqlparser</groupId>
    <artifactId>jsqlparser</artifactId>
    <version>0.9.1</version>
</dependency>

② Configure the PageHelper plug-in in the mybatis core configuration file

<!-- Note: the paging assistant plug-in must be configured before the general mapper plug-in -->
<plugin interceptor="com.github.pagehelper.PageHelper">
    <!-- specify the dialect -->
    <property name="dialect" value="mysql"/>
</plugin>

③ Test page code implementation

@Test
public void testPageHelper() {
    // set the paging parameters
    PageHelper.startPage(1, 2);
    List<User> select = userMapper2.select(null);
    for (User user : select) {
        System.out.println(user);
    }
}

Get other parameters related to paging

// other paging data
PageInfo<User> pageInfo = new PageInfo<User>(select);
System.out.println("Total count: " + pageInfo.getTotal());
System.out.println("Total pages: " + pageInfo.getPages());
System.out.println("Current page: " + pageInfo.getPageNum());
System.out.println("Items per page: " + pageInfo.getPageSize());
System.out.println("Is first page: " + pageInfo.isIsFirstPage());
System.out.println("Is last page: " + pageInfo.isIsLastPage());

Five. General mapper

What is a general mapper

The general Mapper solves single-table insert, delete, update and query, based on Mybatis's plug-in mechanism. Developers do not need to write SQL or add methods to the DAO; as long as the entity class is written, the corresponding insert, delete, update and query methods are available.

How to use it

  1. First, in the maven project, introduce mapper dependencies in pom.xml

<dependency>
    <groupId>tk.mybatis</groupId>
    <artifactId>mapper</artifactId>
    <version>3.1.2</version>
</dependency>

  2. Complete the configuration in the Mybatis configuration file

<plugins>
    <!-- paging plug-in: if there is a paging plug-in, it must come before the general mapper -->
    <plugin interceptor="com.github.pagehelper.PageHelper">
        <property name="dialect" value="mysql"/>
    </plugin>
    <plugin interceptor="tk.mybatis.mapper.mapperhelper.MapperInterceptor">
        <!-- the general Mapper interface; multiple general interfaces are separated by commas -->
        <property name="mappers" value="tk.mybatis.mapper.common.Mapper"/>
    </plugin>
</plugins>

  3. Set the primary key of the entity class

@Table(name = "t_user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private String username;
}

  4. Define a general mapper

import com.lagou.domain.User;
import tk.mybatis.mapper.common.Mapper;

public interface UserMapper extends Mapper<User> {
}

  5. Test

public class UserTest {

    @Test
    public void test1() throws IOException {
        InputStream resourceAsStream = Resources.getResourceAsStream("sqlMapConfig.xml");
        SqlSessionFactory build = new SqlSessionFactoryBuilder().build(resourceAsStream);
        SqlSession sqlSession = build.openSession();
        UserMapper userMapper = sqlSession.getMapper(UserMapper.class);

        User user = new User();
        user.setId(4);

        // (1) basic mapper interface
        // select methods
        User user1 = userMapper.selectOne(user);    // query by the properties in the entity; at most one row may be returned
        List<User> users = userMapper.select(null); // query all results
        userMapper.selectByPrimaryKey(1);           // query by the primary key field; the parameter must contain the complete primary key, the condition uses equals
        userMapper.selectCount(user);               // count by the properties in the entity; the condition uses equals

        // insert methods
        int insert = userMapper.insert(user);       // save an entity; null values are also saved, database defaults are not used
        int i = userMapper.insertSelective(user);   // save an entity; null properties are not saved, database defaults are used

        // update methods
        int i1 = userMapper.updateByPrimaryKey(user); // update all fields of the entity by primary key; null values are also updated

        // delete methods
        int delete = userMapper.delete(user);       // delete by the entity properties as condition; the condition uses equals
        userMapper.deleteByPrimaryKey(1);           // delete by the primary key field; the parameter must contain the complete primary key

        // (2) Example methods
        Example example = new Example(User.class);
        example.createCriteria().andEqualTo("id", 1);
        example.createCriteria().andLike("val", "1");
        // custom query
        List<User> users1 = userMapper.selectByExample(example);
    }
}