Thinking about general import and export functions

Use JSON configuration to achieve generality and dynamic adjustment. Of course, this generalization still has limits: every project has its own code style, and it is not easy to write one generic module that suits all projects. The generality here is limited to the project it lives in, so if the code does not fit your own project, I hope you can use it as a reference and adapt it.

So now let's analyze what JSON configuration items we will need.

Export

Basic configuration items

Start with the simplest export. The exported data should be retrievable through the business layer, for example: Service.search(param) . This is the basic premise.

Then, to support displaying export progress, the business layer also needs to provide a count query method, such as: Service.count(param) ; otherwise the export progress cannot be shown.

Finally, the export file name should also be customizable, via a filename field.

From the above, we can derive these configuration items:

  • serviceClazz : business class path, e.g. com.cc.service.UserService; required
  • methodName : query method name, e.g. listByCondition; required
  • countMethodName : count query method name; optional, used to support export progress
  • filename : export file name
  • searchParams : query parameters, an array of dictionary elements; an array is used so that query methods taking multiple parameters are supported

As for the parameter class of the query method, there is no need to configure it, because we can obtain the parameter types the method expects through reflection (note that the key code posted below is for reference only):

```java
Class<?> serviceClass = Class.forName(param.getServiceClazz()); // param is the request parameter object
Method searchMethod = ReflectUtil.findMethodByName(serviceClass, param.getMethodName());
// The list of parameter types the method requires
Class<?>[] parameterTypes = searchMethod.getParameterTypes();
```

```java
/**
 * Get a Method object from the specified class by name, via reflection.
 */
public static Method findMethodByName(Class<?> clazz, String name) {
    if (StringUtils.isEmpty(name)) {
        return null;
    }
    for (Method method : clazz.getMethods()) {
        if (method.getName().equals(name)) {
            return method;
        }
    }
    return null;
}
```

Now let's think about the scenarios in which data is exported:

  1. The paginated query of a list page, which may export the current page's data or all data; this involves paged querying
  2. The query of a data overview page, usually a developer-customized complex join query that does not require paging

This article implements the first version of the general export function for these two situations.

Pagination query for list pages

Data export on a list page is divided into current page export and all data export.

Suppose the query process is like this:

  1. The interface layer receives parameters: Controller.search(Param param)
  2. The business layer calls the query method: Service.search(param)
  3. The persistence layer accesses the database: Mapper.search(param)

This case is simple, but what if the flow looks like this:

  1. The interface layer receives parameters: Controller.search(Param param)
  2. The business layer calls the query method: Service.search(new Condition(param))
  3. The persistence layer accesses the database: Mapper.search(condition)

In the flow above, the interface request parameters are inconsistent with the persistence layer parameters: they are wrapped in the business layer, so this situation should also be handled compatibly.

But if the request parameters are wrapped more than once in the business layer (a wrapper inside a wrapper), this scheme does not attempt to handle it.

Next come the paging parameters. We use pageNum and pageSize to represent the page number and page size fields, similar to:

```json
{
  "pageNum": 1,
  "pageSize": 10,
  "name": "老刘" // query field, e.g. search for records whose name is 老刘
}
```

As for current page export versus all data export, a boolean onlyCurrentPage can represent the choice. It defaults to false, meaning the export automatically pages through the query until all data has been fetched; when exporting all data, paged querying is necessary both to improve performance and to avoid memory overflow. When onlyCurrentPage is true, only the current page of data is exported.
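As a minimal sketch of that loop, assuming the reflective query call and the EasyExcel write have been wrapped into the functional parameters search and writeRows (both placeholder names, not from the original code):

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Consumer;

public class PagedExporter {
    // Page through the query until the result set is exhausted,
    // writing each page to the workbook as it arrives.
    public static <T> void exportAll(BiFunction<Integer, Integer, List<T>> search,
                                     Consumer<List<T>> writeRows,
                                     int pageSize) {
        int pageNum = 1;
        while (true) {
            List<T> rows = search.apply(pageNum, pageSize); // search(pageNum, pageSize)
            if (rows.isEmpty()) {
                break;
            }
            writeRows.accept(rows); // append this page to the Excel file
            if (rows.size() < pageSize) {
                break; // last page reached
            }
            pageNum++;
        }
    }
}
```

Keeping only one page of data in memory at a time is what avoids the memory overflow mentioned above.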

The required configuration items are obtained as follows:

  • searchParam : the interface's pagination request parameters, JSON type; required
  • conditionClazz : the condition query class, i.e. the wrapper class, e.g. com.cc.codition.UserCondition; optional (see the sketch below)
  • onlyCurrentPage : export only the current page; defaults to false; optional
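Handling the conditionClazz wrapper might look like the following sketch; buildArgument is a hypothetical helper, and it assumes the wrapper class exposes a single-argument constructor such as new Condition(param):

```java
// Build the argument for Service.search(...): wrap the parsed request
// parameters when conditionClazz is configured, otherwise pass them through.
static Object buildArgument(Object searchParam, String conditionClazz) throws Exception {
    if (conditionClazz == null || conditionClazz.isEmpty()) {
        return searchParam; // flow 1: Service.search(param)
    }
    Class<?> wrapper = Class.forName(conditionClazz);
    // flow 2: Service.search(new Condition(param)), via the assumed constructor
    return wrapper.getConstructor(searchParam.getClass()).newInstance(searchParam);
}
```

The reflective call then becomes searchMethod.invoke(service, buildArgument(searchParam, conditionClazz)), which covers both flows.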

Query on the data overview page

The data overview query has no count method (i.e. no Service.count(xxx)) and no pagination parameters, so it behaves like a current page export. As long as there is at most one layer of wrapper class, no additional configuration items are needed; the ones above are enough. Just note that in this case the pagination parameters have to be stripped out in the code.

Header configuration

Single-level headers

Let's simulate some data to make this concrete. Suppose there is an interface that queries the system user list, such as /user/search, and the response looks like this:

```json
{
  "code": 0,
  "msg": "请求成功",
  "data": [
    {
      "id": 1,
      "username": "admin",
      "nickname": "超管",
      "phone": "18818881888",
      "createTime": "2023-06-23 17:16:00"
    },
    {
      "id": 2,
      "username": "cc",
      "nickname": "管理员",
      "phone": "18818881888",
      "createTime": "2023-06-23 17:16:00"
    },
    ...
  ]
}
```

Now for the EasyExcel code:

```java
// Create the Excel file
try (ExcelWriter excelWriter = EasyExcel.write(path).build()) {
    // writerSheet(sheet index, sheet name)
    WriteSheet writeSheet = EasyExcel.writerSheet(0, "sheetName").head(getHeader()).build();
    excelWriter.write(getDataList(), writeSheet);
}
```

```java
// Simulated headers
private static List<List<String>> getHeader() {
    List<List<String>> list = new ArrayList<>();
    list.add(createHead("账号"));
    list.add(createHead("昵称"));
    list.add(createHead("联系方式"));
    list.add(createHead("注册时间"));
    return list;
}

public static List<String> createHead(String... head) {
    return new ArrayList<>(Arrays.asList(head));
}

// Simulated data
public static List<List<Object>> getDataList() {
    List<List<Object>> list = new ArrayList<>();
    list.add(createData("admin", "超管", "18818881888", "2023-06-23 17:16:00"));
    list.add(createData("cc", "管理员", "18818881888", "2023-06-23 17:16:00"));
    return list;
}

public static List<Object> createData(String... data) {
    return new ArrayList<>(Arrays.asList(data));
}
```

Then the export effect is like this:

Don't worry about the Excel styling in the screenshot for now; we will make it dynamically configurable later, including column width, header background color, font centering, and so on.

Although the code above is hard-coded, smart developers will know how to convert data from a database query into the corresponding format, so that part is skipped.

Now we can get the basic header configuration:

```json
"customHeads": [
  { "fieldName": "username", "fieldNameZh": "账号" },
  { "fieldName": "nickname", "fieldNameZh": "昵称" },
  { "fieldName": "phone", "fieldNameZh": "联系方式" },
  { "fieldName": "createTime", "fieldNameZh": "注册时间" }
]
```

That is:

  • fieldName : the attribute name, so that the attribute and its value can be found via reflection on the data objects of the returned result (see the sketch after this list)
  • fieldNameZh : the raw attribute name is hardly suitable as a header, so a Chinese description is added to replace it as the header text
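Here is that sketch: a hedged example of converting the query result objects into EasyExcel's List<List<Object>> row format in customHeads order (RowConverter and toRows are hypothetical names, and field lookup is simplified to getDeclaredField):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class RowConverter {
    // Read each configured fieldName from the result objects by reflection,
    // producing one row per record in header order.
    public static List<List<Object>> toRows(List<?> records, List<String> fieldNames)
            throws ReflectiveOperationException {
        List<List<Object>> rows = new ArrayList<>();
        for (Object record : records) {
            List<Object> row = new ArrayList<>();
            for (String fieldName : fieldNames) {
                Field field = record.getClass().getDeclaredField(fieldName);
                field.setAccessible(true); // allow access to private fields
                row.add(field.get(record));
            }
            rows.add(row);
        }
        return rows;
    }
}
```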

With this foundation, we can add more items to enrich the function, for example:

```json
{
  "fieldName": "username",
  "fieldNameZh": "账号",
  "width": 20,               // column width
  "backgroundColor": 1,      // header background color
  "fontSize": 20,            // font size
  "type": "date(yyyy-MM-dd)" // field type
  ...
}
```

Note: the field type can be used for data formatting. For example, suppose an attribute is a status field where 1 means normal and 2 means abnormal; exporting a raw 1 or 2 is meaningless, so the field type can be used to map the status value to its Chinese description, which makes the export readable.
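A minimal sketch of such type-driven formatting, assuming a hypothetical enum(1=正常,2=异常) syntax alongside the date(yyyy-MM-dd) form shown above:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ValueFormatter {
    // Format a raw cell value according to the "type" config item.
    public static Object format(Object raw, String type) {
        if (type == null || raw == null) {
            return raw;
        }
        if (type.startsWith("date(") && raw instanceof Date) {
            String pattern = type.substring(5, type.length() - 1); // e.g. yyyy-MM-dd
            return new SimpleDateFormat(pattern).format((Date) raw);
        }
        if (type.startsWith("enum(")) { // hypothetical status-mapping syntax
            for (String pair : type.substring(5, type.length() - 1).split(",")) {
                String[] kv = pair.split("=");
                if (kv[0].equals(String.valueOf(raw))) {
                    return kv[1]; // replace the status code with its description
                }
            }
        }
        return raw;
    }
}
```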

Single-level headers already cover many of our scenarios, but they are not enough. In my experience, two-level or even more complex headers are often needed. Fortunately, EasyExcel supports multi-level headers.

Multi-level headers

First, the EasyExcel sample code for generating a two-level header:

```java
// Simulated headers
private static List<List<String>> getHeader() {
    List<List<String>> list = new ArrayList<>();
    list.add(createHead("用户信息", "账号"));
    list.add(createHead("用户信息", "昵称"));
    list.add(createHead("用户信息", "联系方式"));
    list.add(createHead("用户信息", "注册时间"));
    list.add(createHead("角色信息", "超管"));
    list.add(createHead("角色信息", "管理员"));
    return list;
}

public static List<String> createHead(String... head) {
    return new ArrayList<>(Arrays.asList(head));
}

// Simulated data
public static List<List<Object>> getDataList() {
    List<List<Object>> list = new ArrayList<>();
    list.add(createData("admin", "超管", "18818881888", "2023-06-23 17:16:00", "是", "是"));
    list.add(createData("cc", "管理员", "18818881888", "2023-06-23 17:16:00", "否", "是"));
    return list;
}

public static List<Object> createData(String... data) {
    return new ArrayList<>(Arrays.asList(data));
}
```

The effect is like this:

As you can see, the first four columns share the common header [用户信息] (user information), and the last two columns share the common header [角色信息] (role information).

From the sample code above we can see that, for headers to merge, the entries must be in order and share the same header name; only then does EasyExcel recognize them and produce the merged effect. This needs attention.

Similarly, when we need to generate complex headers, we can do this:

```java
// Simulated headers
private static List<List<String>> getHeader() {
    List<List<String>> list = new ArrayList<>();
    list.add(createHead("导出用户数据", "用户信息", "账号"));
    list.add(createHead("导出用户数据", "用户信息", "昵称"));
    list.add(createHead("导出用户数据", "用户信息", "联系方式"));
    list.add(createHead("导出用户数据", "用户信息", "注册时间"));
    list.add(createHead("导出用户数据", "角色信息", "超管"));
    list.add(createHead("导出用户数据", "角色信息", "管理员"));
    return list;
}
```

The rendered result:

Conclusion

The above is my thinking on, and implementation of, the export function. Due to space constraints I have not posted the complete code, but I believe the content above is enough to serve as a reference. For the missing pieces, such as column width, color, and font settings, refer to the official EasyExcel documentation; the main idea is to configure the EasyExcel export dynamically according to the JSON configuration passed in from the front end.

Import

Import is a two-step process:

  1. The user downloads the import template
  2. The user fills the template with content, then uploads the template file to the system, which performs the data import

Download import template

Generating the import template only requires the customHeads parameter described above:

```json
"customHeads": [
  { "fieldName": "username", "fieldNameZh": "账号" },
  { "fieldName": "nickname", "fieldNameZh": "昵称" },
  { "fieldName": "phone", "fieldNameZh": "联系方式" },
  { "fieldName": "createTime", "fieldNameZh": "注册时间" }
]
```

The fieldName can even be omitted; the result is simply an Excel file containing only the header row.
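A minimal sketch of generating such a template, assuming customHeads has already been parsed into a list of fieldNameZh strings (the path and sheet name are placeholders):

```java
import com.alibaba.excel.EasyExcel;
import com.alibaba.excel.ExcelWriter;
import com.alibaba.excel.write.metadata.WriteSheet;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TemplateGenerator {
    // Build a header-only workbook from the fieldNameZh values in customHeads.
    public static void writeTemplate(String path, List<String> headNames) {
        List<List<String>> head = new ArrayList<>();
        for (String zh : headNames) {
            head.add(Collections.singletonList(zh)); // one single-level header per column
        }
        try (ExcelWriter writer = EasyExcel.write(path).build()) {
            WriteSheet sheet = EasyExcel.writerSheet(0, "sheet1").head(head).build();
            writer.write(Collections.emptyList(), sheet); // header row only, no data
        }
    }
}
```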

Import Data

There are two scenarios for importing data:

  1. Single table data import , a very simple scenario
  2. Complex data import involving multiple tables, which is a little more complicated

Single table data import

A single table only needs to consider the attributes of its corresponding entity class, which we can obtain through reflection, so the only required configuration item is:

  • modelClazz : entity class path, e.g. com.cc.entity.User

Configuration example:

```json
{
  "modelClazz": "com.cc.entity.User",
  "customHeads": [
    { "fieldName": "username", "fieldNameZh": "账号" },
    { "fieldName": "nickname", "fieldNameZh": "昵称" },
    { "fieldName": "phone", "fieldNameZh": "联系方式" },
    { "fieldName": "createTime", "fieldNameZh": "注册时间" }
  ]
}
```

In this way, as EasyExcel reads each row during import, it can recognize that the 账号 column corresponds to the username attribute of the com.cc.entity.User class, and then do something like this:

```java
User user = new User();
user.setUsername(cellValue); // cellValue: the value read from the fieldName column
```

From this you get a List<User> userList, which can then be saved to the database through the system's UserService or UserMapper, completing the data import.
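The per-row assignment can be generalized with reflection. Below is a hedged sketch that assumes each row arrives as a map from fieldName to the cell's string value and that the target fields are String-typed; real code would convert values according to the type config (RowBinder and populate are hypothetical names):

```java
import java.lang.reflect.Field;
import java.util.Map;

public class RowBinder {
    // Instantiate the configured entity class and copy each cell value
    // into the field named by its fieldName.
    public static <T> T populate(Class<T> modelClazz, Map<String, String> row)
            throws ReflectiveOperationException {
        T instance = modelClazz.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, String> cell : row.entrySet()) {
            Field field = modelClazz.getDeclaredField(cell.getKey());
            field.setAccessible(true);
            field.set(instance, cell.getValue());
        }
        return instance;
    }
}
```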

Complex data import

Complex data means a scenario like this, where each row in the Excel file looks like:

| Account | Nickname | Contact information | Registration time | Role name |
| ------- | -------- | ------------------- | ----------------- | --------- |
| admin | 超管 | 18818881888 | 2023-06-23 17:16:00 | Super administrator |
| cc | 管理员 | 18818881888 | 2023-06-23 17:16:00 | Administrator |

Among them, the role name column involves associated tables:

  • User table: tb_user
  • Role table: tb_role
  • User role relation table: tb_user_role_relation

To support this kind of complex data import, the system needs to provide a corresponding save method:

  1. Create a new DTO class:

    The first form:

    ```java
    public class UserDto {
        private String username;
        private String nickname;
        private String phone;
        private Date createTime;
        private Boolean superAdminFlag;
        private Boolean adminFlag;
    }
    ```

    The second form:

    ```java
    public class UserDto {
        private User user;
        private Role role;
    }
    ```

    We should handle both of these DTO forms. The first can already be dealt with by the configuration above, so we mainly look at the second. The second form raises the issue of a "path", so the customHeads entries have to change to:

    ```json
    {
      "modelClazz": "com.cc.model.UserDto",
      "customHeads": [
        { "fieldName": "user.username", "fieldNameZh": "账号" },
        ...
      ]
    }
    ```

    In this way, the account path is configured as user.username, and the reflective attribute lookup becomes recursive: first find the user attribute of the UserDto class, obtain that attribute's class, then find the username attribute inside it. The assignment becomes:

    ```java
    UserDto dto = new UserDto();
    User user = new User();
    user.setUsername(cellValue); // cellValue: the value read from the fieldName column
    dto.setUser(user);
    ```

    This yields a List<UserDto> dtoList.

  2. Since there is a complex data import business, the Service layer should also provide a complex save method:

    ```java
    public interface UserService {
        // Single insert
        void saveUserDto(UserDto dto);
        // Batch insert
        void saveUserDtoBatch(List<UserDto> dtoList);
    }
    ```

    ```java
    @Service
    public class UserServiceImpl implements UserService {
        @Autowired
        private UserMapper userMapper;
        @Autowired
        private RoleService roleService;
        @Autowired
        private UserRoleRelationService relationService;

        // Transactional: the three saves succeed or fail together
        @Transactional(rollbackFor = Exception.class)
        @Override
        public void saveUserDto(UserDto dto) {
            // Save the user
            User user = userMapper.save(dto.getUser());
            // Save the role
            Role role = roleService.save(dto.getRole());
            // Save the relation
            UserRoleRelation relation = new UserRoleRelation();
            relation.setUserId(user.getId());
            relation.setRoleId(role.getId());
            relationService.save(relation);
        }

        // Batch insert code omitted; the principle is the same as above
        @Override
        public void saveUserDtoBatch(List<UserDto> dtoList) {
            // ...
        }
    }
    ```
  3. Each row read by EasyExcel can then be converted into a UserDto object and saved singly or in batches (see the listener sketch after this list). Along the way there are many points worth optimizing and considering, such as:

    • Batch saving is more efficient and performant than single saves, but it is harder to identify which rows failed within a batch
    • The batch size should not be too large; consider the capacity of the system and the database, e.g. perform a save every 500 rows read
    • To display save progress, first obtain the total row count of the Excel file, then compute the progress from the number of rows read so far and return it to the front end
    • If the import takes too long, it can be run as a background task; the front end can be notified via polling or WebSocket

So the method to call for saving also needs to be specified, using the serviceClazz and methodName configuration items introduced above.
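Here is the listener sketch referred to in point 3: a hedged example of batched saving with an EasyExcel ReadListener, assuming each row has already been converted into a UserDto (in the generic scheme the rows would arrive as maps and pass through the reflective binding first):

```java
import com.alibaba.excel.context.AnalysisContext;
import com.alibaba.excel.read.listener.ReadListener;
import java.util.ArrayList;
import java.util.List;

public class UserDtoImportListener implements ReadListener<UserDto> {
    private static final int BATCH_SIZE = 500; // save once every 500 rows read
    private final List<UserDto> buffer = new ArrayList<>();
    private final UserService userService;

    public UserDtoImportListener(UserService userService) {
        this.userService = userService;
    }

    @Override
    public void invoke(UserDto dto, AnalysisContext context) {
        buffer.add(dto);
        if (buffer.size() >= BATCH_SIZE) {
            flush();
        }
    }

    @Override
    public void doAfterAllAnalysed(AnalysisContext context) {
        flush(); // save whatever is left after the last full batch
    }

    private void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        userService.saveUserDtoBatch(new ArrayList<>(buffer));
        buffer.clear();
    }
}
```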

Summary of configuration items

Finally, the full set of configuration items is given for reference:

Export data configuration

```json
{
  "filename": "用户数据导出",
  "serviceClazz": "com.cc.service.UserService",
  "methodName": "listByCondition",
  "countMethodName": "countByCondition",
  "searchParams": [
    {
      "nickname": "cc" // search for users whose nickname is cc
    }
  ],
  "customHeads": [
    {
      "fieldName": "username",
      "fieldNameZh": "账号",
      "width": 20,   // column width
      "fontSize": 20 // font size
    },
    {
      "fieldName": "createTime",
      "fieldNameZh": "注册时间",
      "type": "date(yyyy-MM-dd)" // attribute declared as date, exported in the given format
    }
  ]
}
```

Import template configuration

```json
{
  "filename": "用户数据导入",
  "modelClazz": "com.cc.entity.User",
  "customHeads": [
    {
      "fieldName": "username",
      "fieldNameZh": "账号",
      "width": 20,   // column width
      "fontSize": 20 // font size
    },
    {
      "fieldName": "createTime",
      "fieldNameZh": "注册时间",
      "type": "date(yyyy-MM-dd)" // attribute declared as date, exported in the given format
    }
  ]
}
```

Import data configuration

```json
{
  "modelClazz": "com.cc.entity.User",
  "serviceClazz": "com.cc.service.UserService",
  "methodName": "save",
  "customHeads": [
    {
      "fieldName": "username",
      "fieldNameZh": "账号"
    },
    {
      "fieldName": "createTime",
      "fieldNameZh": "注册时间",
      "type": "date(yyyy-MM-dd)" // attribute declared as date, converted from the given format
    }
  ]
}
```
