Are you still using AOP for operation logs?!

Preface

While operating a system, when users add, delete, modify, or query important business data, we want to record the user's operation behavior so that we have evidence to consult when a problem occurs. These records make up the business system's operation log.

In this article we discuss common ways to implement operation logs and their feasibility.

Common operation log types

  • User login logs
  • Important data query logs (for some businesses even queries are worth recording; for example, when you search for a product on Taobao, the homepage recommends similar items for a while afterwards, even if you buy nothing)
  • Important data change logs (such as password changes, permission changes, and data modifications)
  • Data deletion logs
  • ......

In short, which create, read, update, and delete operations deserve a log entry depends on the needs of the business.

Implementation plan comparison

Traditional scheme based on AOP (aspects)

  • Advantages: the implementation is straightforward;
  • Disadvantages: adds load to the database, relies heavily on the front end to pass parameters, is inconvenient to extend, and supports neither batch operations nor multi-table associations;

Based on the database Binlog

  • Advantages: decouples the capture of old and new values from the business code, supports batch operations, extends conveniently to multi-table associations, and does not depend on the development language;
  • Disadvantages: the design of database tables must follow a unified convention;

Scheme implementation details

1. Traditional solution based on AOP aspect + annotation

The traditional approach is the aspect + annotation method, which is only mildly intrusive to the code. It usually records the IP, business module, operator account, operation scene, operation source, etc., and these values are generally obtained through the annotation plus an interceptor.
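As a rough illustration of the mechanism (outside Spring, using a plain JDK dynamic proxy in place of a real AOP aspect; the annotation name, its fields, and the service interface are all made up for this sketch):

```java
import java.lang.annotation.*;
import java.lang.reflect.Proxy;
import java.util.Arrays;

// Hypothetical annotation marking methods whose calls should be logged.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface OperationLog {
    String module();
    String scene();
}

interface UserService {
    @OperationLog(module = "auth", scene = "update-user")
    String updateUser(String userId);
}

public class OperationLogDemo {
    // Stand-in for the log store; a real aspect would write to a DB or MQ.
    static StringBuilder logSink = new StringBuilder();

    // A JDK dynamic proxy playing the role of the AOP interceptor:
    // it reads the annotation and records module/scene/args before delegating.
    @SuppressWarnings("unchecked")
    static <T> T withLogging(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface},
            (proxy, method, args) -> {
                OperationLog ann = method.getAnnotation(OperationLog.class);
                if (ann != null) {
                    logSink.append(ann.module()).append('/').append(ann.scene())
                           .append(" args=").append(Arrays.toString(args)).append('\n');
                }
                return method.invoke(target, args);
            });
    }

    public static void main(String[] args) {
        UserService svc = withLogging(UserService.class, id -> "updated:" + id);
        System.out.println(svc.updateUser("42"));
        System.out.print(logSink);
    }
}
```

The common fields (module, scene, arguments) fall out of the annotation automatically; this is exactly why the approach works well for them and poorly for before/after values, which the interceptor cannot see.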

Those common fields can be handled generically, but for data changes there has never been a good generic implementation, for example recording what the data was before the change and what it became afterwards.

In the solution we implemented before, recording data changes required not only agreeing on a display template with the requirements side (it is impossible to record and display hundreds of fields), but also making conventions with the front end, such as which object carries the value before modification and which carries the value after it. See the following code:

    @Valid
    @NotNull(message = "New value must not be null")
    @UpdateNewDataOperationLog
    private T newData;

    @Valid
    @NotNull(message = "Old value must not be null")
    @UpdateOldDataOperationLog
    private T oldData;

Existing problems:

  • 1. Unless you query the database one more time for the old value, you have to rely on the front end to put it into the oldData object, and that value may well not be the real value before modification;
  • 2. Batch List data cannot be handled;
  • 3. Multi-table operations are not supported;

Take another scene as an example: before a delete, the value being deleted has to be recorded, so you are forced to query it one more time:

    @PostMapping("/delete")
    @ApiOperation(value = "Delete user info", notes = "Delete user info")
    @DeleteOperationLog(system = SystemNameNewEnum.SYS_JMS_LMDM, module = ModuleNameNewEnum.LMDM_AUTH, table = LogBaseTableNameEnum.TABLE_USER, methodName = "detail")
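What that delete aspect has to do can be sketched with an in-memory "table" standing in for the database (the table, method, and log structures here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the delete-logging problem: the old value must be
// fetched with an extra query before the row disappears.
public class DeleteLogSketch {
    static Map<Long, Map<String, Object>> userTable = new HashMap<>();
    static List<String> operationLog = new ArrayList<>();

    static void deleteWithLog(long id) {
        // The extra read exists only so the log can record the old value --
        // this is the database burden the AOP scheme is criticized for.
        Map<String, Object> old = userTable.get(id);
        userTable.remove(id);
        operationLog.add("deleted user " + id + ", old value = " + old);
    }

    public static void main(String[] args) {
        userTable.put(1L, Map.of("name", "alice", "role", "admin"));
        deleteWithLog(1L);
        System.out.println(operationLog.get(0));
    }
}
```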

2. Scheme based on the database binlog

The system architecture diagram is as follows:

It is mainly divided into 3 parts:

  • 1: The business application generates a traceid for each operation, writes it to the business table being operated on, and sends a business message carrying information about the operator of the current operation;
  • 2: The log collection application merges the business messages with the converted binlog records, and provides external APIs for log query and search;
  • 3: The log processing application uses canal to collect and parse the binlog of the business database and delivers it to Kafka. The parsed record carries the operation type (delete, update, insert) together with the old and new values, in the following format:
    {
      "data": [{
        "id": "122158992930664499",
        "bill_type": "1",
        "create_time": "2020-04-26 09:15:13",
        "update_time": "2020-04-26 13:45:46",
        "version": "2",
        "trace_id": "exclude-f04ff706673d4e98a757396efb711173"
      }],
      "database": "yl_spmibill_8",
      "es": 1587879945200,
      "id": 17161259,
      "isDdl": false,
      "mysqlType": {
        "id": "bigint(20)",
        "bill_type": "tinyint(2)",
        "create_time": "timestamp",
        "update_time": "timestamp",
        "version": "int(11)",
        "trace_id": "varchar(50)"
      },
      "old": [{
        "update_time": "2020-04-26 13:45:45",
        "version": "1",
        "trace_id": "exclude-36aef98585db4e7a98f9694c8ef28b8c"
      }],
      "pkNames": ["id"],
      "sql": "",
      "sqlType": {"id": -5, "bill_type": -6, "create_time": 93, "update_time": 93, "version": 4, "trace_id": 12},
      "table": "xxx_transfer_bill_117",
      "ts": 1587879945698,
      "type": "UPDATE"
    }

The operation log obtained after converting the binlog record looks like this:

  {
  "id":"120716921250250776",
  "relevanceInfo":"XX0000097413282,",
  "remark":"receipt finance branch code changed from [] to [380000], receipt branch name changed from [] to [Quanzhou Nan'an branch], receipt branch code changed from [] to [2534104], waybill status code changed from [204] to [205], receipt finance branch name changed from [] to [Fujian agency region], receipt branch id changed from [0] to [461], receipt flag (1: yes, 0: no) changed from [0] to [1], receipt time changed from [null] to [2020-04-24 21:09:47], receipt finance branch id changed from [0] to [400],",
  "traceId":"120716921250250775"
  }

Database table design

  • 1: Every business table adds a trace_id column; each operation generates a random string and saves it in the business table;
  • 2: Table design for the log collection application:
CREATE TABLE `table_config` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `database_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'database name',
  `table_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'table name',
  PRIMARY KEY (`id`),
  UNIQUE KEY `unq_data_name_table_name` (`database_name`,`table_name`) USING BTREE COMMENT 'composite index on database name and table name'
) ENGINE=InnoDB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci COMMENT='database/table config';
CREATE TABLE `table_field_config` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `table_config_id` bigint(20) DEFAULT NULL,
  `field` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'column name in the database',
  `field_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'column display name',
  `enum_flag` tinyint(2) DEFAULT NULL COMMENT 'whether the column is an enum (1: yes, 0: no)',
  `relevance_flag` tinyint(2) DEFAULT NULL COMMENT 'whether the column is a relevance column (1: yes, 0: no)',
  `sort` int(11) DEFAULT NULL COMMENT 'sort order',
  PRIMARY KEY (`id`),
  KEY `idx_table_config_id` (`table_config_id`) USING BTREE COMMENT 'index on table_config_id'
) ENGINE=InnoDB AUTO_INCREMENT=2431 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci COMMENT='column config';
CREATE TABLE `table_field_value` (
  `id` bigint(20) NOT NULL,
  `field_config_id` bigint(20) DEFAULT NULL,
  `field_key` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'enum key',
  `field_value` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'enum display name',
  PRIMARY KEY (`id`),
  KEY `ids_field_config_id` (`field_config_id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci COMMENT='data dictionary config';
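The trace_id convention from point 1 can be sketched as follows. The "exclude-" prefix simply mirrors the sample binlog above and is an assumption, as are the SQL and message-producer calls shown only as comments:

```java
import java.util.UUID;

// Sketch of the trace_id convention: each business operation generates a
// random id, stamps it on the updated row, and sends the same id in the
// business message, so the binlog record and operator info can be joined.
public class TraceIdDemo {
    static String newTraceId() {
        // 8-char prefix + 32 hex chars, matching the style of the sample binlog.
        return "exclude-" + UUID.randomUUID().toString().replace("-", "");
    }

    public static void main(String[] args) {
        String traceId = newTraceId();
        // UPDATE xxx_transfer_bill_117 SET ..., trace_id = ? WHERE id = ?  (traceId bound here)
        // businessMessageProducer.send(traceId, operatorInfo)              (and sent here)
        System.out.println(traceId);
    }
}
```

Because the trace_id travels through the binlog itself, the log collection application can match each parsed canal record to exactly one business message without any extra query against the business database.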

Result

Future plans for the binlog-based scheme

  1. Optimize how business messages are sent, using aspect interception to reduce intrusion into the business code;
  2. Operation logs for multi-table association operations are not yet supported and need to be added;

Origin blog.csdn.net/baidu_39322753/article/details/106079759