Historical data table migration scheme [based on Oracle stored procedures]

Update History

  • [Update 2020-05-30]: The stored procedure is based on material found online (original link). From a code point of view the approach seems sound in principle, but there may be a few small local anomalies. I have revised it and added comments.

Problem Description

There is an Oracle-based business system on the customer site. Their database version is Oracle 10g R2 (10.2.0.5) Standard Edition, and the data volume exceeds 1 TB.

There are currently three problems:

  1. The data volume keeps growing, by 3 GB or more per day;
  2. Report and query functions of the business system are extremely slow, and some queries cannot return data at all;
  3. RMAN backup pressure is heavy, and recovery after a failure cannot meet the RTO requirement.

Problem Analysis

The database holds a huge amount of data, and whenever the site reported slow performance, the developers responded by creating more indexes. At this data scale, however, the indexes provide little benefit while hurting the efficiency of inserts, updates, and deletes. Since upgrading the existing database to Enterprise Edition for the partitioned-table feature is not an option, we must set up a history database and regularly offload data from the production database into it. In the history database, tables with large data volumes can be split by year or month to keep the row count of any single table down.

In this way, the production database stays within a controllable size, a failure of the history database does not affect the production line, and recovery time and backup pressure are also reduced.

Problem Handling

Use a PL/SQL stored procedure, run as a periodic job, to extract data from the production database into the history database and then delete the offloaded data from the production database. The general process is shown in the steps below:

  • Build test data
-- Create the test table
create table operate_log(str01 varchar2(50),cdate varchar2(20));

-- Insert test data (put this block in a script and run it on a schedule, e.g. every 30 seconds; see the sketch after this block)
begin
for i in 1..100 loop
INSERT INTO operate_log VALUES(dbms_random.string('x', 20),to_char(SYSDATE,'yyyy-mm-dd hh24:mi:ss'));
end loop;
commit;
end;
/

-- Create the test index
create index idx_operate_log on operate_log(cdate);
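
The comment on the insert block above suggests running it on a schedule to keep generating data. A minimal sketch of such a job using DBMS_SCHEDULER (the job name is made up for this demo) could look like this:

-- Hypothetical scheduler job that runs the test-data insert every 30 seconds.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OPERATE_LOG_TESTDATA_JOB',  -- made-up name for this demo
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          FOR i IN 1..100 LOOP
                            INSERT INTO operate_log
                            VALUES (dbms_random.string(''x'', 20),
                                    to_char(SYSDATE, ''yyyy-mm-dd hh24:mi:ss''));
                          END LOOP;
                          COMMIT;
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=30',  -- every 30 seconds
    enabled         => TRUE);
END;
/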
  • Stored procedure
    Due to the customer's confidentiality agreement, the script used in the actual environment cannot be provided; the demonstration below is a modified version of a script found online.
    Script description: the script splits the data by month, so it differs somewhat from the process described above.
CREATE OR REPLACE
procedure operate_log_proc(return_code OUT VARCHAR2, return_msg OUT VARCHAR2)
authid current_user
is
  err_index           NUMBER;        -- index used to report which block raised an error
  table_name          VARCHAR2(20);  -- source table name
  log_table_name      VARCHAR2(20);  -- target monthly table name
  current_month_start DATE;
  create_table_cursor NUMBER(10);
  create_table_sql    VARCHAR2(1000);
  insert_data_sql     VARCHAR2(1000);
  delete_data_sql     VARCHAR2(1000);
  v_count             NUMBER(10);
begin
  table_name  := 'OPERATE_LOG';
  return_msg  := 'Procedure [OPERATE_LOG_PROC] executed successfully';
  return_code := '1';

  err_index := 1;

  -- Generate the name of the monthly table for the previous month, e.g. OPERATE_LOG_202004
  SELECT 'OPERATE_LOG_' ||
         TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -1), 'yyyymm')
    INTO log_table_name
    FROM DUAL;

  -- Skip creation if the monthly table already exists
  SELECT COUNT(1) INTO v_count FROM user_tables WHERE table_name = log_table_name;

  IF v_count <= 0 THEN
    -- Open a DBMS_SQL cursor
    create_table_cursor := DBMS_SQL.OPEN_CURSOR;
    -- Build the CREATE TABLE statement; note that DDL is executed as soon as DBMS_SQL.PARSE runs
    create_table_sql := 'CREATE TABLE "' || log_table_name || '" (
    "STR01" VARCHAR2(50 BYTE) NOT NULL ,
    "CDATE" VARCHAR2(50 BYTE) NOT NULL
    )';
    DBMS_SQL.PARSE(create_table_cursor, create_table_sql, DBMS_SQL.V7);
    DBMS_SQL.CLOSE_CURSOR(create_table_cursor);
  END IF;
  err_index := 2;
  -- Start of the current month, e.g. 2020-05-01 00:00:00
  SELECT TRUNC(SYSDATE, 'MM') INTO current_month_start FROM DUAL;
  -- Copy last month's records from OPERATE_LOG into the monthly table.
  -- CDATE is stored as 'yyyy-mm-dd hh24:mi:ss' text, so format the cut-off date explicitly
  -- instead of relying on the implicit conversion controlled by NLS_DATE_FORMAT.
  insert_data_sql := 'INSERT INTO ' || log_table_name || ' (SELECT * FROM ' || table_name ||
                     ' WHERE CDATE < ''' ||
                     TO_CHAR(current_month_start, 'yyyy-mm-dd hh24:mi:ss') || ''')';
  dbms_output.put_line('Insert SQL: ' || insert_data_sql);
  EXECUTE IMMEDIATE insert_data_sql;
  err_index := 3;
  -- Delete last month's records from OPERATE_LOG
  delete_data_sql := 'DELETE FROM ' || table_name || ' WHERE CDATE < ''' ||
                     TO_CHAR(current_month_start, 'yyyy-mm-dd hh24:mi:ss') || '''';
  dbms_output.put_line('Delete SQL: ' || delete_data_sql);
  EXECUTE IMMEDIATE delete_data_sql;
  err_index := 4;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    -- Report which block failed, plus the Oracle error text
    return_msg  := 'Procedure [OPERATE_LOG_PROC] failed in block [' || err_index || ']: ' || SQLERRM;
    return_code := '0';
    ROLLBACK;
end operate_log_proc;
/
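
Before wiring the procedure into a job, it can be tested by hand with an anonymous block along these lines (the variable names are arbitrary):

-- Manual test of the procedure; prints the return code and message.
SET SERVEROUTPUT ON
DECLARE
  v_code VARCHAR2(10);
  v_msg  VARCHAR2(4000);
BEGIN
  operate_log_proc(return_code => v_code, return_msg => v_msg);
  dbms_output.put_line('code=' || v_code || ', msg=' || v_msg);
END;
/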
  • Scheduled job to run the procedure periodically

Execution time: 0:30 on the 1st of each month.
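The job-definition screenshot is not reproduced here. Assuming the procedure is compiled in the same schema, a DBMS_SCHEDULER job matching the stated schedule (00:30 on the 1st of each month; the job name is made up) might be created like this:

-- Hypothetical monthly job: runs OPERATE_LOG_PROC at 00:30 on the 1st of each month.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OPERATE_LOG_ARCHIVE_JOB',   -- made-up name for this demo
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE
                          v_code VARCHAR2(10);
                          v_msg  VARCHAR2(4000);
                        BEGIN
                          operate_log_proc(v_code, v_msg);
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1;BYHOUR=0;BYMINUTE=30;BYSECOND=0',
    enabled         => TRUE);
END;
/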
Shortcomings of the online script:

  1. It does not record an operation log, so the results cannot be correlated with or reported to other systems.
  2. It does not process data in batches. The monthly history volume varies between quiet and busy periods, and deleting 100,000 rows in one statement behaves very differently from deleting 10,000,000 rows in one statement (see the sketch after this list).
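
As a sketch of how the second point could be addressed (assumptions: a fixed batch size of 10,000 rows and the same CDATE cut-off used in the procedure), the delete step could be broken into committed batches:

-- Hypothetical batched delete: removes last month's rows 10,000 at a time
-- so that undo usage and lock duration stay bounded.
DECLARE
  v_cutoff VARCHAR2(20) := to_char(TRUNC(SYSDATE, 'MM'), 'yyyy-mm-dd hh24:mi:ss');
BEGIN
  LOOP
    DELETE FROM operate_log
     WHERE cdate < v_cutoff
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left below the cut-off
    COMMIT;                      -- commit each batch
  END LOOP;
  COMMIT;
END;
/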

Common Questions

  1. After the historical data is migrated, how does the application query it?
    The application must be modified to match the table-splitting scheme. Use the query time range passed in through the interface to decide whether the SQL issued by the application needs to span the two databases: if the range falls entirely within the production database's retention period, execute SQL plan 1; if it spans both the production retention period and the history database, execute SQL plan 2; if it falls entirely outside the production retention period, execute SQL plan 3 (see the sketch below).
Complete data = production data + history data
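
One possible shape for "SQL plan 2" (a query whose time range spans both databases) is a UNION ALL over a database link; the link name HISTDB_LINK and the monthly table OPERATE_LOG_202004 below are assumptions for illustration only:

-- Hypothetical cross-database query: production data plus history data.
-- Assumes a database link HISTDB_LINK pointing at the history database.
SELECT str01, cdate
  FROM operate_log                      -- production: current retention period
 WHERE cdate >= '2020-05-01 00:00:00'
UNION ALL
SELECT str01, cdate
  FROM operate_log_202004@histdb_link   -- history: last month's table
 WHERE cdate >= '2020-04-15 00:00:00';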


Origin blog.csdn.net/weixin_38623994/article/details/106408434