Data relocation services

Service Introduction
The goal of data relocation is to migrate data rapidly and in an orderly fashion between two storage devices, with minimal interruption to storage services, while guaranteeing data integrity, availability, and consistency.
The main customer scenarios are: setting up a test environment from a production system, copying data from an internal network to an external network, and upgrading database server hardware. Depending on the volume of data to be migrated and the system architecture, different migration methods can be adopted.
Service characteristics
Data verification 
Data verification uses data grading and filtering: records are classified by level into different intermediate databases. In this system we divide the data into three levels: discarded data, data to be adjusted, and convertible data. Discarded data is data whose presence wastes system resources and would impair the future operation of the system, such as duplicated personal basic information, duplicated account information, and duplicated payment information. Data to be adjusted is data that would seriously affect the operation of the new system and must be corrected manually before conversion. Convertible data requires no processing: it already meets the conversion requirements, or the new system recommends an adjustment that does not affect operation and can be made after go-live, which saves a great deal of conversion time.
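The three-level grading above can be sketched in code. This is a minimal illustration, not part of any real system: the record structure, the duplicate-key rule for discarded data, and the missing-value rule for data to be adjusted are all assumptions chosen to make the idea concrete.

```python
from dataclasses import dataclass

# Hypothetical record: the fields are illustrative only.
@dataclass
class Record:
    key: str       # e.g. a personal ID or account number
    payload: dict  # remaining attributes of the record

def grade_records(records):
    """Split source records into the three levels described above:
    discarded (duplicates), to-adjust (incomplete, needs manual work),
    and convertible (loads into the new system as-is)."""
    seen = set()
    discarded, to_adjust, convertible = [], [], []
    for rec in records:
        if rec.key in seen:
            # Duplicate basic/account/payment information -> discard.
            discarded.append(rec)
        elif any(v is None for v in rec.payload.values()):
            # Incomplete data must be fixed by hand before conversion.
            to_adjust.append(rec)
        else:
            # Clean data goes straight to the convertible intermediate DB.
            convertible.append(rec)
        seen.add(rec.key)
    return discarded, to_adjust, convertible
```

Each list would then be written to its own intermediate database, so that only the "to adjust" set needs human attention.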
Data sorting 
Data sorting turns the original system's data into data that the conversion program can recognize. It has roughly two stages: in the first stage, the different types of source data are collected and backed up into a unified database; in the second stage, the original data is organized and classified into different intermediate databases as required, providing the intermediate data for conversion.
To ensure the integrity of the original data during collation, we first back up the collected original data. The backup serves two purposes: it unifies the data in one database, which simplifies conversion, and it provides a reference for tracing data back to its source later.
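The first stage, copying source tables verbatim into a unified backup database, might look like the sketch below. It assumes SQLite purely for illustration; the table name and file paths are hypothetical, and a real migration would use the source system's own dump facilities.

```python
import sqlite3

def backup_table(src_path, dst_path, table):
    """Stage 1 sketch: copy one source table verbatim into the unified
    backup database, so later sorting always has an untouched copy."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    # Recreate the table in the backup DB with the same DDL as the source.
    ddl = src.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
        (table,),
    ).fetchone()[0]
    dst.execute(ddl)
    # Copy every row unchanged.
    cur = src.execute(f"SELECT * FROM {table}")
    rows = cur.fetchall()
    placeholders = ",".join("?" * len(cur.description))
    dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()
    src.close()
    dst.close()
```

Keeping the backup byte-for-byte identical to the source is what later makes it a trustworthy reference for tracing data.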
Tools for data collation
Data sorting is arduous and involves a large volume of data; it cannot be completed by manual inspection, so dedicated data sorting tools must be written. These include data sorting tools and data correction tools. The sorting tool classifies the data in the original backup database into the different intermediate databases; the correction tool gives the customer's staff a friendly, convenient interface for completing and correcting erroneous data.
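The correction tool's core step, applying user-supplied fixes to the records flagged for manual adjustment, can be sketched as follows. The dictionary-based records and the `corrections` mapping keyed by record id are illustrative assumptions, not a real tool's data model.

```python
def apply_corrections(to_adjust, corrections):
    """Hypothetical correction step: merge user-supplied fixes (keyed by
    record id) into flagged records, and keep separating out anything
    that is still incomplete after the fix."""
    fixed, still_bad = [], []
    for rec in to_adjust:
        patch = corrections.get(rec["id"])
        if patch:
            rec = {**rec, **patch}  # user's fix overrides the bad fields
        if all(v is not None for v in rec.values()):
            fixed.append(rec)       # now eligible for conversion
        else:
            still_bad.append(rec)   # goes back to the user for more work
    return fixed, still_bad
```

Records in `fixed` can be promoted to the convertible intermediate database; `still_bad` stays in the correction queue.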
Using the intermediate library as a bridge
Because the database structures of the original system and the new system may differ, using the intermediate library as the bridge between old and new data is essential for establishing a correspondence between the two systems. If business staff ever doubt a converted value in the new system, they can trace the original data through the associations kept in the intermediate library.
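One way to picture the bridge: each intermediate row keeps both the old-system key and the new-system key, so a questioned value can be traced in either direction. The field names and ids below are invented for illustration.

```python
# Hypothetical three-level mapping: original -> intermediate -> new system.
# Each intermediate row keeps both keys so data can be traced either way.
intermediate = [
    {"mid_id": 101, "old_id": "A-1", "new_id": 5001},
    {"mid_id": 102, "old_id": "A-2", "new_id": 5002},
]

def trace_to_original(new_id):
    """Given a record id in the new system, return the original-system id
    via the intermediate library, for audit when a conversion is doubted."""
    for row in intermediate:
        if row["new_id"] == new_id:
            return row["old_id"]
    return None  # no intermediate row: the record was not converted
```

In a real migration this lookup would be an indexed join on the intermediate database rather than a linear scan.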
UCACHE Disaster Recovery Cloud is a cloud service product that makes server data migration straightforward. It covers public cloud, virtual, physical, private cloud, and hybrid cloud environments, providing data-level and application-level scheduled backup, differential backup, and selective content recovery.
UCache Disaster Recovery Cloud supports complex IT environments, including data backup/recovery and data migration for scenarios such as SAP HANA, Hadoop, KVM, MySQL, VMware, IBM DB2, and Oracle. It supports rapid recovery of massive volumes of core business data, greatly shortening the time required for recovery and giving users a second-level RPO and minute-level RTO experience.

Origin blog.51cto.com/14787952/2487431