Heterogeneous Database 10T Data Migration Solution

1. Migrating from DB2 to Oracle, using the same approach as the single-table 1-billion-row migration plan
1. Choose an intermediate storage server and install both the DB2 client and the Oracle client on it. In this test there was no intermediate server: the export files were generated directly on the DB2 database server, and the import ran directly on the Oracle database server. The export uses DB2's export command, writing the query result to a file while compressing it; the import uses Oracle's sqlldr command, decompressing while loading. One reason is that production only gives you a few hundred gigabytes of intermediate temporary storage, so 10 TB of data must be compressed. Another reason is to reduce network transfer and to allow parallel processing. Look directly at the code:
[The original post shows the export script here as screenshots, which are not reproduced.]
The code above is the export command that generates the files. It uses a named pipe to compress in real time, and it uses the blocking behavior of the pipe's file descriptor to control the number of parallel threads. One possible optimization: since each thread occupies its own pipe, pipes would otherwise have to be created and deleted constantly; instead the pipes can be managed as a stack, with a finished thread pushing its pipe back onto the stack and a new task popping one off. For 10 TB of data the query conditions must be controlled so that each thread handles a bounded amount of data, for example 10 million rows per thread; how many threads run in parallel depends on the server's disk read/write capacity. A hedged sketch of such an export script is shown below.
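Since the original screenshots are unavailable, the following is only a minimal sketch of what such an export script might look like. The database alias, table name, slicing key `ID`, paths, slice size, and thread count are all assumptions, not values from the original post. Parallelism is throttled with a FIFO used as a token semaphore (the blocking-file-descriptor trick described above); for brevity, each worker creates and removes its own data pipe instead of reusing pipes from a stack.

```bash
#!/bin/bash
# Sketch: export table slices from DB2 through named pipes, compressing on the fly.
# All names and sizes below are assumptions for illustration.

DB_NAME="SRCDB"            # DB2 database alias (assumed)
TABLE="BIG_TABLE"          # source table (assumed)
OUT_DIR="/stage/export"    # staging directory with limited space (assumed)
SLICE_ROWS=10000000        # ~10 million rows per thread, as described in the text
TOTAL_SLICES=1000          # depends on the real table size
MAX_JOBS=8                 # tune to the server's disk read/write capacity

# FIFO used as a counting semaphore: reads block when no token is available,
# which is the "blocking pipe file descriptor" parallelism control.
SEM="$OUT_DIR/.sem.$$"
mkfifo "$SEM"
exec 9<>"$SEM"
rm -f "$SEM"
for _ in $(seq "$MAX_JOBS"); do echo >&9; done

export_slice() {
    local slice=$1
    local start=$(( slice * SLICE_ROWS + 1 ))
    local end=$(( (slice + 1) * SLICE_ROWS ))
    local pipe="$OUT_DIR/pipe_$slice"

    mkfifo "$pipe"
    # Reader side: compress whatever db2 writes into the pipe.
    gzip -c < "$pipe" > "$OUT_DIR/slice_$slice.del.gz" &
    local gz_pid=$!

    # Writer side: db2 export writes delimited data into the pipe.
    # ID is an assumed monotonically increasing key used to slice the table;
    # each worker opens its own DB2 connection.
    db2 connect to "$DB_NAME" > /dev/null
    db2 "export to $pipe of del select * from $TABLE where ID between $start and $end"
    db2 terminate > /dev/null

    wait "$gz_pid"
    rm -f "$pipe"
}

slice=0
while [ "$slice" -lt "$TOTAL_SLICES" ]; do
    read -u 9                      # take a token; blocks while MAX_JOBS are running
    {
        export_slice "$slice"
        echo >&9                   # return the token
    } &
    slice=$(( slice + 1 ))
done
wait
```

With this pattern the number of concurrent db2 export processes never exceeds MAX_JOBS, and only the compressed .del.gz files ever touch the staging disk.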
The following is the import code:
[The original post shows the import script here as screenshots, which are not reproduced.]
As in the code above, I limit the import to 5 threads; the connection and progress information that sqlldr prints can be redirected to a file. A hedged sketch of the import side follows.
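Again, the screenshots are unavailable, so this is only a minimal sketch of what the import side might look like. The Oracle connect string, control file, and file layout are assumptions; the only details taken from the text are the limit of 5 parallel loads and redirecting the sqlldr output to a file.

```bash
#!/bin/bash
# Sketch: load the compressed export files into Oracle with sqlldr,
# decompressing through a named pipe so no uncompressed copy hits the disk.
# Connect string, control file, and paths are assumptions for illustration.

ORA_CONN="loader/password@ORCL"              # Oracle credentials (assumed)
CTL_FILE="/stage/import/big_table.ctl"       # sqlldr control file (assumed to exist)
IN_DIR="/stage/export"                       # directory holding the .del.gz exports
MAX_JOBS=5                                   # 5 parallel loads, as in the text

load_file() {
    local gzfile=$1
    local base="${gzfile%.gz}"
    local pipe="$base.pipe"

    mkfifo "$pipe"
    # Decompress into the pipe while sqlldr reads from it.
    gunzip -c "$gzfile" > "$pipe" &
    local gz_pid=$!

    # sqlldr reads the pipe as its data file; its screen output and log
    # are redirected to per-file logs instead of the terminal.
    sqlldr "$ORA_CONN" control="$CTL_FILE" data="$pipe" \
           log="$base.log" bad="$base.bad" > "$base.out" 2>&1

    wait "$gz_pid"
    rm -f "$pipe"
}

for gz in "$IN_DIR"/slice_*.del.gz; do
    load_file "$gz" &
    # Keep at most MAX_JOBS sqlldr processes running at once.
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do
        sleep 1
    done
done
wait
```

If direct-path loading is wanted, sqlldr's direct=true and parallel=true options could be added, but indexes, constraints, and triggers on the target table may then need separate handling.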

Origin blog.csdn.net/u013326684/article/details/103074254