The company runs an SMS business. As data volume grows, nearly 2 million new records arrive per day, all stored in MongoDB, for a total of nearly 320 million rows. Business queries have become slow, so I decided to optimize by moving data older than one week into MySQL.
Options considered internally
- Option 1: Use TiDB for query optimization
Advantages: supports high concurrency and high availability, and scales out horizontally without practical limits.
See details: ( https://pingcap.com/docs-cn/dev/key-features/#%e6%b0%b4%e5%b9%b3%e6%89%a9%e5%b1%95 ).
With small data volumes the gain is not obvious; at the tens-of-millions level queries improve noticeably, but deep pagination queries are still relatively slow.
Disadvantages: high maintenance cost and demanding server configuration requirements.
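Deep pagination is slow because an OFFSET query must walk past and discard all the skipped rows, while keyset ("seek-method") pagination uses an index to jump straight to the last-seen key. A minimal Python sketch simulating both over a list of indexed ids (the data and function names are illustrative, not from the post):

```python
# Sketch: why LIMIT/OFFSET degrades on deep pages, and the keyset alternative.
# Function names and data are illustrative assumptions, not from the post.

def offset_page(rows, offset, limit):
    """OFFSET paging: the database scans and discards `offset` rows first."""
    return rows[offset:offset + limit]  # cost grows with offset depth

def keyset_page(rows, last_seen_id, limit):
    """Keyset paging: WHERE id > last_seen_id ORDER BY id LIMIT n.
    With an index on id, cost stays flat no matter how deep the page is."""
    return [r for r in rows if r > last_seen_id][:limit]

rows = list(range(1, 1001))          # pretend these are indexed ids
deep = offset_page(rows, 900, 10)    # page 91: 900 rows scanned just to skip
same = keyset_page(rows, 900, 10)    # same page, found by seeking past id 900
assert deep == same == list(range(901, 911))
```

In SQL terms the seek version would carry the last id of the previous page in its WHERE clause instead of an OFFSET.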
- Option 2: Oracle, optimizing with "materialized views"
For details, see: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_2001.htm
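As a rough sketch of what this option would look like, the DDL below precomputes a daily aggregate and refreshes it on a schedule. The table and view names (`sms_record`, `mv_sms_daily`) and columns are hypothetical, not from the original post:

```sql
-- Hypothetical example: sms_record / mv_sms_daily are illustrative names.
-- A materialized view stores the query result and refreshes it periodically.
CREATE MATERIALIZED VIEW mv_sms_daily
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1   -- refresh once a day
AS
  SELECT TRUNC(send_time) AS send_day,
         COUNT(*)         AS sms_count
  FROM   sms_record
  GROUP  BY TRUNC(send_time);
```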
- Option 3: MySQL, with horizontal and vertical table sharding (the option adopted)
Advantages: low maintenance cost and a low barrier for the team to maintain; if a move is needed later, MySQL data can be transferred directly into TiDB.
Business requirement: move data older than one week from MongoDB to MySQL, at nearly 3 million rows per day, and over 10 million rows per day during the Double Eleven peak.
MyISAM table optimization: https://blog.csdn.net/qq_31150503/article/details/105450236
Partition optimization: https://blog.csdn.net/qq_31150503/article/details/105451273
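Horizontal sharding needs a routing rule that decides which physical table a record lands in. A minimal sketch that routes by month; the `base_YYYYMM` naming convention is an assumption for illustration, not the post's actual scheme:

```python
from datetime import date

def shard_table(base, day):
    """Horizontal sharding: route a record to a per-month table by its date.
    The base_YYYYMM naming convention here is assumed for illustration."""
    return f"{base}_{day:%Y%m}"

# Records from April 2020 all go to one table; May starts a new one,
# keeping each physical table small enough to query and index quickly.
assert shard_table("sms_record", date(2020, 4, 11)) == "sms_record_202004"
```

Vertical sharding, by contrast, would split rarely-read wide columns (e.g. message bodies) into a side table keyed by the same id.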
Solution:
Use a scheduled task each day to query the data in MongoDB, migrate it, and save it to MySQL.
Implementation plan:
https://blog.csdn.net/qq_31150503/article/details/105451876
Storage scheme:
https://blog.csdn.net/qq_31150503/article/details/105451273