MySQL performance optimization: querying the last ten rows of a 14-million-row table (down to 0.036s)

While investigating an issue, I found a MySQL table holding about 14 million rows, growing by roughly 400,000 to 500,000 rows per day.

Then a scenario occurred to me: what if I want to fetch the last ten rows out of these 14 million? (This is the classic deep-pagination problem.)

In the true engineering spirit of solving problems when they exist, and inventing problems to solve when they don't, let's dig in.

Squeezing out performance

Let's start with the regular query (to keep the comparison fair, every query here uses select *).
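For reference, a sketch of the baseline query, with table_name standing in for the real (confidential) table:

-- Regular deep pagination: MySQL reads and throws away the first 14,000,000 rows
-- before returning the 10 we actually want.
SELECT * FROM table_name LIMIT 14000000, 10;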
Good grief, it took 19s. Combine that with a few complex conditions and you're easily past 30s. Who can put up with that?

Analyzing the cause

No need to think hard about it: this SQL obviously isn't using an index.
Look at the type column: ALL means a full table scan.
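To check this yourself, it's enough to prefix the query with EXPLAIN (same placeholder table name):

-- type = ALL in the output means MySQL scans the whole table for this query
EXPLAIN SELECT * FROM table_name LIMIT 14000000, 10;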

Trying to optimize

Since no index is being used, let's make it use one. Looking at the table structure, only one field has an index on it; I won't say which one (trade secret), so I'll just use the primary key directly.

select * still doesn't hit the index, but select id does, and the time drops dramatically.
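A sketch of the id-only version (placeholder table name again); since id is the primary key, this can be answered from the index alone:

-- Only the primary key is touched, so this is far cheaper than select *
SELECT id FROM table_name LIMIT 14000000, 10;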
The catch: I want all the fields, so querying only the id isn't enough.

So here's the idea: first fetch just the ids, then use those ids to pull the remaining fields. Reliable~
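Written out, it's the same query that appears in the summary below (one of several equivalent ways to phrase it):

-- Step 1 (subquery): locate the 10 ids using only the primary key index
-- Step 2 (outer query): fetch the full rows for just those 10 ids
SELECT * FROM table_name a,
       (SELECT id FROM table_name LIMIT 14000000, 10) b
WHERE a.id = b.id;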

At this point, the original 19s SQL has been cut down to 3s. Nice.

Let's look at the execution plan; we should understand why it got faster.
Check the key columns: type, key, Extra. Not perfect, but decent enough; after all, we're not DBAs and our SQL skills are limited.

By the way, a quick primer on reading this execution plan. Look at the id column: 1, 1, 2. The execution order is third row, first row, second row. Remember the rule: when ids differ, the larger id runs first; when ids are the same, rows run top to bottom.
So the ALL in the type column of the first row does not mean a full scan of the base table; it means the entire result set of the subquery was scanned, and since that's only 10 rows, it's perfectly reasonable.
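To reproduce the plan from the screenshot, run EXPLAIN on the rewritten query; the comments below only restate what is described above, since the exact output depends on your table and MySQL version:

EXPLAIN
SELECT * FROM table_name a,
       (SELECT id FROM table_name LIMIT 14000000, 10) b
WHERE a.id = b.id;
-- the id=2 row (the derived subquery) executes first,
-- then the two id=1 rows execute top to bottom;
-- ALL on the derived-table row only scans the 10-row subquery result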

For a detailed breakdown of execution plans, go read a dedicated article on MySQL EXPLAIN attribute analysis.

Deep pagination over 14 million rows in 3 seconds is not bad. But can we optimize further?

We can.

Let me teach you one last trick, the kind that has the boss slapping his wheelchair with excitement when he sees it.
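The trick is the last query in the summary: assuming an auto-increment primary key with no gaps, skip the offset entirely and seek by id:

-- Jump straight to the target id range via the primary key,
-- instead of scanning and discarding 14,000,000 rows
SELECT * FROM table_name WHERE id > 14000000 LIMIT 10;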

The performance is ferocious, but the constraints are severe. First, you need an auto-increment id, and the id sequence must have no gaps, which puts a high maintenance burden on the table: every time you delete data you have to tidy up the ids. If your data volume is really that large, honestly, go with Elasticsearch or ClickHouse, or just ask your users to live with slower queries.

Summary

-- Regular pagination
SELECT * FROM table_name LIMIT 14000000, 10;  -- takes 19.426s

-- Fetch the ids first (there are many ways to write this; pick your favorite)
SELECT * FROM table_name a, (SELECT id FROM table_name LIMIT 14000000, 10) b WHERE a.id = b.id;  -- takes 3.068s

-- If your table has an auto-increment id (with no gaps), write it like this and efficiency takes off
SELECT * FROM table_name WHERE id > 14000000 LIMIT 10;  -- takes 0.036s

end


ok i'm done


Original post: blog.csdn.net/qq_33709582/article/details/121619889