Optimizing MySQL pagination performance on a single table with one million rows

Background:

I run a website whose main table has grown to about one million rows. As a result, data access has become very slow, Googlebot regularly reports timeouts, and pages deep in the pagination are especially sluggish.

 

Test environment:

Assuming familiarity with basic SQL, let's first look at the basic information of the table under test:

USE information_schema;
SELECT * FROM TABLES WHERE TABLE_SCHEMA = 'dbname' AND TABLE_NAME = 'product';

 

Query result:

 

 

From the query result we can read the table's basic information:

 

Table rows (TABLE_ROWS): 866,633
Average row length (AVG_ROW_LENGTH): 5,133 bytes
Table size (DATA_LENGTH): 4,448,700,632 bytes

 

The row length and table size are reported in bytes; converting, we get:

 

Average row length: about 5 KB
Total table size: about 4.1 GB
The table's field types include varchar, datetime, text, etc.; id is the primary key.

 

Tests

 

1. Paging directly with a LIMIT start, count statement, which is what my program originally used:

 

select * from product limit start, count

 

When the starting offset is small, there is no performance problem. Let's look at the execution times when paging starts from offsets 10, 100, 1000, and 10000 (fetching 20 rows per page):

 

select * from product limit 10, 20      0.016 seconds

select * from product limit 100, 20     0.016 seconds

select * from product limit 1000, 20    0.047 seconds

select * from product limit 10000, 20   0.094 seconds
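The pattern above can be reproduced in miniature. Below is a sketch using Python's sqlite3 module as a stand-in (an assumption on my part; the original tests ran on MySQL, but OFFSET semantics are the same): the engine must walk past `offset` rows before returning `count`, which is why cost grows with the offset. The schema and row count here are illustrative.

```python
import sqlite3

# Small stand-in for the product table (hypothetical schema, 100k rows
# instead of 866k so the demo stays fast).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO product (id, name) VALUES (?, ?)",
    ((i, f"item-{i}") for i in range(1, 100_001)),
)

def page(offset, count=20):
    # Equivalent to MySQL's: SELECT * FROM product LIMIT offset, count
    # The engine scans and discards `offset` rows before emitting `count`.
    return conn.execute(
        "SELECT id, name FROM product ORDER BY id LIMIT ? OFFSET ?",
        (count, offset),
    ).fetchall()

print(page(10)[0])      # first row of a shallow page
print(page(90_000)[0])  # first row of a deep page
```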

 

As the starting offset increases, the execution time increases with it, which shows that LIMIT performance depends heavily on the starting offset. Now let's move the starting offset to 400,000 (roughly the middle of the table):

 

select * from product limit 400000, 20   3.229 seconds

 

Now let's look at the time for the last page:

 

select * from product limit 866613, 20   37.44 seconds

 

No wonder search engines report timeouts when crawling our pages. With the largest page number taking this long, the response time is clearly intolerable.

 

From this we can draw two conclusions:

1) The query time of a LIMIT statement grows in proportion to the starting offset.

2) MySQL's LIMIT statement is very convenient, but it is not suitable for direct use on tables with many rows.

 

2. Optimizing LIMIT pagination performance

 

Using a covering index to speed up pagination queries

 

As is well known, if a query touches only indexed columns (a covering index), it runs very quickly.

 

This is because an index lookup uses optimized search algorithms, the index data is compact, and the query never needs to fetch the full row data, which saves a lot of time. In addition, MySQL caches indexes, so under high concurrency the cache is put to better use.

 

In our example, the id field is the primary key, so it is naturally covered by the primary-key index. Now let's see how a covering-index query performs:

 

Query the last page of data selecting only the id column (so the primary-key index covers the query):

 

select id from product limit 866613, 20   0.2 seconds

 

Compared with the 37.44 seconds needed to query all columns, this is roughly a 100x speedup.
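The speedup comes from the index alone satisfying the query. A minimal sketch with sqlite3 (the idea carries over to MySQL's EXPLAIN; the table, column, and index names here are illustrative): when a query reads only indexed columns, the query plan reports a covering-index scan and the table rows are never touched.

```python
import sqlite3

# Illustrative schema; idx_price stands in for any secondary index.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL, descr TEXT)"
)
conn.execute("CREATE INDEX idx_price ON product(price)")

# The query reads only the indexed column, so the plan shows a
# covering-index scan: no lookups into the main table are needed.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT price FROM product ORDER BY price LIMIT 20"
).fetchall()
print(plan)
```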

 

So if we need all the columns, there are two approaches: one uses an id >= subquery (which assumes the pages are ordered by id), the other uses a JOIN. Let's look at both in practice:

 

SELECT * FROM product WHERE id >= (SELECT id FROM product LIMIT 866613, 1) LIMIT 20

 

The query time is 0.2 seconds, a qualitative leap.
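The same deferred-lookup idea can be checked in miniature with sqlite3 (hypothetical small table; row values are illustrative): find the page's first id cheaply via the index, then range-scan from it, and confirm the result matches the plain deep-offset page.

```python
import sqlite3

# Hypothetical miniature table to check the rewrite returns the same page.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 ((i, f"item-{i}") for i in range(1, 1001)))

offset = 600

# Plain deep-offset page: scans and discards 600 rows.
plain = conn.execute(
    "SELECT * FROM product ORDER BY id LIMIT 20 OFFSET ?", (offset,)
).fetchall()

# Deferred lookup: the subquery pages through ids only, then the outer
# query range-scans the full rows starting from that id.
fast = conn.execute(
    """SELECT * FROM product
       WHERE id >= (SELECT id FROM product ORDER BY id LIMIT 1 OFFSET ?)
       ORDER BY id LIMIT 20""",
    (offset,),
).fetchall()

print(plain == fast)
```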

 

The other approach:

 

SELECT * FROM product a JOIN (SELECT id FROM product LIMIT 866613, 20) b ON a.id = b.id

 

The query time is also very short.

 

In fact, both approaches work on the same principle, so the effect is the same.
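The JOIN form can be sketched the same way (again sqlite3, with a hypothetical miniature table): the derived table pages through the narrow id list first, and only the 20 matching wide rows are ever fetched.

```python
import sqlite3

# Hypothetical miniature table; the inner SELECT pages over ids only,
# then the JOIN fetches full rows for just those 20 ids.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 ((i, f"item-{i}") for i in range(1, 1001)))

joined = conn.execute(
    """SELECT a.id, a.name
       FROM product a
       JOIN (SELECT id FROM product ORDER BY id LIMIT 20 OFFSET 600) b
         ON a.id = b.id
       ORDER BY a.id"""
).fetchall()

print(joined[0], len(joined))
```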


Origin blog.csdn.net/qq_37655695/article/details/60776668