MySQL LIMIT performance analysis

LIMIT usage

When querying, we often need to return the first few rows, or a few rows from the middle of a result set. How do we do that? Don't worry, MySQL already provides this functionality for us.

SELECT * FROM table LIMIT [offset,] rows | rows OFFSET offset
(that is, LIMIT offset, length)

For example:

SELECT *
FROM table
WHERE condition1 = 0
  AND condition2 = 0
  AND condition3 = -1
  AND condition4 = -1
ORDER BY id ASC
LIMIT 2000 OFFSET 50000
The LIMIT clause can be used to force a SELECT statement to return only a specified number of records.
LIMIT takes one or two numeric arguments, which must be integer constants.
Given two arguments, the first specifies the offset of the first row to return,
and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1).
For compatibility with PostgreSQL, MySQL also supports the syntax LIMIT # OFFSET #.

mysql> SELECT * FROM table LIMIT 5, 10;  -- retrieve rows 6-15
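The two forms are interchangeable; for instance, the LIMIT 5, 10 above can also be written in the PostgreSQL-compatible form:

mysql> SELECT * FROM table LIMIT 10 OFFSET 5;  -- same rows 6-15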

To retrieve all rows from a given offset to the end of the result set, use some very large number as the second parameter (MySQL does not actually accept -1 here; the manual suggests 18446744073709551615): mysql> SELECT * FROM table LIMIT 95, 18446744073709551615;  -- retrieve rows 96 through last

If only one argument is given, it indicates the maximum number of rows to return:

mysql> SELECT * FROM table LIMIT 5;  -- retrieve the first 5 rows

In other words, LIMIT n is equivalent to LIMIT 0, n.
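So the statement above could equally be written as:

mysql> SELECT * FROM table LIMIT 0, 5;  -- same first 5 rows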

Profiling MySQL paging query statements

For paging SQL in MySQL: compared with MSSQL's TOP syntax, MySQL's LIMIT syntax looks a lot more elegant. Using it for paging is the most natural thing.

The most basic paging approach:

SELECT ... FROM ... WHERE ... ORDER BY ... LIMIT ...

When the amount of data is small, such SQL is entirely sufficient; the only thing worth attention is making sure the query uses an index.
For example, if the actual SQL looks like the following, it is better to build a composite index on (category_id, id):

SELECT * FROM articles WHERE category_id = 123 ORDER BY id LIMIT 50, 10
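A minimal sketch of that index (the table name follows the example above, and the index name is made up; adjust to your schema):

-- Composite index: the filter column first, then the sort column, so
-- WHERE category_id = 123 ... ORDER BY id can be satisfied from the index.
ALTER TABLE articles ADD INDEX idx_category_id (category_id, id);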

Subquery paging:
As the data volume grows and the number of pages increases, the SQL for viewing pages further in might look like this:

SELECT * FROM articles WHERE category_id = 123 ORDER BY id LIMIT 10000, 10

In general, the further back the page, the larger the LIMIT offset becomes, and the noticeably slower the statement runs.
At this point, we can improve paging efficiency with a subquery, as follows:

SELECT * FROM articles
WHERE category_id = 123
  AND id >= (SELECT id FROM articles WHERE category_id = 123 ORDER BY id LIMIT 10000, 1)
ORDER BY id LIMIT 10

(The outer query repeats the category filter and the ORDER BY so that the page stays within category 123 and in a defined order.)


JOIN paging method ($page and $pagesize are application-side variables, e.g. in PHP, interpolated into the SQL string):

SELECT * FROM `content` AS t1
JOIN (SELECT id FROM `content` ORDER BY id DESC LIMIT ".(($page-1)*$pagesize).", 1) AS t2
WHERE t1.id <= t2.id ORDER BY t1.id DESC LIMIT $pagesize;
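For instance, with $page = 501 and $pagesize = 20 (so the offset is 500 * 20 = 10000), the generated SQL would be:

SELECT * FROM `content` AS t1
JOIN (SELECT id FROM `content` ORDER BY id DESC LIMIT 10000, 1) AS t2
WHERE t1.id <= t2.id ORDER BY t1.id DESC LIMIT 20;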

In my tests, JOIN paging and subquery paging perform at basically the same level; the time consumed by each is essentially the same.

Why is this? Because the subquery is done on the index, while the ordinary query is done on the data file; generally speaking, the index file is much smaller than the data file, so operating on it is more efficient.
In practice, you can handle paging with a strategy-pattern approach: for example, if the requested page number is below one hundred, use the basic paging method; above one hundred, use subquery paging.
When a MySQL table holds a large amount of data, paging with LIMIT suffers from very serious performance problems.

Query 30 records starting after the first 1,000,000:

SQL code 1 (average time 6.6 seconds):
SELECT * FROM `cdb_posts` ORDER BY pid LIMIT 1000000, 30

SQL code 2 (average time 0.6 seconds):
SELECT * FROM `cdb_posts` WHERE pid >= (SELECT pid FROM `cdb_posts` ORDER BY pid LIMIT 1000000, 1) LIMIT 30

Because the first statement fetches all fields, it has to walk across a large block of data and pull it out, whereas the second essentially seeks straight to the position via the indexed field and then fetches only the corresponding rows, which naturally improves efficiency greatly. This is the LIMIT optimization: instead of using LIMIT with a large offset directly, first obtain the id at the offset position, then fetch the data with LIMIT size.
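In generic form (the table name t and column name id here are placeholders), the idea looks like this:

-- Seek to the boundary id using only the index, then fetch one page of full rows.
SELECT * FROM t
WHERE id >= (SELECT id FROM t ORDER BY id LIMIT 1000000, 1)
ORDER BY id
LIMIT 30;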

As you can see, the further back the page, the larger the LIMIT offset, and the more obvious the speed gap between the two statements becomes.

Optimization idea: avoid scanning too many records when paging over large data sets.

To keep the indexed column continuous, an auto-increment field can be added to each table and indexed.
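A sketch of that idea (the column and index names are made up, and it assumes the table does not already have an AUTO_INCREMENT column, since MySQL allows only one per table and it must be part of a key):

ALTER TABLE articles
  ADD COLUMN seq INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- hypothetical continuous paging column
  ADD UNIQUE KEY idx_seq (seq);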

Source: www.cnblogs.com/liyanyan665/p/11183015.html