Seven Days in the Making: The Strongest MySQL Optimization Summary Ever. Optimization Made So Easy!

I. Overview

1. Why optimize

  • Applications are often bottlenecked by the throughput and processing speed of the database
  • As the application is used, the data in the database gradually grows, increasing the processing load on the database
  • Relational databases store data on disk, which is slow to access compared with data held in memory

2. How to optimize

  • At the table and field design stage, plan for efficient storage and computation
  • Use the optimizations the database itself provides, such as indexes
  • Scale out: master-slave replication, read/write splitting, load balancing, and high availability
  • Optimize individual SQL statements (usually of limited benefit)

II. Field design

1. Typical choices

①. When precision is required

  • use decimal
  • or convert the decimal to an integer (e.g. store cents instead of dollars)

②. Store strings as integers when possible (e.g. IP addresses)

  • inet_aton('ip')
  • inet_ntoa(num)
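A minimal sketch of storing IPv4 addresses as integers (the `logs` table and its columns are illustrative, not from the original):

```sql
-- Store the IP as an unsigned 4-byte integer instead of a 15-character string
CREATE TABLE logs (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    ip INT UNSIGNED NOT NULL
);

INSERT INTO logs (ip) VALUES (INET_ATON('192.168.0.1'));

-- Convert back to dotted notation when reading
SELECT INET_NTOA(ip) AS ip FROM logs;
```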

③. Use not null whenever possible

  • null makes numeric computation and comparison logic more complex

④. Choosing between fixed-length and variable-length types

  • Long numeric data can use decimal
  • char is fixed length (content beyond the declared length is truncated); varchar is variable length and uses extra space to store the length of the content in addition to the data itself; text is for storing long content

⑤. Do not use too many fields; add a field only when necessary; make field names self-explanatory; fields can be reserved for future expansion

2. Normal forms

① First normal form: fields are atomic (columns in a relational database satisfy this by default)

② Second normal form: eliminate partial dependence on the primary key (a table may have a composite primary key); use a surrogate primary key unrelated to the business data

③ Third normal form: eliminate transitive dependence on the primary key; aim for high cohesion, e.g. a product table can be split into two tables, a product summary table and a product detail table.
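A small sketch of the third-normal-form split above (all table and column names are illustrative, not from the original):

```sql
-- Summary table: the fields shown in listings
CREATE TABLE product_summary (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(64) NOT NULL,
    price DECIMAL(10, 2) NOT NULL
);

-- Detail table: rows correspond one-to-one with the summary table
CREATE TABLE product_detail (
    product_id INT UNSIGNED PRIMARY KEY,
    description TEXT,
    FOREIGN KEY (product_id) REFERENCES product_summary (id)
);
```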

III. Choosing a storage engine (MyISAM vs. InnoDB)

1. Functional differences

InnoDB supports transactions, row-level locking, and foreign keys; MyISAM does not.

2. Storage Differences

① Storage: MyISAM stores data and indexes in separate files (.MYD and .MYI), while InnoDB stores them together (the table definition lives in .frm).

② Moving tables: a MyISAM table can be moved by copying its .MYI and .MYD files along with the table definition; an InnoDB table has additional associated files, so this is not enough.

③ Space fragmentation: when MyISAM deletes data, fragments remain (the table file keeps its occupied space), so the table needs periodic manual optimization with optimize table table_name; InnoDB does not.

④ Ordered storage: InnoDB inserts data ordered by primary key, so the rows in the table are ordered by primary key by default (writes cost more because the insertion point must be found in the B+ tree, but lookups are efficient).

3. Which to choose

①. Read-heavy, write-light: MyISAM, e.g. news or blog sites

②. Both read- and write-heavy: InnoDB

  • supports transactions and foreign keys, ensuring data consistency and integrity
  • strong concurrency (row-level locks)

IV. Indexes

1. What is the index

Identifying keywords extracted from the data, together with the mapping from each keyword to its corresponding rows

2. Type

①. Primary key index (primary key): the keyword must be unique and not null

②. Ordinary index (key): only requires the index to be ordered by its first field

③. Unique index (unique key): the keyword must be unique

④. Full-text index fulltext key (does not support Chinese)

3. Index management syntax

①. Viewing indexes

  • show create table student
  • desc student

②. Creating indexes

  • Specify at table creation, e.g. first_name varchar(16), last_name varchar(16), key name (first_name, last_name)
  • Alter the table structure: alter table student add key/unique key/primary key/fulltext key key_name (first_name, last_name)

③. Deleting indexes

  • alter table student drop key key_name
  • To drop a primary key index on an auto_increment column, first remove auto_increment with alter ... modify, then drop the primary key
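The index-management statements above, collected into one sketch (the `student` table definition is assumed):

```sql
CREATE TABLE student (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    first_name VARCHAR(16),
    last_name VARCHAR(16),
    KEY name (first_name, last_name)   -- composite index declared at creation
);

-- Inspect the indexes
SHOW CREATE TABLE student;

-- Add and drop a secondary index
ALTER TABLE student ADD UNIQUE KEY uk_name (first_name, last_name);
ALTER TABLE student DROP KEY uk_name;

-- Dropping the primary key: remove auto_increment first, then the key itself
ALTER TABLE student MODIFY id INT UNSIGNED NOT NULL;
ALTER TABLE student DROP PRIMARY KEY;
```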

4. Execution plans: explain

Analyzes whether a SQL statement uses an index, and which index it uses

5. Scenarios where an index is used

  • where: if the searched field has an index, the index will be used
  • order by: if the sort field has an index (and the index is ordered), rows can be fetched directly in index order, which is far more efficient than reading all rows and re-sorting
  • join: if the field in the join ... on condition has an index, the lookup becomes efficient
  • covering index: the lookup is satisfied from the index alone, without reading the row data
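A quick sketch of checking index usage with explain (assumes the `student` table with the composite index name(first_name, last_name) from the previous section):

```sql
-- The key column shows which index was chosen;
-- Extra: Using index indicates a covering index (no row data read)
EXPLAIN SELECT first_name, last_name
FROM student
WHERE first_name = 'Li';
```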

6. Syntax details

Even with an index in place, some query forms may not use it

  • where id + 1 = ? should be rewritten as where id = ? - 1, which guarantees that the indexed field appears on its own
  • In like fuzzy matching, do not put the wildcard before the keyword: '%keyword' will not use the index, while 'keyword%' will
  • With or, the index is used only when the fields on both sides of the condition are indexed; if either side lacks an index, a full table scan is done
  • Status values such as gender: one keyword corresponds to many rows, and using the index can be less efficient than a full table scan
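Two of the rewrites above as a sketch (the `student` table is assumed):

```sql
-- Bad: the expression on the indexed column prevents index use
SELECT * FROM student WHERE id + 1 = 10;
-- Good: the indexed column stands alone
SELECT * FROM student WHERE id = 10 - 1;

-- Bad: a leading wildcard cannot use the index
SELECT * FROM student WHERE first_name LIKE '%Li';
-- Good: a trailing wildcard can
SELECT * FROM student WHERE first_name LIKE 'Li%';
```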

7. Index storage structures

  • btree: a multiway search tree; the keywords within a node are ordered, with child pointers between the keywords; search efficiency is log(nodeSize, N), where nodeSize is the number of keywords per node (which depends on keyword length and node size)

  • b+tree: an upgrade of the btree; the data is stored together with the keywords, saving the extra lookup from keyword to the data's storage location

V. Query cache

1. Caches the result of a select query: the key is the SQL statement, the value is the query result

If two SQL statements are functionally identical but differ even slightly (e.g. an extra space), the keys will not match

2. Enabling it

query_cache_type

  • 0: disabled
  • 1: enabled; every select is cached by default; use select sql_no_cache to skip caching for a single statement
  • 2: enabled; nothing is cached by default; use select sql_cache to cache a chosen statement
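A sketch of the per-statement cache hints (assumes query_cache_type has been set in the server configuration):

```sql
-- With query_cache_type = 1: opt a statement out of the cache
SELECT SQL_NO_CACHE * FROM student WHERE id = 1;

-- With query_cache_type = 2: opt a statement in
SELECT SQL_CACHE * FROM student WHERE id = 1;
```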

3. Setting the cache size

query_cache_size

4. Resetting the cache

reset query cache

5. Cache invalidation

Any change to a table's data invalidates all cached results based on that table (management is table-level)

VI. Partitioning

1. By default one table corresponds to one set of storage files, but with a large amount of data (typically tens of millions of rows) the data needs to be spread across multiple sets of storage files to keep each file efficient

2. Syntax: partition by partition_function (partition_field) (partitioning is logical)

  • hash: the partition field is an integer
  • key: the partition field is a string
  • range: based on comparison; only supports values less than
  • list: based on a list of status values
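A sketch of a range-partitioned table (the `article` table and the year boundaries are illustrative):

```sql
CREATE TABLE article (
    id INT UNSIGNED NOT NULL,
    created_year SMALLINT NOT NULL,
    title VARCHAR(128),
    PRIMARY KEY (id, created_year)  -- the partition field must be part of every unique key
)
PARTITION BY RANGE (created_year) (
    PARTITION p2018 VALUES LESS THAN (2019),
    PARTITION p2019 VALUES LESS THAN (2020),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```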

3. Managing partitions

  • At table creation: create table article (...) partition by key (title) partitions 10
  • Altering the table structure: alter table article add partition (...) (adds logical partitions)

4. The partition field should be a commonly queried field; otherwise partitioning is of little benefit

VII. Horizontal and vertical splitting

1. Horizontal

Store data of the same type in multiple tables with the same structure

A separate table guarantees the uniqueness of id

2. Vertical

Split the fields into multiple tables, whose rows correspond to one another one-to-one

VIII. Clusters

1. Master-slave replication

①. First, manually synchronize the slave with the master

  • stop slave
  • lock the master with a read lock, export its data, and import it on the slave
  • run show master status (while the read lock is held) and record File and Position
  • on the slave: change master to ...

②. start slave, then check that Slave_IO_Running and Slave_SQL_Running are both YES

③. The master can read and write, but the slave can only read; if replication fails or falls out of sync, it must be re-synchronized manually

④. mysqlreplicate can configure master-slave replication quickly
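The manual setup above as a sketch, to be run on the slave (the host, credentials, and the File/Position values recorded from show master status are placeholders):

```sql
STOP SLAVE;

-- Point the slave at the master, using the recorded File and Position
CHANGE MASTER TO
    MASTER_HOST = '192.168.0.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 154;

START SLAVE;

-- Both Slave_IO_Running and Slave_SQL_Running must be YES
SHOW SLAVE STATUS\G
```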

2. Read/write splitting (based on master-slave replication)

①. Using raw connections directly

WriteDatabase provide write connection

ReadDatabase provide read connection

②. Dynamic data source switching with Spring AOP and AspectJ

  • RoutingDataSourceImpl extends AbstractRoutingDataSource and overrides determineDatasource; it is injected into the SqlSessionFactory, with defaultTargetDataSource and targetDataSources configured (the concrete data source is selected by matching the return value of determineDatasource against the configured value-ref keys)

  • DatasourceAspect: an aspect class with a pointcut @Pointcut aspect() covering all methods of all DAO classes, and a pre-advice @Before("aspect()") before(JoinPoint point); it obtains the method name via point.getSignature().getName(), matches its prefix against a METHOD_TYPE_MAP, and binds write/read to the current thread (the thread that is about to execute the intercepted DAO method)

  • DatasourceHandler: uses a ThreadLocal to bind the data source chosen in the advice to the thread that will execute the method; when the method runs, the data source is retrieved according to the current thread

3. Load Balancing

Algorithms

  • round robin
  • weighted round robin
  • by load

4. High Availability

A standby machine provides redundancy for the service

  • Heartbeat
  • Virtual IP
  • Master-slave replication

IX. Typical SQL

1. Online DDL

To avoid long table-level locks

  • copy strategy: copy the table row by row; during the copy, record changes to the old table in a log and re-execute that SQL on the new table
  • MySQL 5.6 online DDL, which greatly reduces lock time

2. Bulk import

①. Disable indexes and constraints, and rebuild them after the import completes

②. Avoid a transaction per statement

To guarantee consistency, InnoDB wraps every SQL statement in its own transaction by default (which also costs time); before a bulk import, open a transaction manually and commit it manually once the import completes.
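The two points above as a sketch (the table name and data are illustrative; disable keys applies to MyISAM nonunique indexes, while the session flags and the explicit transaction are what matter for InnoDB):

```sql
-- 1. Relax index maintenance and constraint checks during the import
ALTER TABLE article DISABLE KEYS;
SET unique_checks = 0;
SET foreign_key_checks = 0;

-- 2. One explicit transaction instead of one per statement
START TRANSACTION;
INSERT INTO article (id, title) VALUES (1, 'a');
INSERT INTO article (id, title) VALUES (2, 'b');
-- ... many more rows ...
COMMIT;

-- Restore checks and rebuild indexes
SET foreign_key_checks = 1;
SET unique_checks = 1;
ALTER TABLE article ENABLE KEYS;
```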

3. limit offset, rows

Avoid large offsets (i.e. high page numbers)

offset skips rows only after they have been read; instead of skipping with offset, filter the unwanted rows out with a condition so the query starts where it needs to
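A keyset-pagination sketch of the idea above (assumes an indexed auto-increment id; the boundary value 9990 presumes contiguous ids and would normally be the last id of the previous page):

```sql
-- Page 1000 with offset: reads and discards 9990 rows first
SELECT id, title FROM article ORDER BY id LIMIT 9990, 10;

-- Same page via filtering: seeks directly to the boundary in the index
SELECT id, title FROM article WHERE id > 9990 ORDER BY id LIMIT 10;
```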

4. select *

Query only the fields you actually need, to reduce network transfer time (small effect)

5. order by rand()

Generates a random number for every row and sorts by those random numbers; instead, generate a random primary key value in the application and fetch by it
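One common substitute, sketched (assumes a dense auto-increment id; gaps in the id sequence would bias the result and may need handling in the application):

```sql
-- Expensive: a random number per row, then a full sort
SELECT * FROM article ORDER BY RAND() LIMIT 1;

-- Cheaper: pick one random id value, then seek to it via the primary key
SELECT a.*
FROM article a
JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM article)) AS rid) r
    ON a.id >= r.rid
ORDER BY a.id
LIMIT 1;
```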

6. limit 1

If you are certain that only one row will be retrieved, add limit 1 so the scan can stop at the first match

X. Slow query log

1. Locates SQL with poor query performance, so optimization can be targeted

2. Configuration Item

  • enable it: slow_query_log
  • the time threshold: long_query_time

3. The slow query log records SQL whose execution time exceeds the threshold; it is stored as xxx-slow.log under datadir
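Enabling it at runtime, as a sketch (the 1-second threshold is just an example):

```sql
SET GLOBAL slow_query_log = 1;   -- turn the log on
SET GLOBAL long_query_time = 1;  -- log statements slower than 1 second

-- Verify the settings and the log file location
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';
```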

XI. Profiling

1. Automatically records, for each SQL statement, the total time spent and the detailed time of each execution step

2. Configuration item

enable profiling

3. Viewing the log: show profiles

4. Viewing the detailed time of each step of a specific SQL statement

show profile for query Query_ID
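A profiling session, sketched:

```sql
SET profiling = 1;             -- enable profiling for this session

SELECT COUNT(*) FROM student;  -- any statement to profile

SHOW PROFILES;                 -- lists recent statements with Query_ID and duration
SHOW PROFILE FOR QUERY 1;      -- per-step timing for Query_ID 1
```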

XII. Typical server configuration

1. max_connections: the maximum number of client connections

2. table_open_cache: a cache of table file handles, speeding up reads and writes of table files

3. key_buffer_size: the index cache size

4. innodb_buffer_pool_size: the InnoDB buffer pool size, backing the various features InnoDB provides

5. innodb_file_per_table: one .ibd file per table; otherwise tables share the InnoDB system tablespace
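The variables above collected into a my.cnf fragment (every value here is a placeholder to be tuned for the machine, not a recommendation):

```ini
[mysqld]
max_connections         = 500
table_open_cache        = 2000
key_buffer_size         = 256M
innodb_buffer_pool_size = 4G
innodb_file_per_table   = 1
```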

XIII. Stress-testing tool mysqlslap

1. Automatically generates SQL and executes it to test performance

mysqlslap --auto-generate-sql -uroot -proot

2. concurrent test

mysqlslap --auto-generate-sql --concurrency=100 -uroot -proot simulates 100 clients executing the generated SQL

3. Several rounds of testing, averaging the response

mysqlslap --auto-generate-sql --concurrency=100 --iterations=3 -uroot -proot simulates 100 clients executing the generated SQL, for 3 rounds

4. Storage Engine Test

  • --engine=innodb: mysqlslap --auto-generate-sql --concurrency=100 --iterations=3 --engine=innodb -uroot -proot simulates 100 clients executing the generated SQL for 3 rounds, measuring InnoDB's processing performance.

  • --engine=myisam: mysqlslap --auto-generate-sql --concurrency=100 --iterations=3 --engine=myisam -uroot -proot simulates 100 clients executing the generated SQL for 3 rounds, measuring MyISAM's processing performance.

Origin blog.51cto.com/14637764/2466432