Several factors affect the performance of MySQL.
In general, the main ones are:
Slow SQL (the focus), host hardware resources (CPU, memory, disk I/O, etc.), network traffic, and so on.
-
Ultra-high QPS and TPS
QPS (Queries Per Second, the number of queries handled per second): suppose one SQL statement takes 10 ms to process; then at most 100 can be handled in 1 s, so QPS <= 100. What if a statement takes 100 ms? Then QPS <= 10. It follows that SQL execution efficiency has a major impact on QPS.
TPS (Transactions Per Second, the number of transactions per second; a complete transaction covers three stages: the user sends a request to the server, the server processes it internally, and the server returns the result to the user).
High QPS and TPS indicate a heavily loaded application.
How to calculate QPS and TPS for a MySQL database:

```sql
SHOW GLOBAL STATUS LIKE 'Questions';    -- total statements received
SHOW GLOBAL STATUS LIKE 'Uptime';       -- server uptime, in seconds
-- QPS = Questions / Uptime

SHOW GLOBAL STATUS LIKE 'Com_commit';
SHOW GLOBAL STATUS LIKE 'Com_rollback';
-- TPS = (Com_commit + Com_rollback) / Uptime
```
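These counters can be turned into rates in any client language. Below is a minimal Python sketch: the status values are passed in as plain dicts (fetching them via `SHOW GLOBAL STATUS` with your MySQL client library is assumed, not shown). Sampling twice and dividing the delta by the interval gives the current rate, which is usually more useful than the lifetime average `Questions / Uptime`.

```python
def qps_tps(prev, curr, interval_s):
    """Compute QPS/TPS from two samples of SHOW GLOBAL STATUS counters.

    prev/curr: dicts holding 'Questions', 'Com_commit', 'Com_rollback'.
    interval_s: seconds elapsed between the two samples.
    """
    qps = (curr["Questions"] - prev["Questions"]) / interval_s
    tps = ((curr["Com_commit"] - prev["Com_commit"])
           + (curr["Com_rollback"] - prev["Com_rollback"])) / interval_s
    return qps, tps

# Example: 6000 more queries and 1260 more commits+rollbacks over 60 s
prev = {"Questions": 100_000, "Com_commit": 20_000, "Com_rollback": 500}
curr = {"Questions": 106_000, "Com_commit": 21_200, "Com_rollback": 560}
qps, tps = qps_tps(prev, curr, 60)  # qps = 100.0, tps = 21.0
```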
-
High concurrency and high CPU utilization
High concurrency -> the chance that the database connection pool is exhausted rises sharply (max_connections defaults to 100); beyond that limit, the application starts returning 500-series errors
High CPU utilization -> slow responses, possibly even an outage
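One way to get ahead of connection-pool exhaustion is to alert before the limit is hit. A sketch of such a check, assuming `threads_connected` and `max_connections` have been read from `SHOW GLOBAL STATUS` and `SHOW VARIABLES`; the 0.8 warning ratio is an illustrative threshold, not a MySQL setting.

```python
def connection_headroom(threads_connected, max_connections, warn_ratio=0.8):
    """Return (usage_ratio, alert) for the connection pool.

    Alerts once usage crosses warn_ratio, i.e. before 'Too many
    connections' errors (surfacing as 500s in the app) start occurring.
    """
    ratio = threads_connected / max_connections
    return ratio, ratio >= warn_ratio

# With the default max_connections of 100 mentioned above:
ratio, alert = connection_headroom(85, 100)  # ratio = 0.85, alert = True
```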
-
Disk I/O
Disk I/O performance suddenly drops -> use faster disk devices
Peak periods are predictable -> adjust the schedules of other disk-heavy tasks (backups and other scheduled jobs, etc.) away from them
-
NIC traffic
For example, the "gigabit" in Gigabit Ethernet refers to bits (lowercase b); 1 Byte = 8 bits (bit = lowercase b, Byte = uppercase B).
1000 Mb/s ÷ 8 = 125 MB/s, close to the ~100 MB/s of usable bandwidth we are all familiar with.
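The bit/byte conversion above, as a one-line sketch:

```python
def link_mbps_to_mbytes(mbps):
    """Convert a link speed in megabits/s (lowercase b) to megabytes/s
    (uppercase B): 1 Byte = 8 bits."""
    return mbps / 8

# Gigabit Ethernet: 1000 Mb/s -> 125 MB/s theoretical,
# close to the ~100 MB/s usable figure quoted above.
gigabit = link_mbps_to_mbytes(1000)  # 125.0
```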
If the NIC gets saturated, clients simply cannot reach the database. How do we avoid this?
Usually:
1. Reduce the number of slave nodes, to avoid duplicating the same replication traffic across the bandwidth
2. Make sensible use of multi-level caches, so that mass cache invalidation does not send a flood of requests to the DB
3. Avoid SELECT * queries
4. Separate the business network from the server network, etc.
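Point 2 is commonly implemented cache-aside style. A minimal single-level in-process sketch (a real multi-level setup would layer e.g. a local cache over Redis, with TTLs and stampede protection; `db_lookup` is a hypothetical callable standing in for the actual query):

```python
cache = {}

def get_with_cache(key, db_lookup):
    """Cache-aside read: serve from cache when possible, otherwise fall
    back to the DB and populate the cache, so repeated reads of the same
    key never reach the database again."""
    if key in cache:
        return cache[key]
    value = db_lookup(key)   # the DB is hit only on a miss
    cache[key] = value
    return value

calls = []
def fake_db(key):
    calls.append(key)        # stand-in for a real query
    return key.upper()

get_with_cache("user:1", fake_db)
get_with_cache("user:1", fake_db)  # second read is served from cache
```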
Risks brought by large tables
What counts as a large table? A rough definition, judged from two dimensions, for reference only:
- More than 10 million rows
- Data files larger than 10 GB
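These two thresholds can be checked against `information_schema.TABLES` (its `TABLE_ROWS` and `DATA_LENGTH` columns). A sketch of the flagging logic, with the rows assumed to come from such a query:

```python
def flag_big_tables(tables, row_limit=10_000_000, size_limit=10 * 1024**3):
    """tables: iterable of (name, approx_rows, size_bytes).
    Returns the names exceeding either threshold of the rough
    definition above (10M rows or 10 GB of data)."""
    return [name for name, rows, size in tables
            if rows > row_limit or size > size_limit]

sample = [
    ("orders", 52_000_000,  8 * 1024**3),   # too many rows
    ("logs",    9_000_000, 12 * 1024**3),   # data file too big
    ("users",     500_000,  1 * 1024**3),   # fine
]
big = flag_big_tables(sample)  # ['orders', 'logs']
```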
The risks
-
Impact on queries
For example: searching a huge table for data with low selectivity causes heavy disk I/O, generates large numbers of slow queries, and can even hang the database; this needs special attention.
-
Impact of DDL
Building an index takes a very long time. Risk: in versions before MySQL 5.5, creating an index locks the table; 5.5 and later no longer lock the table, but the build still causes master-slave replication delay.
Modifying the table structure requires locking the table for a long time. Risks: 1. master-slave delay 2. normal data operations are blocked
How to handle it?
1. Split into multiple databases and tables (think carefully about how to choose the sharding key, and how to handle cross-partition queries and statistics); proceed with caution!!!
2. Archive historical data (decide the archiving cut-off point, and how to archive efficiently)
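The sharding-key choice in point 1 determines how rows are routed. The simplest scheme is modulo routing on the key, sketched below (shard count and key name are illustrative); note that this is exactly what makes cross-partition queries and statistics hard, since any aggregate has to fan out to every shard.

```python
N_SHARDS = 4  # illustrative shard count

def route(user_id, n_shards=N_SHARDS):
    """Modulo routing: every row for a given user_id always lands on
    the same shard, so single-user queries touch one shard only."""
    return user_id % n_shards

# user 10003 always lives on shard 3; a report across all users
# has to query all 4 shards and merge the results.
shard = route(10003)  # 3
```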
Risks brought by large transactions
Transactions: ACID
Atomicity | Consistency | Isolation | Durability