MySQL notes

1. TINYINT stores 1 byte (signed -128 to +127, unsigned 0 to 255). The 2 in TINYINT(2) is only a display width: with ZEROFILL, values shorter than two digits are left-padded with zeros (e.g. 1 -> 01). It does not limit what can be stored.
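A minimal sketch of the display-width behaviour (the table name t_demo is made up here):

```sql
-- ZEROFILL pads the *display*, not the storage; the range is still 0..255.
CREATE TABLE t_demo (
    n TINYINT(2) UNSIGNED ZEROFILL
);
INSERT INTO t_demo VALUES (1), (42), (255);
SELECT n FROM t_demo;   -- displayed as 01, 42, 255
```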

2. When one field in a table stores very large values, split that field into a separate table linked to the original by the primary key (oversized rows slow down retrieval). This vertical splitting (vertical partitioning) improves speed.
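A sketch of such a vertical split (table and column names are invented for illustration):

```sql
-- Move a rarely-read large column out of the hot table, linked 1:1 by primary key.
CREATE TABLE article (
    id    INT PRIMARY KEY,
    title VARCHAR(200)
);
CREATE TABLE article_body (
    article_id INT PRIMARY KEY,   -- same value as article.id
    body       LONGTEXT
);
-- Fetch the large column only when it is actually needed:
SELECT b.body FROM article_body b WHERE b.article_id = 42;
```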

3. A single table should not hold too many rows; split it horizontally. For example, Tencent (TX) splits QQ numbers into tables by city, so a query only needs to hit the table for the relevant city according to the actual business, instead of spending a long time searching one huge table.
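A sketch of city-based horizontal splitting (table names and routing are hypothetical; in practice the application picks the table):

```sql
-- One identical table per city; the application routes each query to one small table.
CREATE TABLE qq_user_beijing  (qq_no BIGINT PRIMARY KEY, nickname VARCHAR(50));
CREATE TABLE qq_user_shanghai (qq_no BIGINT PRIMARY KEY, nickname VARCHAR(50));

-- Query only the shard for the city the business logic identifies:
SELECT nickname FROM qq_user_beijing WHERE qq_no = 10001;
```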

4. When a number is too large even for BIGINT, store it in a VARCHAR column, read it out in segments, and apply a special big-number algorithm to work with the value.

5. The maximum number of MySQL connections (max_connections) defaults to 100 and should be raised in a real environment (typically around 1000), which can support a concurrency limit of roughly 2000 (thanks to static page caching, this can serve on the order of 100,000 users online). Likewise query_cache_size (the query cache) can be tuned to the workload and affects query speed.

6. InnoDB tuning parameters:
                       innodb_additional_mem_pool_size = 64M   (note: this option was removed in MySQL 5.7)
                       innodb_buffer_pool_size = 1G

7. MyISAM tuning parameter:
                       key_buffer_size is commonly raised to about 10x the default, with a noticeable effect
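Putting notes 5-7 together, a my.ini fragment might look like this (the values are the examples from these notes, not general recommendations):

```ini
[mysqld]
max_connections  = 1000
query_cache_size = 64M          ; query cache (removed in MySQL 8.0)

; InnoDB
innodb_buffer_pool_size = 1G
innodb_additional_mem_pool_size = 64M   ; removed in MySQL 5.7

; MyISAM
key_buffer_size = 80M           ; roughly 10x the old 8M default
```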

8. MySQL incremental backup records each statement (DML/DDL SQL, the operation time, and the log position). Add log-bin=D:/mylog under [mysqld] in my.ini to enable incremental backup (then restart MySQL). MySQL will create an index file on the D: drive (listing which log files exist) plus the first log file, mylog.000001, which stores the operations performed on the database.

9. You can view the contents of an incremental backup file with: mysqlbinlog D:/mylog.000001

10. Restore with mysqlbinlog --start-datetime="2017-02-28 22:37:20" d:/mylog.000001 | mysql -uroot -p (all operations after that point in time are replayed).
--stop-position="123" replays everything from the beginning up to position 123.
--start-datetime="" --stop-datetime="" replays the interval between the two times.
After catastrophic data damage, restore the most recent full backup first, then replay the incremental backups from that point in time.
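A sketch of that recovery sequence on Windows (the database name mydb and the file full_backup.sql are made up; the log path and timestamp are the ones from these notes):

```shell
REM 1. Restore the latest full backup first
mysql -uroot -p mydb < full_backup.sql

REM 2. Replay the binary log from the time of the full backup
REM    up to just before the damaging statement
mysqlbinlog --start-datetime="2017-02-28 22:37:20" --stop-datetime="2017-03-01 10:00:00" D:/mylog.000001 | mysql -uroot -p
```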

11. RESET MASTER; can be used to clear all binary logs. You can also configure expire_logs_days = 7 under [mysqld] so that logs are purged automatically every 7 days.

12. A composite (compound) index is an index on two or more columns.

13. Table design should satisfy the three normal forms (3NF) as much as possible (atomicity, uniqueness, no redundancy); the exact degree of normalization should be decided by the actual business situation.

14. SHOW STATUS; lists the database status variables. show [global] status like 'name-from-the-status-list'; shows one specific status (e.g. show status like 'uptime' shows how long MySQL has been running).
By default the scope is the current session; adding global makes it apply to the whole server.

15. show status like 'slow_queries'; shows the number of slow queries (by default, queries taking more than 10 seconds). show variables like 'long_query_time'; shows the slow-query threshold (default 10 seconds), which can be changed with
set [global] long_query_time=1;. By default MySQL does not log slow queries; you can start the server from cmd with mysqld --safe-mode --slow-query-log to turn logging on, and use mysqld --log-slow-queries=d:/slow.log to set the log path (by default it is stored under datadir).
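In recent MySQL versions the same settings can be changed at runtime with the current system variables:

```sql
SET GLOBAL slow_query_log = ON;
SET GLOBAL slow_query_log_file = 'd:/slow.log';
SET GLOBAL long_query_time = 1;        -- log anything slower than 1 second
SHOW VARIABLES LIKE 'slow_query%';     -- verify the settings
```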

16. Use EXPLAIN to analyze a query statement when optimizing slow queries.
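A minimal example (table and column names are invented here):

```sql
-- In the output, the key column shows which index (if any) is used,
-- and the rows column shows how many rows MySQL expects to examine.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```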

17. Full-text search: select * from table where match(field, field...) against(words); you can also run select match(field, field...) against(words) from table; to show the relevance score. Tips: the full-text index ignores words that are too common (stopwords) or too short, and MySQL's built-in full-text index did not support Chinese word segmentation until the ngram parser was added in 5.7.
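A sketch with an invented articles table (FULLTEXT requires MyISAM, or InnoDB from MySQL 5.6 on):

```sql
CREATE TABLE articles (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body  TEXT,
    FULLTEXT KEY ft_title_body (title, body)
);

-- Matching rows:
SELECT * FROM articles WHERE MATCH(title, body) AGAINST('database');

-- Relevance score for every row (0 for non-matches):
SELECT id, MATCH(title, body) AGAINST('database') AS score FROM articles;
```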

18. A unique index still allows multiple NULL values (NULLs do not count as duplicates of each other).
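A quick demonstration (table name is made up):

```sql
CREATE TABLE u_demo (email VARCHAR(100) UNIQUE);
INSERT INTO u_demo VALUES ('a@b.com');
INSERT INTO u_demo VALUES (NULL);
INSERT INTO u_demo VALUES (NULL);        -- OK: NULL is not a duplicate
-- INSERT INTO u_demo VALUES ('a@b.com');  -- would fail with a duplicate-key error
```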

19. Index considerations: a. fields that are updated frequently should not be indexed; b. fields with few distinct values (low selectivity) should not be indexed; c. every index slows down DML (insert/update/delete).

20. Cases where an index is NOT used:
The LIKE condition starts with a wildcard (like '%condition' or like '_condition').
A composite index is queried without its leftmost column (add index myIndex(column1, column2) -> select * from table where column2='xxx' ignores the index).
With OR in the condition, each side must be indexed on its own for any index to be used (continuing the example above: where column1='aaa' or column2='bbb' uses no index, because column2 by itself is not indexed).
A string column is compared without quotes (for a varchar column, where column=123 will not use the index; the correct form is where column='123').
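The cases in note 20 side by side, on a hypothetical table (names invented here):

```sql
CREATE TABLE t (
    column1 VARCHAR(20),
    column2 VARCHAR(20),
    name    VARCHAR(50),
    KEY myIndex  (column1, column2),
    KEY idx_name (name)
);

SELECT * FROM t WHERE name LIKE '%abc';     -- leading wildcard: no index
SELECT * FROM t WHERE name LIKE 'abc%';     -- trailing wildcard: index usable
SELECT * FROM t WHERE column2 = 'aaa';      -- skips leftmost column of myIndex: no index
SELECT * FROM t WHERE column1 = 'aaa'
                  OR name    = 'bbb';       -- both sides indexed: index usable
SELECT * FROM t WHERE column1 = 123;        -- implicit string-to-number cast: no index
SELECT * FROM t WHERE column1 = '123';      -- quoted value: index usable
```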

21. The GROUP BY statement sorts its result by default (in MySQL 5.7 and earlier); append ORDER BY NULL to prevent the sort. (MySQL 8.0 no longer sorts GROUP BY results implicitly.)
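For example (table and column names invented here):

```sql
-- Skip the implicit sort when only the aggregates matter (MySQL 5.7 and earlier):
SELECT customer_id, COUNT(*) AS cnt
FROM orders
GROUP BY customer_id
ORDER BY NULL;
```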

22. MyISAM vs. InnoDB:
    batch insert speed:     high / low
    transaction support:    no / yes
    full-text index:        yes / no (InnoDB gained full-text support in MySQL 5.6)
    locking granularity:    table / row
    B-tree index support:   yes / yes
    hash index:             no / yes (InnoDB's adaptive hash index, internal only)
    clustered index:        no / yes
    data cache:             no / yes
    data compression:       yes / no
    disk space usage:       low / high
    memory usage:           low / high
    foreign key support:    no / yes

23. If the engine is MyISAM, defragment regularly (MyISAM leaves many holes even after data is deleted). Use the optimize table xxx; command to defragment table xxx.

24. mysqldump -u root -proot database [table1, table2] > file_path; backs up a database. source file_path; (run inside the mysql client) restores from the backup file.

25. You can put the backup command into a .bat file and add a scheduled task for automatic backups. Note: in the .bat file, mysqldump must be given by its full path (environment variables are not usable there). If the path contains spaces, wrap it in double quotes (c:\mt work\mysql\bin\mysqldump -u root... -> "c:\mt work\mysql\bin\mysqldump" -u root...).

26. For bulk insertion of large amounts of data, use a stored procedure: it executes silently without returning per-statement results and is faster than ordinary statements. Before inserting, turn off MySQL's autocommit and commit manually after the last insert statement.
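A sketch of such a bulk-insert procedure (procedure, table, and column names are made up; autocommit is handled as the note describes):

```sql
DELIMITER //
CREATE PROCEDURE bulk_insert(IN n INT)
BEGIN
    DECLARE i INT DEFAULT 0;
    SET autocommit = 0;                    -- turn off auto-commit first
    WHILE i < n DO
        INSERT INTO t_demo (val) VALUES (i);
        SET i = i + 1;
    END WHILE;
    COMMIT;                                -- one manual commit at the end
END //
DELIMITER ;

CALL bulk_insert(100000);                  -- inserts 100000 rows in one transaction
```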
