To properly test MySQL performance and program optimizations, we need to generate a large volume of test data. The approach here is to use the UUID() function directly so that every generated row gets distinct content.
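For context, UUID() returns a 36-character string that differs on every call, which is what makes each generated row unique:

SELECT UUID();
-- Sample output: '6ccd780c-baba-1026-9564-5b8c656024db' (values differ on every call)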
1. First, create a test table (card)
CREATE DATABASE IF NOT EXISTS `test` DEFAULT CHARSET utf8 COLLATE utf8_general_ci;

DROP TABLE IF EXISTS `card`;
CREATE TABLE `card` (
    `card_id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'ID',
    `card_number` varchar(100) DEFAULT NULL COMMENT 'card number',
    PRIMARY KEY (`card_id`)
) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC;
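MyISAM with DELAY_KEY_WRITE=1 presumably keeps the bulk insert cheap, since MyISAM carries no transaction overhead and key writes are buffered. To confirm the table was created with these options:

USE test;
SHOW CREATE TABLE card;   -- prints the full table definition, including engine options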
2. Create a MySQL stored procedure
DROP PROCEDURE IF EXISTS proc1;
DELIMITER $$
SET AUTOCOMMIT = 0$$
CREATE PROCEDURE proc1()
BEGIN
    DECLARE v_cnt DECIMAL(10) DEFAULT 0;
    dd: LOOP
        INSERT INTO card (card_number) VALUES (UUID());
        COMMIT;
        SET v_cnt = v_cnt + 1;
        IF v_cnt = 10000000 THEN
            LEAVE dd;   -- stop after 10 million rows
        END IF;
    END LOOP dd;
END$$
DELIMITER ;
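As a side note, inserting one row at a time is the simplest approach but not the fastest. A hypothetical set-based alternative (not the method used above) doubles the table on each pass; this sketch assumes the card table from step 1 starts empty:

INSERT INTO card (card_number) VALUES (UUID());          -- seed row
-- Each run of the next statement doubles the row count, and UUID() still
-- gives every new row a distinct card_number; 23 passes yield 2^23
-- (about 8.4 million) rows.
INSERT INTO card (card_number) SELECT UUID() FROM card;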
3. Call the stored procedure to generate the corresponding test data
CALL proc1();
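Once the call returns, a quick count verifies that all rows were inserted:

SELECT COUNT(*) FROM card;
-- Expected: 10000000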
On my machine, the 10 million rows were generated in about 2 minutes and 13 seconds; timings will vary with your hardware.
4. Let's test the performance.
SELECT * FROM card ORDER BY RAND() LIMIT 1;          -- query completes in about 6.5 seconds
SELECT * FROM card WHERE card_number LIKE '%xxx%';   -- query completes in about 3.7 seconds
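EXPLAIN shows why the fuzzy query is expensive: a leading-wildcard LIKE cannot use any B-tree index, so MySQL scans the whole table:

EXPLAIN SELECT * FROM card WHERE card_number LIKE '%xxx%';
-- type: ALL with rows near 10000000 indicates a full table scan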
With data at this scale, a fuzzy query is inevitably slow, since no index can serve it. The usual recommendation is to hand such searches to a dedicated full-text search engine such as Sphinx, which brings query times down dramatically.
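If you prefer to stay inside MySQL, one built-in alternative (a sketch, not a Sphinx setup) is a FULLTEXT index, which MyISAM tables support; the index name ft_card_number below is arbitrary:

ALTER TABLE card ADD FULLTEXT INDEX ft_card_number (card_number);
SELECT * FROM card WHERE MATCH(card_number) AGAINST ('xxx');
-- matches whole words, not arbitrary substrings, so it is not a
-- drop-in replacement for LIKE '%xxx%'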
You can refer to this article:
Better MySQL Search with Sphinx