MySQL Day5.1


6. Transactions

6.1. What is a transaction?

Either all operations in the transaction succeed, or they all fail.

Transaction principles: the ACID principles (atomicity, consistency, isolation, durability)

 Reference blog: The four characteristics of mysql transactions and the four isolation levels of transactions - little python in the java world - Blog Park

1. Atomicity

Atomicity means that all operations contained in a transaction either all succeed or are all rolled back on failure. If the transaction succeeds, its operations must be fully applied to the database; if it fails, it must leave no trace in the database at all.

Either they all succeed or they all fail.


2. Consistency

Consistency means that a transaction must take the database from one consistent state to another; in other words, the database must be in a consistent state both before and after the transaction executes. For example, suppose users A and B together hold a total of 1,000. No matter how, or how many times, A and B transfer money between themselves, the sum of their balances after the transaction ends should still be 1,000. That is transaction consistency.

The data integrity before and after the transaction must be consistent.


3. Isolation

Isolation means that when multiple users access the database concurrently, for example operating on the same table at the same time, the transaction the database opens for each user must not be interfered with by the operations of other transactions; concurrent transactions must be isolated from one another. For transaction isolation, the database provides several isolation levels, which are introduced later.

A transaction is not disturbed by other transactions.


4. Durability

Durability means that once a transaction is committed, its changes to the data in the database are permanent; the committed changes are not lost even if the database system later fails. For example, when we use JDBC to operate the database, once the commit call returns and the user is told the transaction completed, we can conclude that the transaction was committed correctly. Even if the database runs into a problem at that moment, our committed work must still be carried through. Otherwise we would end up with the serious error that we see a prompt saying the transaction completed while the database, because of a failure, never actually applied it. That is not allowed.

Once a transaction is committed, it is irreversible and is persisted to the database.

Some problems caused by isolation

Dirty Reads
Dirty read: transaction A reads data that transaction B has modified but not yet committed and acts on it; if transaction B then rolls back, the data A read is dirty data.

Refers to one transaction reading uncommitted data from another transaction.
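A minimal two-session sketch of a dirty read, assuming the `account` table from the transfer scenario later in this section already exists (the session labels are only for illustration):

-- Session B starts a transaction and changes a row, but does not commit yet
SET autocommit = 0;
START TRANSACTION;
UPDATE `account` SET `money` = `money` - 500 WHERE `name` = '张三';

-- Session A, running at the lowest isolation level, can already see B's uncommitted change
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT `money` FROM `account` WHERE `name` = '张三';  -- reads the uncommitted (dirty) value

-- Session B rolls back, so the value A just read never officially existed
ROLLBACK;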

Non-repeatable Reads: 
A transaction reads the same row of data twice but obtains different results: after transaction T1 reads certain data, transaction T2 modifies it, so when T1 reads the data again it gets a value different from the previous read.

Within one transaction, reading the same row of a table multiple times returns different results.

Phantom read
refers to executing the same SELECT statement twice and getting different results: the second read returns an extra data row (strictly speaking, the definition does not even require the two reads to be in the same transaction). In general a phantom read may be exactly what we want, but sometimes it is not: if you open a cursor, you may not want newly inserted records to appear in the data set the cursor covers; the cursor-stability isolation level prevents phantom reads. For example: there are currently 10 employees with a salary of 1,000. Transaction 1 reads all employees with a salary of 1,000 and gets 10 records; transaction 2 then inserts an employee record with a salary of 1,000; when transaction 1 reads all employees with a salary of 1,000 again, it gets 11 records.

It means that within one transaction you read rows inserted by another transaction, so reads before and after are inconsistent.
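MySQL copes with these three problems through its transaction isolation levels. A short sketch of how to inspect and change the level for the current session; note that the system variable is @@transaction_isolation in MySQL 8.0 (@@tx_isolation in 5.7 and earlier), and the default level is REPEATABLE READ:

SELECT @@transaction_isolation;  -- show the current isolation level (MySQL 8.0)

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- allows dirty reads
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- prevents dirty reads
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- also prevents non-repeatable reads (default)
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;      -- also prevents phantom reads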

Executing transactions

-- ============= Transactions ============
-- MySQL enables autocommit by default
SET autocommit = 0  -- turn off autocommit
SET autocommit = 1  -- turn on autocommit (the default)


-- Handling a transaction manually
SET autocommit = 0  -- turn off autocommit

-- Start the transaction
START TRANSACTION  -- marks the start of a transaction; all SQL from here on belongs to the same transaction

INSERT xxx
INSERT xxx

-- Commit: persist the changes (success!)
COMMIT
-- Rollback: return to the previous state (failure!)
ROLLBACK

-- End of the transaction
SET autocommit = 1  -- turn autocommit back on

-- The following is for reference only (a usage sketch follows this block)
SAVEPOINT savepoint_name            -- set a savepoint inside the transaction
ROLLBACK TO savepoint_name          -- roll back to the savepoint
RELEASE SAVEPOINT savepoint_name    -- remove the savepoint
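A small usage sketch for the savepoint commands above, assuming the `account` table from the simulation scenario below (the savepoint name after_debit is made up for illustration):

SET autocommit = 0;
START TRANSACTION;

UPDATE `account` SET `money` = `money` - 500 WHERE `name` = '张三';
SAVEPOINT after_debit;              -- mark a point inside the transaction

UPDATE `account` SET `money` = `money` + 500 WHERE `name` = '李四';
ROLLBACK TO after_debit;            -- undo only the work done after the savepoint

RELEASE SAVEPOINT after_debit;      -- the savepoint is no longer needed
COMMIT;                             -- the first UPDATE is still committed
SET autocommit = 1;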

Simulation scenario

-- Transfer
CREATE DATABASE shop CHARACTER SET utf8 COLLATE utf8_general_ci
USE shop

CREATE TABLE `account`(
	`id` INT(4) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(30) NOT NULL,
	`money` DECIMAL(9,2) NOT NULL,
	PRIMARY KEY(`id`)
)ENGINE=INNODB DEFAULT CHARSET=utf8
INSERT INTO `account`(`name`,`money`) VALUES('张三','2000.00'),('李四','10000.00')

-- Simulate a transfer: transaction
SET autocommit = 0 -- turn off autocommit
START TRANSACTION  -- start a transaction

UPDATE `account` SET `money`=`money`-500 WHERE `name`='张三'  -- 张三 (Zhang San) loses 500
UPDATE `account` SET `money`=`money`+500 WHERE `name`='李四'  -- 李四 (Li Si) gains 500

COMMIT   -- success -> commit; once a transaction is committed, it is persisted
ROLLBACK -- failure -> roll back

SET autocommit = 1 -- restore autocommit
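A quick sanity check for consistency: whether the transfer was committed or rolled back, the two balances should still add up to 12,000.00.

SELECT SUM(`money`) AS total FROM `account`;  -- expected result: 12000.00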

7. Index

MySQL's official definition of an index: an index is a data structure that helps MySQL retrieve data efficiently.

Extracting the backbone of that sentence gives the essence of an index: an index is a data structure.

7.1. Classification of indexes

In a table, there can be only one primary key index and multiple unique indexes.

  • Primary key index (PRIMARY KEY)

    A unique identifier; primary key values cannot repeat, and a table can have only one primary key.

  • Unique index (UNIQUE KEY)

    Avoids duplicate values in a column; a table may have several unique indexes, i.e. multiple columns can each be declared as unique indexes.

  • Regular index (KEY/INDEX)

    The default kind, set with INDEX or KEY.

  • Full-text index (FULLTEXT)

    Available only on specific storage engines; used to locate data quickly. (A combined example of all four types follows this list.)
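A combined sketch showing all four index types declared at table-creation time; the table `t_index_demo` and its columns are made up for illustration, and FULLTEXT requires MyISAM or InnoDB (MySQL 5.6 and later):

CREATE TABLE `t_index_demo` (
  `id`    INT NOT NULL AUTO_INCREMENT,
  `uid`   VARCHAR(32) NOT NULL,
  `name`  VARCHAR(30),
  `brief` TEXT,
  PRIMARY KEY (`id`),                 -- primary key index: only one per table
  UNIQUE KEY `uk_uid` (`uid`),        -- unique index: values in `uid` may not repeat
  KEY `idx_name` (`name`),            -- regular index
  FULLTEXT KEY `ft_brief` (`brief`)   -- full-text index
) ENGINE=INNODB DEFAULT CHARSET=utf8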

Basic syntax

-- Using indexes
-- 1. Add an index to a field when creating the table
-- 2. Add an index after the table has been created

-- Show all index information
SHOW INDEX FROM `account`  -- show all index information for the `account` table

-- Add a full-text index: (index name) column name
ALTER TABLE school.student ADD FULLTEXT INDEX `studentName`(`studentName`)

-- EXPLAIN analyzes how a SQL statement is executed
EXPLAIN SELECT * FROM student -- not using a full-text index

EXPLAIN SELECT * FROM student WHERE MATCH(studentName) AGAINST('刘')

7.2. Testing an index

CREATE DATABASE `school` -- create a database

CREATE TABLE `app_user` ( -- create a table
  `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(50) DEFAULT '' COMMENT 'user nickname',
  `email` VARCHAR(50) NOT NULL COMMENT 'user email',
  `phone` VARCHAR(20) DEFAULT '' COMMENT 'phone number',
  `gender` TINYINT(4) UNSIGNED DEFAULT '0' COMMENT 'gender (0: male; 1: female)',
  `password` VARCHAR(100) NOT NULL COMMENT 'password',
  `age` TINYINT(4) DEFAULT '0' COMMENT 'age',
  `create_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
  `update_time` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=INNODB DEFAULT CHARSET=utf8mb4 COMMENT='app user table'



-- Insert 1,000,000 rows of test data
DROP FUNCTION IF EXISTS mock_data;
-- The delimiter must be changed before defining the function, so the ';' inside the body does not end the statement early
DELIMITER $$
CREATE FUNCTION mock_data()
RETURNS INT DETERMINISTIC
BEGIN
  DECLARE num INT DEFAULT 1000000;
  DECLARE i INT DEFAULT 0;
  WHILE i < num DO
    INSERT INTO app_user(`name`, `email`, `phone`, `gender`, `password`, `age`)
    VALUES(CONCAT('用户', i), '[email protected]', CONCAT('18', FLOOR(RAND()*(999999999-100000000)+100000000)), FLOOR(RAND()*2), UUID(), FLOOR(RAND()*100));
    SET i = i + 1;
  END WHILE;
  RETURN i;
END $$
-- Restore the default delimiter
DELIMITER ;

SELECT mock_data();


select * from app_user where `name`='用户9999'; -- 2.486 sec

-- index naming convention: id_tableName_columnName
-- CREATE INDEX index_name ON table(column)

CREATE INDEX id_app_user_name ON app_user(`name`);

SELECT * FROM app_user WHERE `name`='用户9999'; -- 0.003 sec

When the amount of data is small the difference is hardly noticeable, but with a large data set the difference is very obvious.

Indexes speed up queries.
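To confirm that the query really uses the new index rather than a full table scan, EXPLAIN can be run before and after creating it; the column values in the comments below are typical, not guaranteed, and vary by MySQL version and data:

EXPLAIN SELECT * FROM app_user WHERE `name` = '用户9999';
-- without the index: type = ALL, key = NULL, rows close to 1,000,000 (full table scan)
-- with id_app_user_name: type = ref, key = id_app_user_name, rows = a handful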

7.3. Indexing principles

  • More indexes are not always better
  • Tables with a small amount of data do not need indexes
  • Do not add indexes to data that changes frequently
  • Indexes are generally added to fields that are commonly used in queries


Origin: blog.csdn.net/m0_52991090/article/details/121191918