Database notes review

Features of a database system:

  • Structured data
  • High data sharing, low redundancy, and easy expansion
  • High data independence
  • Unified data management and control by the database management system (DBMS)
    • Data control features include: data security protection, integrity checking, concurrency control, and database recovery

Data model: the database's internal abstraction of the real world; it describes data, the organization of data, and operations on data. The data model is the core and foundation of a database system.

Two levels of data models in a database: the conceptual model, and the logical/physical models.
The conceptual model is the first abstraction from the real world to the information world.
Basic concepts of the information world: entity, attribute, key, entity type, entity set, relationship.
Representation of the conceptual model: Entity-Relationship notation (the E-R model).
A data model consists of data structures, data operations, and data integrity constraints.
Classification of logical data models: the hierarchical model, the network model, the relational model, and the object-oriented data model.
Integrity constraints of the relational model comprise: entity integrity, referential integrity, and user-defined integrity.
Data operations in the relational model are set operations: both the operands and the results of an operation are relations.

Three-level schema structure of a database: external schema, schema, and internal schema

Schema: describes the logical structure and features of all the data in the database; it is the common data view shared by all users. The schema is relatively stable and changes rarely.
External schema: also called a subschema or user schema, it describes the logical structure of the part of the database a particular user can see; it is the user's view of the data, i.e. the logical representation of the data relevant to one application. An external schema is a subset of the schema, and an application uses exactly one external schema.
Internal schema: describes the physical structure and storage of the data, i.e. how the data is organized inside the database.
Two levels of mappings and the data independence they provide:
External schema/schema mapping: when the schema changes (e.g., a new data type or a new relation is added), the database administrator adjusts the corresponding external schema/schema mapping so that the external schemas remain unchanged. Since applications are written against external schemas, they need not be modified; this guarantees the logical independence of data and programs.
Schema/internal schema mapping: when the storage structure of the data changes, the database administrator adjusts the schema/internal schema mapping so that the schema remains unchanged and applications need not change; this guarantees the physical independence of data and programs.
In short, the external schema/schema mapping ensures logical data independence, and the schema/internal schema mapping ensures physical data independence.
The independence of data and programs means that the definition and description of data can be separated from applications. Data access is managed by the DBMS, which simplifies application development and greatly reduces program maintenance and modification.
Composition of a database system: hardware platform and database, software, and personnel.

Formal definitions of the relational model:
A domain is a set of values of the same data type.
There are three types of relations (tables): base tables, query tables, and views.
Relations vs. the relational schema: the relational schema is static and stable, while a relation is dynamic and changes over time, since the data is continually updated while the database runs.
Basic relational operations: selection, projection, union, difference, and Cartesian product.
Both the operands and the results of these operations are sets.
The relational model has three types of integrity constraints: entity integrity, referential integrity, and user-defined integrity. Entity integrity and referential integrity are constraints that any relational model must satisfy; they are called the two invariants of the relational model.
Entity integrity rule: if attribute A is a prime attribute (part of a candidate key) of relation R, then A cannot take a null value.
Referential integrity: foreign-key constraints between tables must be satisfied to keep the data consistent.
User-defined integrity: constraints specified by the user on query, insert, and update operations; if an operation does not satisfy the constraints, the insert or modification of the table is rejected.

The data languages of a relational database system comprise:
  the schema data definition language (schema DDL)
  the external schema data definition language (external schema DDL)
  the data storage description language (DSDL)
  the data manipulation language (DML)
Basic schema objects: tables, views, indexes, and assertions.
A relational database management system can create multiple databases; a database can contain multiple schemas; and a schema typically contains multiple tables, indexes, views, and other database objects.
Create a schema:
CREATE SCHEMA <schema name> AUTHORIZATION <user name>;
Delete a schema:
DROP SCHEMA <schema name> <CASCADE | RESTRICT>;
Note: MySQL does not seem to support the CASCADE keyword here.
Create a Course table:
CREATE TABLE Course (
    Cno INT PRIMARY KEY,                       -- define the primary key
    Cname CHAR(20) NOT NULL,
    Cpno INT,
    FOREIGN KEY (Cpno) REFERENCES Course (Cno) -- Cpno (prerequisite course) is a foreign key referencing Course itself
);
Foreign key: when the values of a column in one table must come from the primary key of another table, that column is a foreign key.
For example, the course numbers in the enrollment table must be taken from the Course table. This is how referential integrity shows up in the data. Note that a table may define multiple foreign keys but only one primary key.
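A runnable sketch of this foreign-key behavior, using Python's built-in sqlite3 for illustration (the table and column names follow the enrollment example above; SQLite enforces foreign keys only after the PRAGMA is set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only when enabled

conn.execute("CREATE TABLE Course (Cno INTEGER PRIMARY KEY, Cname TEXT NOT NULL)")
conn.execute("""CREATE TABLE SC (
    Sno INTEGER,
    Cno INTEGER,
    Grade INTEGER,
    PRIMARY KEY (Sno, Cno),
    FOREIGN KEY (Cno) REFERENCES Course (Cno)
)""")

conn.execute("INSERT INTO Course VALUES (1, 'Database Systems')")
conn.execute("INSERT INTO SC VALUES (2019001, 1, 90)")       # ok: course 1 exists

try:
    conn.execute("INSERT INTO SC VALUES (2019001, 99, 80)")  # course 99 does not exist
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the DBMS refuses the row that breaks referential integrity
```

Only the row whose Cno actually exists in Course survives; the other insert is rejected by the DBMS rather than silently stored.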
Create an index:
Benefit: when the database holds a large amount of data, queries become slow; an index speeds up queries. Syntax:
CREATE [UNIQUE] [CLUSTER] INDEX <index name> ON <table name> (<column name> [order]);
UNIQUE means a unique index; CLUSTER means a clustered index.
-- create a unique index on Student by student number, ascending
CREATE UNIQUE INDEX stuSno ON Student (Sno);
-- create a unique index on Course by course number, ascending
CREATE UNIQUE INDEX couCno ON Course (Cno);
-- create a unique index on SC by student number ascending and course number descending
CREATE UNIQUE INDEX SCno ON SC (Sno ASC, Cno DESC);
Rename an index:
ALTER INDEX <old index name> RENAME TO <new index name>;
Delete an index:
DROP INDEX <index name>;
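A small sqlite3 sketch of creating and dropping a unique index (SQLite has no CLUSTER keyword or ALTER INDEX ... RENAME, so only the plain and UNIQUE forms are shown; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (Sno INTEGER, Sname TEXT)")
conn.execute("CREATE UNIQUE INDEX stuSno ON Student (Sno ASC)")

conn.execute("INSERT INTO Student VALUES (1, 'Alice')")
try:
    conn.execute("INSERT INTO Student VALUES (1, 'Bob')")  # duplicate Sno
except sqlite3.IntegrityError as e:
    print("unique index rejected:", e)  # the index also enforces uniqueness

conn.execute("DROP INDEX stuSno")  # after this, duplicate Sno values are allowed again
```

Besides speeding up lookups on Sno, the UNIQUE index doubles as a constraint: the duplicate insert fails while the index exists and succeeds once it is dropped.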

Querying data in a table:
Use the SELECT statement; earlier posts cover the related operations, which you can refer to.
Fuzzy matching: LIKE / NOT LIKE 'xxx%'
NOT IN
BETWEEN ... AND ...
Operations on query results:

DISTINCT removes duplicate rows.
Common aggregate functions:
COUNT(*), COUNT(DISTINCT <column name>)
SUM(<column name>)
AVG(<column name>)
MAX(<column name>)
MIN(<column name>)
Ordering and grouping of query results:
ORDER BY
GROUP BY
Equijoins and non-equijoins:
<table name>.<column name> <comparison operator> <table name>.<column name>
The compared columns are the join fields; the join fields in each join condition must be comparable, and preferably have the same name.
In a multi-table join, outer joins may be used: left outer joins and right outer joins.
When the rows of the two tables do not match one-to-one, an outer join keeps all rows of one table; a left outer join shows every row of the left table:
SELECT Student.Sno, Sname, Ssex, Sage, Sdept, Cno, Grade FROM Student LEFT OUTER JOIN SC ON (Student.Sno = SC.Sno);
USING can also be used to eliminate the duplicated join column; the FROM clause above becomes:
FROM Student LEFT OUTER JOIN SC USING (Sno);
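The left outer join above can be run directly in sqlite3; the data below is invented, and a shortened column list is used. The unmatched student still appears, with NULL (Python None) in the SC columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (Sno INTEGER PRIMARY KEY, Sname TEXT)")
conn.execute("CREATE TABLE SC (Sno INTEGER, Cno INTEGER, Grade INTEGER)")
conn.executemany("INSERT INTO Student VALUES (?, ?)",
                 [(1, 'Alice'), (2, 'Bob')])
conn.execute("INSERT INTO SC VALUES (1, 101, 95)")  # only Alice has a grade

rows = conn.execute("""
    SELECT Student.Sno, Sname, Cno, Grade
    FROM Student LEFT OUTER JOIN SC ON (Student.Sno = SC.Sno)
    ORDER BY Student.Sno
""").fetchall()
for r in rows:
    print(r)  # Bob's row survives the join with NULLs for Cno and Grade
```

With an inner join Bob would vanish; the left outer join is what keeps every row of the left table.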

Set-oriented queries:
UNION — set union
INTERSECT — set intersection
EXCEPT — set difference
Insert data into a table:
INSERT
Delete rows from a table:
DELETE FROM ...
Delete a table:
DROP TABLE
Modify data in a table:
UPDATE
View: a table derived from one or more base tables. It is a virtual table: the database stores only the view's definition, not the view's data; the actual data remains in the base tables. So once a base table changes, the view changes with it. A view lets you extract just the data items you are interested in.
Create a view:
CREATE VIEW <view name> [(<column name>, <column name>, ...)] AS <subquery>;
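A minimal sqlite3 sketch of the "virtual table" point: the view stores only its definition, so an update to the base table is immediately visible through the view (view and table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SC (Sno INTEGER, Cno INTEGER, Grade INTEGER)")
conn.execute("CREATE VIEW PassedSC AS SELECT * FROM SC WHERE Grade >= 60")

conn.execute("INSERT INTO SC VALUES (1, 101, 55)")
conn.execute("INSERT INTO SC VALUES (1, 102, 88)")
before = conn.execute("SELECT COUNT(*) FROM PassedSC").fetchone()[0]  # only 88 passes

conn.execute("UPDATE SC SET Grade = 70 WHERE Cno = 101")  # change the base table...
after = conn.execute("SELECT COUNT(*) FROM PassedSC").fetchone()[0]   # ...the view follows

print(before, after)
```

No data was ever written to PassedSC itself; the second count changes purely because the base table did.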

Database security:
Related operations were summarized in earlier posts.
Data integrity refers to the correctness and consistency of data.
A relational DBMS makes integrity control one of its core functions, so it can provide consistent integrity for all users and applications. Integrity control implemented in the application layer is fragile and easily broken; it cannot guarantee the integrity of the database.
There are two ways to create foreign-key constraints:
-- create table test1
CREATE TABLE test1 (Cno INT PRIMARY KEY, Sno INT, Sex VARCHAR(10));
-- way 1: create table test2 with a foreign key referencing test1's primary key Cno
CREATE TABLE test2 (
    Sno INT,
    Cno INT,
    Sex VARCHAR(10),
    Age INT,
    PRIMARY KEY (Cno, Sno),
    FOREIGN KEY (Cno) REFERENCES test1 (Cno)
);
-- way 2: use the CONSTRAINT keyword to name the integrity constraints
CREATE TABLE test2 (
    Sno INT CONSTRAINT PKEY PRIMARY KEY,
    Cno INT CONSTRAINT FKEY REFERENCES test1 (Cno),
    Sex VARCHAR(20) CONSTRAINT C_Sex CHECK (Sex IN ('男', '女')),
    Sname VARCHAR(20) NOT NULL
);
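A runnable sqlite3 sketch of the user-defined CHECK constraint from the example above (the '男'/'女' values mirror it; the Teacher table name is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Teacher (
    Tno INTEGER PRIMARY KEY,
    Sex TEXT CHECK (Sex IN ('男', '女'))  -- user-defined integrity constraint
)""")

conn.execute("INSERT INTO Teacher VALUES (1, '男')")     # satisfies the CHECK
try:
    conn.execute("INSERT INTO Teacher VALUES (2, 'x')")  # violates the CHECK
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the DBMS refuses the row, as user-defined integrity requires
```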

Trigger:
A special kind of stored procedure attached to a table and driven by events; also called an event-condition-action rule. When a specified event (an insert, delete, or update on the table) occurs, the rule's condition is checked; if the condition holds, the action is executed, otherwise nothing happens.
Syntax for creating a trigger:
CREATE TRIGGER <trigger name>
{BEFORE | AFTER} <trigger event> ON <table name>
REFERENCING NEW | OLD ROW AS <variable>
FOR EACH {ROW | STATEMENT}
[WHEN <trigger condition>] <trigger action>
Example:
Create a trigger: whenever the Grade attribute of table SC is updated, if the grade increased by 10% or more, record the operation in another table SC_U, where OldGrade is the grade before the update and NewGrade is the grade after it.
CREATE TRIGGER SC_T AFTER UPDATE OF Grade ON SC  -- AFTER is the trigger timing: the rule below runs after the trigger event occurs
REFERENCING
    OLDROW AS OldTuple,
    NEWROW AS NewTuple
FOR EACH ROW  -- row-level trigger: the rule runs once for each row updated by the Grade update
WHEN (NewTuple.Grade >= 1.1 * OldTuple.Grade)  -- trigger condition: the statement below executes only when this is true
    INSERT INTO SC_U (Sno, Cno, OldGrade, NewGrade)
    VALUES (OldTuple.Sno, OldTuple.Cno, OldTuple.Grade, NewTuple.Grade);
In this example, REFERENCING declares the transition variables. If the trigger event is UPDATE and there is a FOR EACH ROW clause, the variables are OLDROW and NEWROW, referring to the tuple before and after modification respectively; without a FOR EACH ROW clause, the variables are OLDTABLE and NEWTABLE, where OLDTABLE refers to the table's original contents.

Record the number of rows inserted into table Student into table StudentInsertLog on every insert:
CREATE TRIGGER Student_Count
AFTER INSERT ON Student
REFERENCING
    NEW TABLE AS DELTA
FOR EACH STATEMENT
    INSERT INTO StudentInsertLog (Numbers)
    SELECT COUNT(*) FROM DELTA;
FOR EACH STATEMENT means the trigger action executes once after the whole INSERT statement completes; this is a statement-level trigger.
Define a BEFORE row-level trigger enforcing this integrity rule on table Teacher: "a professor's salary must be no less than 4000 yuan; if it is less than 4000, change it to 4000":
CREATE TRIGGER Insert_Or_Update
BEFORE INSERT OR UPDATE ON Teacher
REFERENCING NEW ROW AS newTuple
FOR EACH ROW
BEGIN
    IF (newTuple.Job = 'Professor') AND (newTuple.Sal < 4000)
    THEN newTuple.Sal := 4000;
    END IF;
END;
Delete a trigger:
DROP TRIGGER <trigger name> ON <table name>;
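The SC_T example above uses SQL-standard REFERENCING syntax; SQLite spells the same idea with built-in OLD and NEW row references instead. A runnable sqlite3 sketch with the same tables and condition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SC (Sno INTEGER, Cno INTEGER, Grade INTEGER)")
conn.execute("CREATE TABLE SC_U (Sno INTEGER, Cno INTEGER, OldGrade INTEGER, NewGrade INTEGER)")
conn.execute("""
CREATE TRIGGER SC_T AFTER UPDATE OF Grade ON SC
FOR EACH ROW WHEN NEW.Grade >= 1.1 * OLD.Grade  -- fires only when the grade grew by 10% or more
BEGIN
    INSERT INTO SC_U VALUES (OLD.Sno, OLD.Cno, OLD.Grade, NEW.Grade);
END""")

conn.execute("INSERT INTO SC VALUES (1, 101, 80)")
conn.execute("UPDATE SC SET Grade = 90 WHERE Sno = 1")  # 90 >= 1.1 * 80 -> logged
conn.execute("UPDATE SC SET Grade = 91 WHERE Sno = 1")  # 91 <  1.1 * 90 -> not logged
print(conn.execute("SELECT * FROM SC_U").fetchall())
```

Only the first update satisfies the WHEN condition, so SC_U ends up with exactly one audit row.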

Database normalization:
The basic idea of normalization is to progressively eliminate inappropriate data dependencies, so that each relational schema reaches a certain degree of "separation", i.e. the design principle of "one fact in one place". Normalization is essentially a simplification of the conceptual design.
Normalization steps:
Reference article: https://blog.csdn.net/qq_41681241/article/details/95334431
Database transactions:
A transaction is a user-defined sequence of operations that either all execute or none execute; it is an indivisible unit of work. Three statements usually delimit a transaction:
BEGIN TRANSACTION;  -- begin a transaction
COMMIT;             -- commit the transaction
ROLLBACK;           -- roll back: undo all the transaction's completed operations on the database and return to the state at the start of the transaction
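A sqlite3 sketch of these three statements: a transfer fails halfway, the whole transaction is rolled back, and no partial update survives (the Account table and amounts are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; manage transactions by hand
conn.execute("CREATE TABLE Account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO Account VALUES ('A', 100), ('B', 0)")

try:
    conn.execute("BEGIN TRANSACTION")
    conn.execute("UPDATE Account SET balance = balance - 30 WHERE name = 'A'")
    raise RuntimeError("simulated failure between debit and credit")
    # the matching credit and COMMIT below are never reached:
    # conn.execute("UPDATE Account SET balance = balance + 30 WHERE name = 'B'")
    # conn.execute("COMMIT")
except RuntimeError:
    conn.execute("ROLLBACK")  # the debit is undone together with everything else

print(conn.execute("SELECT name, balance FROM Account ORDER BY name").fetchall())
```

After ROLLBACK both balances are back to their starting values: the debit alone never becomes visible, which is exactly the "all or nothing" atomicity described above.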
ACID properties of transactions:
Atomicity, Consistency, Isolation, Durability.
Types of failures: internal transaction failures, system failures, media failures, and computer viruses.
Database recovery techniques: build redundant data, then rebuild the database from that redundant data. The most common methods: logging (log files) and data dumps.
Static dump: no transactions run in the system while the dump operation executes.
Dynamic dump: the dump and transactions can execute concurrently.
To guarantee recoverability, writing the log file must follow two rules:
  log records are written strictly in the order the concurrent events occur;
  the log record must be written before the database is updated (write-ahead logging).
Concurrency control:
The transaction is the basic unit of concurrency control. The main concurrency-control techniques: locking, timestamps, optimistic concurrency control, and multi-version concurrency control.
Exclusive lock (X lock), also called a write lock: when transaction T places an X lock on data object A, only T may read and modify A; no other transaction can place any kind of lock on A until T releases its lock.
Shared lock (S lock), also called a read lock: when transaction T places an S lock on data object A, T can read A but cannot modify it; other transactions can place only S locks on A, not X locks, until T releases its S lock.
Locking protocols:
First-level locking protocol: a transaction T must place an X lock on data before modifying it, and does not release the lock until the transaction ends (whether it ends normally or abnormally).
Under the first-level protocol, a transaction that only reads data without modifying it need not lock at all, so the protocol cannot guarantee repeatable reads or prevent reading dirty data.
Second-level locking protocol: on top of the first-level protocol, transaction T must place an S lock on data R before reading it, and may release the S lock as soon as the read finishes. Since the S lock is released right after reading, repeatable reads are still not guaranteed.
Third-level locking protocol: on top of the first-level protocol, transaction T must place an S lock on data R before reading it and hold that lock until the transaction ends. The third-level protocol prevents lost modifications and additionally prevents dirty reads and non-repeatable reads.

Livelock:
Multiple transactions request locks on the same data item.
Transaction T1 holds a lock on data R; transaction T2 requests R and waits; T3 also requests R. When T1 releases the lock, T3 happens to get it while T2 keeps waiting; when T3 releases the lock, T4 gets it... In this scenario T2 may wait forever. This is livelock.
The way to avoid livelock is first-come, first-served: lock requests on a data item are queued, and transactions acquire the lock in the order they requested it.
Deadlock:
T1 locks data R1 and T2 locks data R2. Then T1 requests a lock on R2; since T2 holds it, T1 waits for T2 to release the lock on R2. Then T2 requests a lock on R1; since T1 holds it, T2 waits for T1 to release the lock on R1. Now T1 waits for T2 and T2 waits for T1, and neither transaction can ever finish: this is a deadlock.
Deadlock prevention:
  one-time (all-at-once) locking
  sequential (ordered) locking
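A Python threading sketch of the sequential (ordered) locking idea: if every transaction acquires its locks in one agreed global order, the circular wait that defines deadlock cannot form. The two locks stand in for data items R1 and R2; the transaction bodies are invented:

```python
import threading

lock_R1, lock_R2 = threading.Lock(), threading.Lock()
log = []

def transaction(name):
    # Both transactions lock R1 before R2 -- the agreed global order.
    # If one locked R1->R2 and the other R2->R1, they could deadlock
    # exactly as in the T1/T2 scenario above.
    with lock_R1:
        with lock_R2:
            log.append(name)

t1 = threading.Thread(target=transaction, args=("T1",))
t2 = threading.Thread(target=transaction, args=("T2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # both transactions finish; no deadlock
```

The cost of this scheme, as in databases, is that every transaction must know and follow the global lock order in advance.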

Deadlock diagnosis and resolution: the timeout method and the wait-for graph method.
Serializable schedules:
A concurrent execution of multiple transactions is correct if and only if its result is the same as the result of executing those transactions serially in some order; such a scheduling policy is called a serializable schedule.
Serializability is the criterion for the correctness of concurrent transaction schedules.
Conflicting operations are read-write and write-write operations by different transactions on the same data item.
Two-phase locking protocol:
Before reading or writing any data item, a transaction must first acquire a lock on it.
After releasing any lock, the transaction may no longer acquire any other lock.
The first phase, acquiring locks, is called the growing (expanding) phase.
The second phase, releasing locks, is called the shrinking phase.
Multiple-granularity locking: locking a node means locking that node and all of its descendant nodes.
Intention locks: an intention lock on a node indicates that some of its descendant nodes are being locked.
The extended lock modes include IS locks, IX locks, and SIX locks.

Origin: blog.csdn.net/qq_41681241/article/details/94655778