Database auto-increment primary keys

1 Should every table have an auto-incrementing primary key?

Not necessarily.
An auto-increment primary key speeds up row insertion, makes efficient use of table space, and keeps fragmentation low.

However, for some access patterns, such as queries on uid that are very frequent and highly concentrated, replacing the plain auto-increment primary key with a composite primary key of uid+id improves query efficiency, at the cost of slower inserts and more fragmentation. If the database sits on SSD storage, though, that cost largely disappears.
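To make the trade-off concrete, here is a minimal sketch of both schema options as MySQL DDL issued over JDBC; the table and column names (msg_auto, msg_composite, uid, body) are hypothetical, not from the original discussion.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "pass");
             Statement st = conn.createStatement()) {

            // Option 1: plain auto-increment primary key. Inserts are fast,
            // space use is tight, and fragmentation stays low, but rows for
            // the same uid are scattered across the clustered index.
            st.execute("CREATE TABLE msg_auto ("
                    + " id BIGINT AUTO_INCREMENT PRIMARY KEY,"
                    + " uid BIGINT NOT NULL,"
                    + " body TEXT,"
                    + " KEY idx_uid (uid))");

            // Option 2: composite primary key (uid, id), with id generated
            // by the application (e.g. one of the schemes in question 4).
            // Rows for one uid are clustered together, so frequent,
            // concentrated queries by uid touch fewer pages; inserts are
            // slower and fragment more (much less of a concern on SSD).
            st.execute("CREATE TABLE msg_composite ("
                    + " uid BIGINT NOT NULL,"
                    + " id BIGINT NOT NULL,"
                    + " body TEXT,"
                    + " PRIMARY KEY (uid, id))");
        }
    }
}
```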

Therefore, in most cases, giving a table an auto-increment primary key is the right choice.

2 Is an auto-increment primary key unique from the business's point of view?

It depends.

With a single-table structure, yes.

With multiple tables (for example, a sharded table), not necessarily; it takes a deliberate strategy, such as giving each table a different offset while sharing the same increment step, as sketched below.
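A minimal sketch of that offset-plus-step idea (my own illustration, with made-up shard counts): with N shards, shard k issues IDs k, k+N, k+2N, ..., so the sequences never overlap. MySQL exposes the same scheme through its auto_increment_offset and auto_increment_increment variables.

```java
import java.util.concurrent.atomic.AtomicLong;

/** Per-shard ID generator: different offset per shard, same step for all. */
public class ShardedIdGenerator {
    private final long offset;  // this shard's offset, 1..step
    private final long step;    // total number of shards
    private final AtomicLong counter = new AtomicLong(0);

    public ShardedIdGenerator(long offset, long step) {
        this.offset = offset;
        this.step = step;
    }

    /** IDs from different shards never overlap: offset + n * step. */
    public long nextId() {
        return offset + counter.getAndIncrement() * step;
    }

    public static void main(String[] args) {
        ShardedIdGenerator shard1 = new ShardedIdGenerator(1, 3);
        ShardedIdGenerator shard2 = new ShardedIdGenerator(2, 3);
        System.out.println(shard1.nextId()); // 1, then 4, 7, ...
        System.out.println(shard2.nextId()); // 2, then 5, 8, ...
    }
}
```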

3 Should auto-increment primary keys be used in business logic?

This is not recommended.

For example, a single table can have an auto-increment primary key that is unique within that table, and querying or updating by id keeps operations simple. In general, though, when an identifier is tied to the business and must be unique, the business itself should maintain it, for example by generating keys from a defined format, an algorithm, or a hash.
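As one hedged illustration of a business-maintained key, the sketch below combines a format prefix, a timestamp, and a short hash of business fields; the ORD- format and the choice of fields are my own assumptions, not prescribed by the article.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BusinessKey {
    /** Build a business key from a format prefix, a timestamp, and a short hash. */
    public static String orderNo(long userId, long timestampMillis) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((userId + ":" + timestampMillis)
                .getBytes(StandardCharsets.UTF_8));
        // Keep the first 4 bytes of the hash as 8 hex characters. A truncated
        // hash does not guarantee uniqueness on its own; a unique constraint
        // (or a retry scheme like the ones in question 4) is still needed.
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            hex.append(String.format("%02x", digest[i] & 0xff));
        }
        return "ORD-" + timestampMillis + "-" + hex;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(orderNo(42L, System.currentTimeMillis()));
        // e.g. ORD-1700000000000-9f2ab31c
    }
}
```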

4 How can a business-maintained primary key stay unique across multiple tables?

Interval segments: maintain segments of the key space, with each server claiming one segment at a time via an optimistic-lock update. This requires an extra table (or some equivalent strategy) to hold the allocation field.
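A minimal sketch of segment claiming, assuming a hypothetical MySQL table id_segment(biz_key, max_id, step): the server tries to move max_id forward with a compare-and-set style UPDATE, and rereads and retries if another server won the race.

```java
import java.sql.*;

public class SegmentAllocator {
    /** Claim the next segment [lo, hi) for bizKey using optimistic locking. */
    public static long[] claimSegment(Connection conn, String bizKey) throws SQLException {
        while (true) {
            long maxId, step;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT max_id, step FROM id_segment WHERE biz_key = ?")) {
                ps.setString(1, bizKey);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) throw new IllegalStateException("unknown biz_key");
                    maxId = rs.getLong(1);
                    step = rs.getLong(2);
                }
            }
            // Optimistic lock: the UPDATE only succeeds if max_id is unchanged.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE id_segment SET max_id = max_id + step "
                    + "WHERE biz_key = ? AND max_id = ?")) {
                ps.setString(1, bizKey);
                ps.setLong(2, maxId);
                if (ps.executeUpdate() == 1) {
                    return new long[]{maxId, maxId + step}; // claimed [lo, hi)
                }
                // Another server updated first; reread and retry.
            }
        }
    }
}
```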

Algorithm A: a fixed time prefix, e.g. yyyyMMddHHmmss + (table number mod some value) + a random number, where the collision probability is lowered by adding more digits. The column carries a unique constraint (though sometimes this constraint cannot be relied on); if the insert throws a duplicate-value exception, a new ID is generated and the insert is retried.
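A sketch of algorithm A as described above; the concrete widths (2 digits for the table number, 6 random digits) are my assumptions.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.ThreadLocalRandom;

public class AlgorithmA {
    private static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss");

    /** yyyyMMddHHmmss + (tableNo mod 100, 2 digits) + 6 random digits. */
    public static String nextId(int tableNo) {
        String time = LocalDateTime.now().format(TS);
        String mid = String.format("%02d", tableNo % 100);
        String rand = String.format("%06d",
                ThreadLocalRandom.current().nextInt(1_000_000));
        return time + mid + rand;
    }

    // On insert: if the unique constraint rejects the value
    // (duplicate-value exception), call nextId() again and retry.
}
```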

Algorithm B: a fixed time prefix, e.g. yyyyMMddHHmmss + a collision counter N + a random number. There is no need to add digits to lower the collision probability: when an insert throws a duplicate-value exception, increment N and retry until the conflict disappears. From then on N is fixed as the infix and cached in the server, and the same infix keeps being used after a restart. If duplicate-value exceptions appear again, N++ and the same procedure repeats. The mod value applied to N need not be called out explicitly.
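A sketch of algorithm B under the same caveats (the 4-digit random suffix is an assumption); in a real server N would also be persisted so the infix survives restarts.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class AlgorithmB {
    private static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss");
    // Collision counter N; bumped on every duplicate-value collision, then
    // fixed as the infix. A real server would persist/cache this value.
    private final AtomicInteger n = new AtomicInteger(0);

    /** yyyyMMddHHmmss + infix N + 4 random digits. */
    public String nextId() {
        return LocalDateTime.now().format(TS)
                + n.get()
                + String.format("%04d", ThreadLocalRandom.current().nextInt(10_000));
    }

    /** Call when an insert throws a duplicate-value exception, then retry. */
    public void onDuplicateKey() {
        n.incrementAndGet();
    }
}
```

A typical insert loop would call nextId(), attempt the insert, and on a duplicate-value exception call onDuplicateKey() and retry until the insert succeeds.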

Infix management: each server reports its infix to a central server. You can think of it as the server-to-id mapping being cached somewhere, with infixes allocated dynamically.

There are many other methods, but I have not used them, so I won't go into detail.
Comparing the two: algorithm B is simple, needs little communication, and has a bounded number of collisions, whereas algorithm A can collide indefinitely in principle, albeit with a very, very low probability. Under high concurrency, however, algorithm B's initialization is far more collision-heavy than algorithm A's.

Both interval segments and infix management introduce a central node, which creates a strong dependency but is relatively reliable; this is the more common implementation approach in the industry.

Address of this article:  https://www.linuxprobe.com/database-question.html

