SQL batch insert data

      Since finishing last week's task I haven't been assigned anything new, so I'm restless (secretly pleased).
      Today I plan to line up two blog posts: this is the first, and the other is an indexing case study built on top of it.

One: Preliminary preparation
1. We use the database from the previous post, along with the CopySQLTableData and HierarchyDepartment tables:
2. We need to insert rows into the CopySQLTableData table in batches; its assumed structure is sketched below.
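For reference, a minimal sketch of what I assume CopySQLTableData looks like, based only on the columns used in the INSERT statements further down; the Id column, the data types, and the constraints are my guesses, not the original definition:

--Hypothetical structure for CopySQLTableData, inferred from the INSERT statements below
create table CopySQLTableData
(
	Id int identity(1,1) primary key,	--assumed surrogate key
	[Name] nvarchar(50),
	Sex bit,				--0/1, matching the 0 inserted below
	Age int,
	CreationTime datetime,
	Remark nvarchar(200)
);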

Two: Practice

1. First, a test run adding 9 new rows. Judging by execution speed it felt like about 3 seconds passed, which works out to roughly 3 rows per second (take that ratio loosely). I also switched on two options here (the actual execution plan and the client statistics); we'll look at them in detail later.
Our lovely little army of rows is ready to attack, and it's forever 18 years old, just like us (hence the Age value).


--Batch data
--Insert rows in a loop
declare @i int		--declare a variable
set @i=1		--assign the variable its initial value
while @i<10		--loop the insert
begin
	insert into CopySQLTableData ([Name],Sex,Age,CreationTime,Remark)
	values ('长江'+convert(varchar,@i)+'号',0,18,GETDATE(),'SQL批量新增-第一轮');
	set @i=@i+1
end

GO
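To double-check the test batch, a quick query like this one (my own verification, not shown in the screenshots) filters on the Remark value written by the loop:

--Verify the 9 test rows written by the first round
select [Name], Sex, Age, CreationTime, Remark
from CopySQLTableData
where Remark = 'SQL批量新增-第一轮';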

2. First, look at the execution plan. Here the 9 INSERT statements split the total cost evenly, which is easy to understand.
3. We mainly look at the I/O and CPU overhead.
4. Next, look at the Client Statistics. The figures that are easy to relate to our script are these few; the rest don't tell us much, and some I don't even understand...: 18 is the 9 INSERTs plus the 9 assignments to @i; 9 is the number of rows affected by the INSERTs; I guess the 19 below is that 18 plus the initialization of @i; and the final 3 should be the three-way handshake (please correct me if I'm wrong! [straight-faced.jpg]).
Also, the "Trial 1" column header marks the data from the first execution; if the code is run again, a "Trial 2" column appears and the two runs are compared. We'll see that below.
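For what it's worth, here is my reading of those counters laid out as arithmetic (my own interpretation, not the official definitions of the client statistics):

18 = 9 INSERT statements + 9 executions of set @i=@i+1
 9 = rows affected by the INSERTs
19 = 18 + the initial set @i=1
 3 = the last counter (my handshake guess above; it may simply be the number of server round trips)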
5. Enough small talk, let's take a look at the results.

Three: After the test, the formal mass loop insert
This time we add 1,000,000 rows.
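The script itself only appears in the screenshots, so here is my reconstruction: the same loop as the test, with the upper bound raised to 1,000,000 (the second-round Remark string is a placeholder of mine, not taken from the post):

--Mass insert: the test loop with the upper bound raised to 1,000,000 (reconstruction)
declare @i int
set @i=1
while @i<=1000000	--one million rows this time
begin
	insert into CopySQLTableData ([Name],Sex,Age,CreationTime,Remark)
	values ('长江'+convert(varchar,@i)+'号',0,18,GETDATE(),'SQL批量新增-第二轮');	--placeholder Remark
	set @i=@i+1
end

GO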
1. After 42 minutes of execution it still wasn't done and I couldn't wait any longer... I got tired of watching it. While it was running, CPU usage sat at around 16-17% (i7-9750H, 6 cores and 12 threads); after stopping it, only 0.1-0.3%... A data volume like this really shouldn't be thrown at the server so casually. (While it was running, switching over to the database froze for about ten seconds.)
Let's see how many rows actually made it in this time; a quick count like the one sketched below does the job.
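Something along these lines; the Remark filter assumes the placeholder value from the reconstruction above:

--Count the rows written by the interrupted mass insert
select COUNT(*) as InsertedRows
from CopySQLTableData
where Remark = 'SQL批量新增-第二轮';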

…Only about 1.5k??? You must be kidding. (That averages out to about 2 rows per second... [crying.gif])

2. Looking for the problem. Look at the client statistics of what was executed: the blue figures are from running the COUNT query, and the red ones are from the rows we inserted. I still can't quite believe it. The I/O efficiency is far too low, so I'll have to line up a more efficient approach.
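One common suspect for this kind of slowdown (my speculation here, not necessarily what the follow-up post will use): every INSERT in the loop commits as its own implicit transaction, which forces a log flush per row, so the usual first thing to try is wrapping the whole loop in a single explicit transaction:

--Sketch: the same loop, but committed as one explicit transaction instead of one per row
declare @i int
set @i=1
begin transaction
while @i<=1000000
begin
	insert into CopySQLTableData ([Name],Sex,Age,CreationTime,Remark)
	values ('长江'+convert(varchar,@i)+'号',0,18,GETDATE(),'SQL批量新增-加事务');	--placeholder Remark
	set @i=@i+1
end
commit transaction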
3. Look at the timestamps on the inserted rows. You can see the speed at the start is still OK, around 10 rows inserted per second, but by about 18 seconds in it had already dropped, so much so that later on a whole minute inserted fewer rows than the first 2 seconds did. In theory that shouldn't happen.
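To put that slowdown in numbers, a query like this one (my own, grouping the inserted rows by the second of their CreationTime) would show the rows inserted per second; the Remark filter again assumes the placeholder value from the reconstruction above:

--Rows inserted per second, grouped by CreationTime truncated to the second
select CONVERT(varchar(19), CreationTime, 120) as InsertSecond,
	COUNT(*) as RowsInserted
from CopySQLTableData
where Remark = 'SQL批量新增-第二轮'
group by CONVERT(varchar(19), CreationTime, 120)
order by InsertSecond;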
4. Is the database being restricted by something??
I don't know whether that's the cause; if any experts are passing by, please help. [please] [please]
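For anyone checking the same thing, the two settings I would look at first (an assumption about where to look, not a diagnosis) are the recovery model and the data/log file autogrowth:

--Recovery model of the current database
select name, recovery_model_desc from sys.databases where name = DB_NAME();

--Size and autogrowth settings of the data and log files of the current database
select name, size, growth, is_percent_growth from sys.database_files;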

Four: Finding a new method
To be continued... This problem came out of nowhere and I have to solve it first, and the indexing blog post mentioned above [doge] is also postponed until the millions of rows are actually in before I write it.

Keep at it! ヾ(◍°∇°◍)ノ


This was a failed approach, but a successful blog post; it made me see the gap between theory and practice.
Practice is the sole criterion for testing truth!!!!


Origin blog.csdn.net/qq_44471040/article/details/108706169