How to process 430 million records per day in SQL Server (database big data processing)

First of all, I am just a programmer, not a professional DBA. This article describes the process of solving a problem rather than handing you a correct answer from the start. If anything in it is wrong, I hope the database experts will correct me so that I can handle this kind of work better.

Background of the project

This is a project for a certain data center, and the difficulty of the project was horrific. It really made me feel that the business world is a battlefield and I am just a soldier in it: too many tactics, too many contests between the higher-ups, too much going on behind the scenes. I will write a separate blog post about the details of this project when I have time.

This project requires environmental monitoring. For now, let's call the monitored devices acquisition devices, and the attributes of the acquisition devices monitoring indicators. Project requirements: the system must support no fewer than 100,000 monitoring indicators, each monitoring indicator must be updated at least every 20 seconds, and the storage delay must be no more than 120 seconds. A simple calculation gives the ideal figures: 100,000 indicators × 3 updates per minute = 300,000 records per minute, 18 million per hour, that is, about 432 million per day. In reality the volume is roughly 5% larger than that. (In fact most of it is junk data that could be handled with data compression, but when others just want to make trouble for you, what can you do?)

Those are the targets the project required. I suppose many students with plenty of big-data experience will sneer at them: is that all? Well, I had read a lot about big data processing too, but I had never dealt with it myself. Judging from how well-informed others sound, with their distributed systems and read-write separation, it seems easy to solve. But the problem is not that simple. As I said above, this was a very bad project, a typical case of vicious competition in the industry.

  1. There were no additional servers. Besides the database and the centralized collector (the program that does data parsing, alarming and storage), this one server also had to support a 300,000-point northbound SNMP interface. Before the program was optimized, CPU usage stayed above 80%. Because the project required dual-machine hot standby, to save effort and avoid unnecessary trouble we put the related services together so that we could make full use of the HA features (an externally purchased HA system).
  2. The requirements on data correctness were extremely demanding: from the bottom acquisition system all the way to the top monitoring system, not a single piece of data may be wrong.
    Our system architecture is as follows; you can see that the database is under heavy pressure, especially at the LevelA node:
    [Figure: system architecture diagram]
  3. The hardware configuration is as follows:
    CPU: Intel® Xeon® processor E5-2609 (4 cores, 2.40GHz, 10MB, 6.4 GT/s)
    Memory: 4GB (2x2GB) DDR3 RDIMM, 1333MHz, ECC
    HDD: 500GB 7200 RPM 3.5" SATA3 hard drive, RAID 5.
  4. The database version
    uses SQL Server 2012 Standard Edition, the genuine software provided by HP, which lacks many of the powerful features of the Enterprise Edition.

write bottleneck

The first stumbling block we hit: we found that with the existing program, SQL Server simply could not handle this volume of data. What was the actual situation?

Our storage structure

Generally, to store a large amount of historical data we physically split the tables; otherwise, with millions of records a day, there would be hundreds of millions within a year. So our original table structure looked like this:

CREATE TABLE [dbo].[His20140822](
	[No] [bigint] IDENTITY(1,1) NOT NULL,
	[Dtime] [datetime] NOT NULL,
	[MgrObjId] [varchar](36) NOT NULL,
	[Id] [varchar](50) NOT NULL,
	[Value] [varchar](50) NOT NULL,
 CONSTRAINT [PK_His20140822] PRIMARY KEY CLUSTERED 
(
	[No] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

No serves as the unique identifier; the other columns are the acquisition device Id MgrObjId (a GUID string), the monitoring indicator Id (varchar(50)), the record time Dtime and the recorded value Value. The acquisition device Id and the monitoring indicator Id are indexed for fast lookups.
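
The post does not show how those two indexes were defined. A minimal sketch, assuming two plain single-column nonclustered indexes (the index names here are made up), might be:

CREATE NONCLUSTERED INDEX IX_His20140822_MgrObjId ON dbo.His20140822(MgrObjId)
CREATE NONCLUSTERED INDEX IX_His20140822_Id ON dbo.His20140822(Id)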

batch write

At the time we used SqlBulkCopy for writing. Yes, that one, the one that claims to write millions of records in seconds.

    public static int BatchInert(string connectionString, string desTable, DataTable dt, int batchSize = 500)
    {
        // Bulk-copy the whole DataTable into the destination table in batches,
        // letting SqlBulkCopy manage its own internal transaction.
        using (var sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.UseInternalTransaction)
        {
            BulkCopyTimeout = 300,
            NotifyAfter = dt.Rows.Count,
            BatchSize = batchSize,
            DestinationTableName = desTable
        })
        {
            // Map source columns to destination columns by name.
            foreach (DataColumn column in dt.Columns)
                sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);
            sbc.WriteToServer(dt);
        }

        return dt.Rows.Count;
    }

What's the problem?

The above architecture was fine at 40 million records a day. But when the configuration was scaled up to the requirements described in the background above, the centralized monitoring program ran out of memory: incoming data piled up in memory faster than it could be written to the database, production outpaced consumption, the memory overflowed, and the program stopped working.

Where is the bottleneck?

Was it the RAID array? The data structure? The hardware? The SQL Server version? The lack of a partitioned table? Or the program itself?

We only had one week. If we couldn't sort it out within a week, the project supervisors would kick us out. So there were the heroics of working 48 hours straight, and the scramble of calling everyone we knew for help...

But what was needed at that moment was calm, and more calm... The SQL Server version? The hardware? Neither could realistically be replaced. The RAID array? Probably not it. Then what on earth was it? It was damned hard to stay calm.

You probably can't imagine the tension on site. Even now, after all this time, I find it hard to put myself back in that situation. But I can say this: maybe we have all kinds of methods now, or as outsiders we can think things through more calmly, but when a project is pressing you to the point of giving up, your thinking at that moment, constrained by the on-site environment, can go badly astray. It may make you think faster, or it may make your mind freeze. Under that kind of pressure some colleagues made even more low-level mistakes; their thinking was in complete disarray and their efficiency dropped further... 36 hours without closing our eyes, or just napping two or three hours on the construction site (mud everywhere on rainy days, dust everywhere once it dried) and then carrying on, for a whole week like that! And it might have to continue!

Many people offered many ideas, which seemed useful and yet somehow not. Wait, why "seemed useful and yet somehow not"? Vaguely, I seemed to have grasped a hint of direction. What was it? Right: verification. We were now running under the production load; the fact that there had been no problem before did not mean there was no problem under the current pressure. Analyzing such a small function inside a large system involves too many interferences, so we should decompose it. Yes, "unit testing", testing individual methods. We needed to verify each function, each independent step: where exactly was the time going?

Testing step by step to pinpoint the bottleneck

Adjusting the BulkCopy parameters
The first thing I thought of was tweaking the various BulkCopy parameters, BulkCopyTimeout and BatchSize, testing and adjusting over and over. The results just fluctuated within a certain range and had no real effect. Maybe it moved some CPU counters, but it fell far short of my expectations: the write speed still hovered around 10,000-20,000 rows per 5 seconds, nowhere near the requirement of writing 200,000 records within 20 seconds.

Storing by acquisition device
Yes, the structure above stores one record per indicator per value. Isn't that a lot of waste? So would one record per acquisition device per acquisition time work? The problem is how to handle the fact that different acquisition devices have different attributes. At this point a colleague showed his talent: the monitoring indicators and their values could be stored as XML. Wow, you can do that? And for querying, the FOR XML form can be used.

So we ended up with this structure: No, MgrObjId, Dtime, XMLData
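
A minimal sketch of what that table might have looked like; only the column names come from the original, while the table name and column types here are assumptions:

CREATE TABLE [dbo].[HisXml20140822](
	[No] [bigint] IDENTITY(1,1) NOT NULL,
	[MgrObjId] [varchar](36) NOT NULL,   -- acquisition device Id
	[Dtime] [datetime] NOT NULL,         -- acquisition time
	[XMLData] [xml] NOT NULL,            -- all indicator Id/Value pairs of this device at this time
 CONSTRAINT [PK_HisXml20140822] PRIMARY KEY CLUSTERED ([No] ASC)
) ON [PRIMARY]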

Testing showed this was slightly better than before, but not by much.

Table partitioning???
I hadn't learned this skill at the time. I read some articles online and it looked quite complicated; with little time left, I didn't dare try it.

Stopping the other programs
I knew this wasn't a real option, because the software and hardware architecture could not be changed for the time being. But I wanted to verify whether these factors were the cause. It turned out the improvement was indeed noticeable, but it still did not meet the requirement.

Could it be a SQL Server bottleneck?
Out of ideas. Could this really be SQL Server's bottleneck? I searched online: it might be an IO bottleneck. Damn it, what then? Upgrade the server? Switch databases? But would the project owner pay for that?

Wait, there was one more thing: indexes. Right, indexes! The existence of an index affects inserts and updates.

Removing the indexes

Yes, queries would certainly be slower without indexes, but I first had to verify whether removing them would speed up writes. So I decisively dropped the indexes on the MgrObjId and Id columns.
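
A sketch of what that looked like, reusing the assumed index names from the table definition sketch earlier:

DROP INDEX IX_His20140822_MgrObjId ON dbo.His20140822
DROP INDEX IX_His20140822_Id ON dbo.His20140822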

We ran it, and a miracle happened: each batch of 100,000 records could be written in 7-9 seconds, which met the system requirement.

What about queries?

A table holding more than 400 million records per day is impossible to query without an index. What to do!? I thought of our old trick again: physical table splitting. Originally we split by day; now we would split by hour. That gives 24 tables, each holding only about 18 million records.

Then query the history of one attribute over one or several hours. The result: slow! Slow!! Slow!!! Querying over ten million records without an index is simply unthinkable. What else could we do?

Keep splitting. It occurred to me that we could also split further by the underlying collector, since acquisition devices differ between collectors, and when we query a historical curve we only ever query the curve of a single indicator, so the data could be spread across different tables.

No sooner said than done. By splitting across the 10 embedded collectors and the 24 hours, generating 240 tables per day (history table names look like His_001_2014112615), we finally solved the problem of writing more than 400 million records a day while supporting simple queries!!!
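
For illustration, a table such as His_001_2014112615 (collector 001, 2014-11-26, hour 15) could be created on a schedule roughly like this. Only the naming pattern comes from the text; the scheduled job and the dynamic SQL are assumptions:

-- Sketch: create the history table for one collector for the current hour.
DECLARE @collector char(3) = '001'
DECLARE @now datetime = GETDATE()
DECLARE @table sysname = 'His_' + @collector + '_'
    + CONVERT(char(8), @now, 112)                                -- yyyyMMdd
    + RIGHT('0' + CAST(DATEPART(HOUR, @now) AS varchar(2)), 2)   -- HH
DECLARE @sql nvarchar(max) = N'
CREATE TABLE dbo.' + QUOTENAME(@table) + N'(
	[No] [bigint] IDENTITY(1,1) NOT NULL,
	[Dtime] [datetime] NOT NULL,
	[MgrObjId] [varchar](36) NOT NULL,
	[Id] [varchar](50) NOT NULL,
	[Value] [varchar](50) NOT NULL,
 CONSTRAINT ' + QUOTENAME('PK_' + @table) + N' PRIMARY KEY CLUSTERED ([No] ASC)
) ON [PRIMARY]'
EXEC sys.sp_executesql @sql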

Query optimization

Once the above problem was solved, half of the project's difficulties were gone, and the project supervisors were too embarrassed to come and find fault; I don't know what kind of tactical arrangement was behind that.

A long time passed, and now it was getting close to the end of the year. The problem came back: the point was to drag you down so that your other projects could not pass acceptance before year end.

This time the demand was: the scenario above was simulated with 100,000 monitoring indicators, but now that the system was actually live there were only about 50,000 devices. That clearly does not meet the tender requirements, so the system could not pass acceptance. So what to do? These clever people figured: since the monitoring indicators were halved, let's halve the time as well and the requirement is met. In other words, with the current 50,000 devices, the data must be stored within 10 seconds. Good grief. By that logic, if we had only 500 monitoring indicators, would we have to store the data within 0.1 seconds? Don't you give any thought to how the monitored devices feel?

But when someone wants to mess with you, what can you do? Take the hit. After the time was cut to 10 seconds, the problem appeared. If you analyze the logic above carefully, you can see the tables are split by collector; now there were fewer collectors, but each one carried more points. What happened? Writes could still keep up, but each table's record count approached 4 million, and for acquisition devices with many monitoring indicators it approached 6 million. How do you deal with that?

So the technical staff held a meeting to discuss what to do.

How do you optimize queries without adding an index?

A colleague suggested that the order of the conditions in the WHERE clause affects the query, because each step works on the result of the previous filter: you can filter out part of the data first and then apply the next condition to what is left. It sounds plausible, but wouldn't the SQL Server query optimizer handle this automatically? Forgive me, I'm a novice here; it was only a feeling, and my feeling was that, like the VS compiler, it should optimize this automatically.

Either way, the facts should do the talking:

After the colleague modified the client, testing reported a considerable improvement. I looked at the code:
[Screenshot: the modified query code]

Could it really make that big a difference? Wait, did we forget to clear the cache and create an illusion?
So I asked the colleague to run the following statements to get more information:

-- before the change
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS

SET STATISTICS IO ON
select Dtime,Value from dbo.his20140825 WHERE  Dtime>='' AND Dtime<='' AND MgrObjId='' AND Id=''
SET STATISTICS IO OFF

-- after the change
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS

SET STATISTICS IO ON
select Dtime,Value from dbo.his20140825 WHERE MgrObjId='' AND Id='' AND Dtime>='' AND Dtime<=''
SET STATISTICS IO OFF

The results were as follows:
[Screenshot: STATISTICS IO output before and after the change]

The "before" version was actually better?

Looking closely at the IO statistics, I found that the read-ahead reads were identical, meaning the records being queried were the same, and the physical reads and table scans were identical too. The logical reads differed slightly, which should come down to cache hits. In other words, without an index, the order of the conditions in the WHERE clause has no obvious effect on query performance.

So the only option left was an index.

Attempts at building an index

Building an index is not a trivial matter; it requires some basic knowledge. In this process I took quite a few detours before finally getting the index right.

The following experiments were run against this total record count:
[Screenshot: total record count of the test table]

Indexing a single column
This idea was mainly influenced by the data structure I had built in memory: Dictionary<MgrObjId, Dictionary<Id, Property>>. I assumed that if I created an index on MgrObjId first and then an index on Id, SQL Server queries would be faster.
[Screenshot: estimated execution plan with the single-column index]

Building the index on MgrObjId first took 5 minutes 25 seconds, and the index came out at 550 MB. The result, just as the estimated plan above showed: it did not help at all, and was actually slower.
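
The statement itself is not shown in the original; it would have been something along these lines (the index name is an assumption):

CREATE NONCLUSTERED INDEX Idx_His20141008_MgrObjId ON dbo.his20141008(MgrObjId)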

Indexing on multiple columns
OK, since that didn't work, what about an index on several columns? CREATE NONCLUSTERED INDEX Idx_His20141008 ON dbo.his20141008(MgrObjId,Id,Dtime)

The result: query speed did indeed double:
[Screenshot: query timing with the composite index]

Wait, is this all an index buys you? 7 minutes 25 seconds and 1.1 GB of space, just for this? Something had to be wrong, so I started digging through materials and reading some related books, and eventually made considerable progress.

Building the index correctly

First, we need to understand a few key points about indexes:

  • After indexing, ordering the conditions by the indexed column with the fewest duplicates gives the best result. For our table, if a clustered index is built on No, putting No first in the WHERE clause is best, then Id, then MgrObjId, and the time last; if the table only covers one hour, it is best not to use a time index at all.
  • The order of the WHERE clause determines whether the query analyzer uses the index. For example, with an index built on MgrObjId and Id, WHERE MgrObjId='' AND Id='' AND Dtime='' will use an index seek, while WHERE Dtime='' AND MgrObjId='' AND Id='' will not necessarily use one.
  • Put the non-indexed result columns into the included columns. Since our conditions are MgrObjId, Id and Dtime, the result only needs to contain Dtime and Value, so put Dtime and Value in the included columns. The index entries returned then already carry these values, there is no need to look up the physical table, and we get the best possible speed.

Following these principles, we created the following index: CREATE NONCLUSTERED INDEX Idx_His20141008 ON dbo.his20141008(MgrObjId,Id) INCLUDE(Value,Dtime)

It took just over 6 minutes, and the index size was 903 MB.

Let's look at the estimated plan:
[Screenshot: estimated execution plan with the covering index]

You can see that the index is used entirely, with no extra cost. And the actual execution took less than one second: the result was filtered out of 11 million records in under a second!! Awesome!!

How do we apply the index?

Now that writing is sorted and reading is sorted, how do we combine them? We can index the data from before the current hour, while the current hour's data stays unindexed. In other words, do not create the index at the time the table is created!!
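
A minimal sketch of that idea, run from a scheduled job shortly after the hour rolls over. The job, the collector number and the dynamic SQL are assumptions; the index definition matches the covering index built above:

-- Sketch: index the previous hour's table once it is no longer being written to.
DECLARE @prevHour datetime = DATEADD(HOUR, -1, GETDATE())
DECLARE @table sysname = 'His_001_'
    + CONVERT(char(8), @prevHour, 112)
    + RIGHT('0' + CAST(DATEPART(HOUR, @prevHour) AS varchar(2)), 2)
DECLARE @sql nvarchar(max) = N'
CREATE NONCLUSTERED INDEX ' + QUOTENAME('Idx_' + @table) + N'
ON dbo.' + QUOTENAME(@table) + N'(MgrObjId, Id) INCLUDE(Value, Dtime)'
EXEC sys.sp_executesql @sql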

What else can be optimized?

We can try read-write separation, writing to two databases: a real-time database and a read-only database. Data from within the last hour is queried from the real-time database; data older than an hour is queried from the read-only database. The read-only database is loaded on a schedule and then indexed; data older than a week is analyzed and processed before being stored. That way, whatever time range is queried, it can be handled correctly: within one hour, the real-time database; from one hour to one week, the read-only database; older than one week, the report database.

If physical table splitting is not needed, it is enough to rebuild the indexes in the read-only database on a schedule.
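
For a single table in the read-only database, that periodic rebuild can be as simple as the following (the table name is reused from the earlier examples; scheduling it, for instance through a SQL Agent job, is left out here):

ALTER INDEX ALL ON dbo.his20141008 REBUILD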

Summary

To handle hundreds of millions of rows of (historical) data in SQL Server, you can proceed along these lines:

  • Remove all indexes from the table
  • Insert with SqlBulkCopy
  • Split tables or use partitioning to reduce the amount of data in each table
  • Build the index only after a table has been completely written
  • Specify the index columns correctly
  • Put the columns you need to return into the included columns (the returned index entries then contain everything)
  • Return only the columns you actually need when querying

Original address: https://blog.csdn.net/wybshyy/article/details/52064289
