SQL Server JSON: A Performance Cookbook

Table of Contents

Introduction

1. Data Types

2. Storage

3. Compress/Decompress

4. Compression

5. ColumnStore

6. Creating JSON

7. Checking JSON

8. JSON_VALUE

9. OpenJson

10. String Split

11. Lax & Strict

12. Modify

13. Implicit Conversion

14. Indexes

15. Parser Performance

Conclusion

Git


In SQL Server 2017/2019, Microsoft significantly improved the performance of its already fast JSON parser. In this article, we will learn more about it.

Introduction

While Microsoft spent years lurching from one extreme to another, stunning everyone with strange novelties, you gradually got used to it and learned to meet these updates with a certain skepticism. Over time, that feeling only grows stronger in the back of your mind, and you stop expecting anything good.

Sometimes, however, things turn out exactly the opposite. Out of the blue, Microsoft ships a feature that works perfectly and breaks all the established stereotypes. At first you brace for the same old rake to the forehead, but with every passing minute you realize that what you are looking at is a properly working JSON parser in SQL Server — the one you have been missing for all these years.

There is some justification for such a grandiose introduction: for a long time, JSON support was one of the most requested SQL Server features on Microsoft Connect. Years passed, and unexpectedly it was implemented with the release of SQL Server 2016 — with a parser that was already fast out of the box.

1. Data Types

JSON support in SQL Server is available natively in all editions. At the same time, Microsoft did not provide a separate data type for it, as it did for XML. JSON data in SQL Server is stored as plain text: in Unicode (NVARCHAR/NCHAR) or ANSI (VARCHAR/CHAR) format.

DECLARE @JSON_ANSI VARCHAR(MAX) =      '[{"Nąme":"Lenōvo モデ460"}]'
      , @JSON_Unicode NVARCHAR(MAX) = N'[{"Nąme":"Lenōvo モデ460"}]'

SELECT DATALENGTH(@JSON_ANSI), @JSON_ANSI
UNION ALL
SELECT DATALENGTH(@JSON_Unicode), @JSON_Unicode

The main thing to remember is how much space each data type requires (2 bytes per character when storing data as Unicode, or 1 byte for ANSI strings). Also, don't forget to put the N literal before Unicode constants. Otherwise, you can run into a bunch of fun situations:

--- ----------------------------
25  [{"Name":"Lenovo ??460"}]
50  [{"Nąme":"Lenōvo モデ460"}]

It all looks simple, but it isn't. Later we will see that the chosen data type affects not only the size, but also the parsing speed.

In addition, Microsoft strongly recommends against using the deprecated NTEXT/TEXT data types. For those who still use them out of habit, let's run a small investigative experiment:

DROP TABLE IF EXISTS #varchar
DROP TABLE IF EXISTS #nvarchar
DROP TABLE IF EXISTS #ntext
GO

CREATE TABLE #varchar  (x VARCHAR(MAX))
CREATE TABLE #nvarchar (x NVARCHAR(MAX))
CREATE TABLE #ntext    (x NTEXT)
GO

DECLARE @json NVARCHAR(MAX) =
    N'[{"Manufacturer":"Lenovo","Model":"ThinkPad E460","Availability":1}]'

SET STATISTICS IO, TIME ON

INSERT INTO #varchar
SELECT TOP(50000) @json
FROM [master].dbo.spt_values s1
CROSS JOIN [master].dbo.spt_values s2
OPTION(MAXDOP 1)

INSERT INTO #nvarchar
SELECT TOP(50000) @json
FROM [master].dbo.spt_values s1
CROSS JOIN [master].dbo.spt_values s2
OPTION(MAXDOP 1)

INSERT INTO #ntext
SELECT TOP(50000) @json
FROM [master].dbo.spt_values s1
CROSS JOIN [master].dbo.spt_values s2
OPTION(MAXDOP 1)

SET STATISTICS IO, TIME OFF

The insert performance in the last case differs noticeably:

varchar:  CPU time = 32 ms,  elapsed time = 28 ms
nvarchar: CPU time = 31 ms,  elapsed time = 30 ms
ntext:    CPU time = 172 ms, elapsed time = 190 ms

Also, keep in mind that NTEXT/TEXT is always stored on LOB pages:

SELECT obj_name = OBJECT_NAME(p.[object_id])
     , a.[type_desc]
     , a.total_pages
     , total_mb = a.total_pages * 8 / 1024.
FROM sys.allocation_units a
JOIN sys.partitions p ON p.[partition_id] = a.container_id
WHERE p.[object_id] IN (
        OBJECT_ID('#nvarchar'),
        OBJECT_ID('#ntext'),
        OBJECT_ID('#varchar')
    )

obj_name      type_desc      total_pages  total_mb
------------- -------------- ------------ -----------
varchar       IN_ROW_DATA    516          4.031250
varchar       LOB_DATA       0            0.000000
nvarchar      IN_ROW_DATA    932          7.281250
nvarchar      LOB_DATA       0            0.000000
ntext         IN_ROW_DATA    188          1.468750
ntext         LOB_DATA       1668         13.031250

For reference: starting with SQL Server 2005, the rules for storing data of variable-length types changed. In general, if the size exceeds 8,060 bytes, the data is placed on LOB pages; otherwise it is stored IN_ROW. It is clear that in this case SQL Server optimizes how the data is laid out on pages.
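
That rule is easy to see in practice. A short sketch (the temp table and values here are arbitrary): a small document stays IN_ROW, while anything over the threshold spills to LOB pages.

```sql
-- Sketch: a short NVARCHAR(MAX) value fits in-row,
-- a value over 8,060 bytes is pushed to LOB pages.
DROP TABLE IF EXISTS #overflow
CREATE TABLE #overflow (x NVARCHAR(MAX))

INSERT INTO #overflow VALUES (N'[{"small":1}]')                              -- in-row
INSERT INTO #overflow VALUES (REPLICATE(CAST(N'x' AS NVARCHAR(MAX)), 10000)) -- 20,000 bytes

SELECT a.[type_desc], a.total_pages
FROM sys.allocation_units a
JOIN sys.partitions p ON p.[partition_id] = a.container_id
WHERE p.[object_id] = OBJECT_ID('tempdb..#overflow')
```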

And the final reason not to use NTEXT/TEXT: all the JSON functions are unfriendly toward the deprecated data types:

SELECT TOP(1) 1
FROM #ntext
WHERE ISJSON(x) = 1

Msg 8116, Level 16, State 1, Line 63
Argument data type ntext is invalid for argument 1 of isjson function.

2. Storage

Now let's see what advantages storing JSON as NVARCHAR/VARCHAR offers compared to similar data presented as XML. In addition, we'll try storing XML both in its native format and as a string:

DECLARE @XML_Unicode NVARCHAR(MAX) = N'
<Manufacturer Name="Lenovo">
  <Product Name="ThinkPad E460">
    <Model Name="20ETS03100">
      <CPU>i7-6500U</CPU>
      <Memory>16</Memory>
      <SSD>256</SSD>
    </Model>
    <Model Name="20ETS02W00">
      <CPU>i5-6200U</CPU>
      <Memory>8</Memory>
      <HDD>1000</HDD>
    </Model>
    <Model Name="20ETS02V00">
      <CPU>i5-6200U</CPU>
      <Memory>4</Memory>
      <HDD>500</HDD>
    </Model>
  </Product>
</Manufacturer>'

DECLARE @JSON_Unicode NVARCHAR(MAX) = N'
[
  {
    "Manufacturer": {
      "Name": "Lenovo",
      "Product": {
        "Name": "ThinkPad E460",
        "Model": [
          {
            "Name": "20ETS03100",
            "CPU": "Intel Core i7-6500U",
            "Memory": 16,
            "SSD": "256"
          },
          {
            "Name": "20ETS02W00",
            "CPU": "Intel Core i5-6200U",
            "Memory": 8,
            "HDD": "1000"
          },
          {
            "Name": "20ETS02V00",
            "CPU": "Intel Core i5-6200U",
            "Memory": 4,
            "HDD": "500"
          }
        ]
      }
    }
  }
]'

DECLARE @XML_Unicode_D NVARCHAR(MAX) = N'<Manufacturer Name="Lenovo">
        <Product Name="ThinkPad E460"><Model Name="20ETS03100"><CPU>i7-6500U</CPU>
        <Memory>16</Memory><SSD>256</SSD></Model><Model Name="20ETS02W00">
        <CPU>i5-6200U</CPU><Memory>8</Memory><HDD>1000</HDD></Model>
        <Model Name="20ETS02V00"><CPU>i5-6200U</CPU><Memory>4</Memory>
        <HDD>500</HDD></Model></Product></Manufacturer>'
      , @JSON_Unicode_D NVARCHAR(MAX) = N'[{"Manufacturer":{"Name":"Lenovo","Product":
        {"Name":"ThinkPad E460","Model":[{"Name":"20ETS03100","CPU":"Intel Core i7-6500U",
        "Memory":16,"SSD":"256"},{"Name":"20ETS02W00","CPU":"Intel Core i5-6200U",
        "Memory":8,"HDD":"1000"},{"Name":"20ETS02V00","CPU":"Intel Core i5-6200U",
        "Memory":4,"HDD":"500"}]}}}]'

DECLARE @XML XML = @XML_Unicode
      , @XML_ANSI VARCHAR(MAX) = @XML_Unicode
      , @XML_D XML = @XML_Unicode_D
      , @XML_ANSI_D VARCHAR(MAX) = @XML_Unicode_D
      , @JSON_ANSI VARCHAR(MAX) = @JSON_Unicode
      , @JSON_ANSI_D VARCHAR(MAX) = @JSON_Unicode_D

SELECT *
FROM (
    VALUES ('XML Unicode', DATALENGTH(@XML_Unicode), DATALENGTH(@XML_Unicode_D))
         , ('XML ANSI', DATALENGTH(@XML_ANSI), DATALENGTH(@XML_ANSI_D))
         , ('XML', DATALENGTH(@XML), DATALENGTH(@XML_D))
         , ('JSON Unicode', DATALENGTH(@JSON_Unicode), DATALENGTH(@JSON_Unicode_D))
         , ('JSON ANSI', DATALENGTH(@JSON_ANSI), DATALENGTH(@JSON_ANSI_D))
) t(DataType, Delimiters, NoDelimiters)

After execution, we get the following results:

DataType     Delimiters  NoDelimiters
------------ ----------- --------------
XML Unicode  914         674
XML ANSI     457         337
XML          398         398
JSON Unicode 1274        604
JSON ANSI    637         302

It may look like native XML is the best option. This is partly true, but there are nuances. XML is always stored as Unicode. Moreover, since SQL Server uses a binary format for this data, everything is compressed into a kind of standardized dictionary with pointers. That is why formatting inside XML has no effect on the final data size.

Strings are a different matter, so I would not recommend storing formatted JSON. The best option is to strip out all the extra characters when saving and to format the data on demand on the client side.

If you want to reduce the size of your JSON data further, there are several options to choose from.

3. Compress/Decompress

SQL Server 2016 introduced the new COMPRESS/DECOMPRESS functions, which add support for GZIP compression:

SELECT *
FROM (
    VALUES ('XML Unicode', DATALENGTH(COMPRESS(@XML_Unicode)),
                           DATALENGTH(COMPRESS(@XML_Unicode_D)))
         , ('XML ANSI', DATALENGTH(COMPRESS(@XML_ANSI)),
                        DATALENGTH(COMPRESS(@XML_ANSI_D)))
         , ('JSON Unicode', DATALENGTH(COMPRESS(@JSON_Unicode)),
                            DATALENGTH(COMPRESS(@JSON_Unicode_D)))
         , ('JSON ANSI', DATALENGTH(COMPRESS(@JSON_ANSI)),
                         DATALENGTH(COMPRESS(@JSON_ANSI_D)))
) t(DataType, CompressDelimiters, CompressNoDelimiters)

The results for the previous example:

DataType     CompressDelimiters   CompressNoDelimiters
------------ -------------------- --------------------
XML Unicode  244                  223
XML ANSI     198                  180
JSON Unicode 272                  224
JSON ANSI    221                  183

Everything compresses nicely, but there is one peculiarity to keep in mind. Suppose the data initially arrived in ANSI format, and later the variable's type was changed to Unicode:

DECLARE @t TABLE (val VARBINARY(MAX))

INSERT INTO @t
VALUES (COMPRESS('[{"Name":"ThinkPad E460"}]')) -- VARCHAR(8000)
     , (COMPRESS(N'[{"Name":"ThinkPad E460"}]')) -- NVARCHAR(4000)

SELECT val
     , DECOMPRESS(val)
     , CAST(DECOMPRESS(val) AS NVARCHAR(MAX))
     , CAST(DECOMPRESS(val) AS VARCHAR(MAX))
FROM @t

The COMPRESS function returns different binary sequences for ANSI and Unicode, so on subsequent reads we face a situation where some data is stored in ANSI format and some in Unicode. It is then very hard to guess which type to cast to:

---------------------------- -------------------------------------------------------
筛丢浡≥∺桔湩偫摡䔠㘴∰嵽      [{"Name":"ThinkPad E460"}]
[{"Name":"ThinkPad E460"}]   [ { " N a m e " : " T h i n k P a d   E 4 6 0 " } ]
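
One possible workaround (just a sketch, not the only approach) is to store a flag next to the compressed blob so that the reader knows which type to cast the DECOMPRESS result back to:

```sql
-- Sketch: remember whether the source string was Unicode,
-- and pick the cast for DECOMPRESS accordingly.
DECLARE @t TABLE (is_unicode BIT, val VARBINARY(MAX))

INSERT INTO @t
VALUES (0, COMPRESS('[{"Name":"ThinkPad E460"}]'))
     , (1, COMPRESS(N'[{"Name":"ThinkPad E460"}]'))

SELECT json_text =
    IIF(is_unicode = 1,
        CAST(DECOMPRESS(val) AS NVARCHAR(MAX)),
        CAST(CAST(DECOMPRESS(val) AS VARCHAR(MAX)) AS NVARCHAR(MAX)))
FROM @t
```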

And if we are building a high-load system, using the COMPRESS function slows inserts down:

USE tempdb
GO

DROP TABLE IF EXISTS #Compress
DROP TABLE IF EXISTS #NoCompress
GO

CREATE TABLE #NoCompress (DatabaseLogID INT PRIMARY KEY, JSON_Val NVARCHAR(MAX))
CREATE TABLE #Compress   (DatabaseLogID INT PRIMARY KEY, JSON_CompressVal VARBINARY(MAX))
GO

SET STATISTICS IO, TIME ON

INSERT INTO #NoCompress
SELECT DatabaseLogID
     , JSON_Val = (
            SELECT PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL]
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
        )
FROM AdventureWorks2014.dbo.DatabaseLog
OPTION(MAXDOP 1)

INSERT INTO #Compress
SELECT DatabaseLogID
     , JSON_CompressVal = COMPRESS((
            SELECT PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL]
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
         ))
FROM AdventureWorks2014.dbo.DatabaseLog
OPTION(MAXDOP 1)

SET STATISTICS IO, TIME OFF

And the slowdown is very significant:

NoCompress: CPU time = 15 ms,  elapsed time = 25 ms
Compress:   CPU time = 218 ms, elapsed time = 280 ms

On the other hand, it does reduce the table size:

SELECT obj_name = OBJECT_NAME(p.[object_id])
     , a.[type_desc]
     , a.total_pages
     , total_mb = a.total_pages * 8 / 1024.
FROM sys.partitions p
JOIN sys.allocation_units a ON p.[partition_id] = a.container_id
WHERE p.[object_id] IN (
        OBJECT_ID('#Compress'),
        OBJECT_ID('#NoCompress')
    )

obj_name       type_desc     total_pages  total_mb
-------------- ------------- ------------ ---------
NoCompress     IN_ROW_DATA   204          1.593750
NoCompress     LOB_DATA      26           0.203125
Compress       IN_ROW_DATA   92           0.718750
Compress       LOB_DATA      0            0.000000

In addition, reading from a table with compressed data is heavily slowed down by the DECOMPRESS function:

SET STATISTICS IO, TIME ON

SELECT *
FROM #NoCompress
WHERE JSON_VALUE(JSON_Val, '$.Event') = 'CREATE_TABLE'

SELECT DatabaseLogID, [JSON] = CAST(DECOMPRESS(JSON_CompressVal) AS NVARCHAR(MAX))
FROM #Compress
WHERE JSON_VALUE(CAST(DECOMPRESS(JSON_CompressVal) AS NVARCHAR(MAX)), '$.Event') =
    N'CREATE_TABLE'

SET STATISTICS IO, TIME OFF

Logical reads go down, but the execution time remains extremely slow:

Table 'NoCompress'. Scan count 1, logical reads 187, ...
    CPU time = 16 ms, elapsed time = 37 ms

Table 'Compress'. Scan count 1, logical reads 79, ...
    CPU time = 109 ms, elapsed time = 212 ms

As an alternative, you can add a PERSISTED computed column:

ALTER TABLE #Compress ADD EventType_Persisted
    AS CAST(JSON_VALUE(CAST(
            DECOMPRESS(JSON_CompressVal) AS NVARCHAR(MAX)), '$.Event')
        AS VARCHAR(200)) PERSISTED

Or create a computed column and a nonclustered index on top of it:

ALTER TABLE #Compress ADD EventType_NonPersisted
    AS CAST(JSON_VALUE(CAST(
            DECOMPRESS(JSON_CompressVal) AS NVARCHAR(MAX)), '$.Event')
        AS VARCHAR(200))

CREATE INDEX ix ON #Compress (EventType_NonPersisted)

Sometimes network latency hurts performance much more than in the examples above. Imagine that on the client side we can GZIP-compress the JSON data and send it to the server:

DECLARE @json NVARCHAR(MAX) = (
        SELECT t.[name]
             , t.[object_id]
             , [columns] = (
                     SELECT c.column_id, c.[name], c.system_type_id
                     FROM sys.all_columns c
                     WHERE c.[object_id] = t.[object_id]
                     FOR JSON AUTO
                 )
        FROM sys.all_objects t
        FOR JSON AUTO
    )

SELECT InitialSize = DATALENGTH(@json) / 1048576.
     , CompressSize = DATALENGTH(COMPRESS(@json)) / 1048576.

For me, this turned into a real lifesaver when I was trying to reduce network traffic on one of my projects:

InitialSize    CompressSize
-------------- -------------
1.24907684     0.10125923

4. Compression

You can also use data compression to reduce the size of tables. Previously, compression was available only in the Enterprise edition, but since the release of SQL Server 2016 SP1 you can use this feature even in the Express edition:

USE AdventureWorks2014
GO

DROP TABLE IF EXISTS #InitialTable
DROP TABLE IF EXISTS #None
DROP TABLE IF EXISTS #Row
DROP TABLE IF EXISTS #Page
GO

CREATE TABLE #None (ID INT, Val NVARCHAR(MAX), INDEX ix CLUSTERED (ID)
    WITH (DATA_COMPRESSION = NONE))

CREATE TABLE #Row  (ID INT, Val NVARCHAR(MAX), INDEX ix CLUSTERED (ID)
    WITH (DATA_COMPRESSION = ROW))

CREATE TABLE #Page (ID INT, Val NVARCHAR(MAX), INDEX ix CLUSTERED (ID)
    WITH (DATA_COMPRESSION = PAGE))
GO

SELECT h.SalesOrderID
     , JSON_Data =
           (
                SELECT p.[Name]
                FROM Sales.SalesOrderDetail d
                JOIN Production.Product p ON d.ProductID = p.ProductID
                WHERE d.SalesOrderID = h.SalesOrderID
                FOR JSON AUTO
           )
INTO #InitialTable
FROM Sales.SalesOrderHeader h

SET STATISTICS IO, TIME ON

INSERT INTO #None
SELECT *
FROM #InitialTable
OPTION(MAXDOP 1)

INSERT INTO #Row
SELECT *
FROM #InitialTable
OPTION(MAXDOP 1)

INSERT INTO #Page
SELECT *
FROM #InitialTable
OPTION(MAXDOP 1)

SET STATISTICS IO, TIME OFF

None: CPU time = 62 ms,  elapsed time = 68 ms
Row:  CPU time = 94 ms,  elapsed time = 89 ms
Page: CPU time = 125 ms, elapsed time = 126 ms

PAGE compression uses algorithms that find similar chunks of data and replace them with smaller values. ROW compression truncates values down to the smallest possible data type and trims redundant characters. For example, say a column is of type INT, which takes 4 bytes, but the values stored there are below 255. For such records the type is truncated, and the data on disk is stored as if it were TINYINT.

USE tempdb
GO

SELECT obj_name = OBJECT_NAME(p.[object_id])
     , a.[type_desc]
     , a.total_pages
     , total_mb = a.total_pages * 8 / 1024.
FROM sys.partitions p
JOIN sys.allocation_units a ON p.[partition_id] = a.container_id
WHERE p.[object_id] IN (OBJECT_ID('#None'), OBJECT_ID('#Page'), OBJECT_ID('#Row'))

obj_name   type_desc     total_pages  total_mb
---------- ------------- ------------ ---------
None       IN_ROW_DATA   1156         9.031250
Row        IN_ROW_DATA   1132         8.843750
Page       IN_ROW_DATA   1004         7.843750
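
Before rebuilding anything, you can ask SQL Server to estimate the expected gain using the built-in sys.sp_estimate_data_compression_savings procedure (shown here for the dbo.DatabaseLog table used earlier; any table and any supported compression type will do):

```sql
USE AdventureWorks2014
GO

-- Estimate how much PAGE compression would save on dbo.DatabaseLog.
EXEC sys.sp_estimate_data_compression_savings
      @schema_name      = 'dbo'
    , @object_name      = 'DatabaseLog'
    , @index_id         = NULL
    , @partition_number = NULL
    , @data_compression = 'PAGE'
```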

5. ColumnStore

But my favorite feature is ColumnStore indexes, which get better with every new version of SQL Server.

The main idea of ColumnStore is to split the data in a table into RowGroups of roughly one million rows each and, within each group, compress the data column by column. This brings significant disk-space savings, fewer logical reads, and faster analytical queries. So, if you need to store an archive with JSON information, you can create a clustered ColumnStore index:

USE AdventureWorks2014
GO

DROP TABLE IF EXISTS #CCI
DROP TABLE IF EXISTS #InitialTable
GO

CREATE TABLE #CCI (ID INT, Val NVARCHAR(MAX), INDEX ix CLUSTERED COLUMNSTORE)
GO

SELECT h.SalesOrderID
     , JSON_Data = CAST(
           (
                SELECT p.[Name]
                FROM Sales.SalesOrderDetail d
                JOIN Production.Product p ON d.ProductID = p.ProductID
                WHERE d.SalesOrderID = h.SalesOrderID
                FOR JSON AUTO
           )
       AS VARCHAR(8000)) -- SQL Server 2012..2016
INTO #InitialTable
FROM Sales.SalesOrderHeader h

SET STATISTICS TIME ON

INSERT INTO #CCI
SELECT *
FROM #InitialTable

SET STATISTICS TIME OFF

In this case, the insert speed into the table will roughly correspond to PAGE compression. In addition, you can fine-tune the process for OLTP workloads using the COMPRESSION_DELAY option.

CCI: CPU time = 140 ms, elapsed time = 136 ms
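
For example, the delay can be declared right on the index, so that freshly inserted rows sit in the delta store for a while before being compressed into RowGroups (a sketch; the table name and the 10-minute value are arbitrary):

```sql
-- Sketch: postpone columnstore compression of hot rows by 10 minutes.
DROP TABLE IF EXISTS #CCI_Delay
CREATE TABLE #CCI_Delay (ID INT, Val NVARCHAR(MAX))

CREATE CLUSTERED COLUMNSTORE INDEX cci ON #CCI_Delay
    WITH (COMPRESSION_DELAY = 10 MINUTES)
```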

Before SQL Server 2017, ColumnStore indexes did not support the [N]VARCHAR(MAX) data types, but with the new release we can store strings of any length in a ColumnStore:

USE tempdb
GO

SELECT o.[name]
     , s.used_page_count / 128.
FROM sys.indexes i
JOIN sys.dm_db_partition_stats s ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
JOIN sys.objects o ON i.[object_id] = o.[object_id]
WHERE i.[object_id] = OBJECT_ID('#CCI')

Compared to traditional compression, the gain is sometimes very impressive:

------ ---------
CCI   0.796875

6. Creating JSON

Now let's look at how to generate JSON. If you have already worked with XML in SQL Server, everything here works by analogy. The easiest way to generate JSON is to use FOR JSON AUTO. In this case, a JSON array of objects is generated:

DROP TABLE IF EXISTS #Users
GO

CREATE TABLE #Users (
      UserID INT
    , UserName SYSNAME
    , RegDate DATETIME
)

INSERT INTO #Users
VALUES (1, 'Paul Denton', '20170123')
     , (2, 'JC Denton', NULL)
     , (3, 'Maggie Cho', NULL)

SELECT *
FROM #Users
FOR JSON AUTO

[
    {
        "UserID":1,
        "UserName":"Paul Denton",
        "RegDate":"2017-01-23T00:00:00"
    },
    {
        "UserID":2,
        "UserName":"JC Denton"
    },
    {
        "UserID":3,
        "UserName":"Maggie Cho"
    }
]

It is important to note that NULL values are ignored.

If we want to include them in the JSON, we can use the INCLUDE_NULL_VALUES option:

SELECT UserID, RegDate
FROM #Users
FOR JSON AUTO, INCLUDE_NULL_VALUES

[
    {
        "UserID":1,
        "RegDate":"2017-01-23T00:00:00"
    },
    {
        "UserID":2,
        "RegDate":null
    },
    {
        "UserID":3,
        "RegDate":null
    }
]

If you need to get rid of the square brackets, the WITHOUT_ARRAY_WRAPPER option will help us with that:

SELECT TOP(1) UserID, UserName
FROM #Users
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER

{
    "UserID":1,
    "UserName":"Paul Denton"
}

If we want to wrap the result in a root element, the ROOT option is provided for that:

SELECT UserID, UserName
FROM #Users
FOR JSON AUTO, ROOT('Users')

{
    "Users":[
        {
            "UserID":1,
            "UserName":"Paul Denton"
        },
        {
            "UserID":2,
            "UserName":"JC Denton"
        },
        {
            "UserID":3,
            "UserName":"Maggie Cho"
        }
    ]
}

If you need to build JSON with a more complex structure, assign custom names to properties, or group them, you need the FOR JSON PATH expression:

SELECT TOP(1) UserID
            , UserName AS [Detail.FullName]
            , RegDate AS [Detail.RegDate]
FROM #Users
FOR JSON PATH

[
    {
        "UserID":1,
        "Detail":{
            "FullName":"Paul Denton",
            "RegDate":"2017-01-23T00:00:00"
        }
    }
]

SELECT t.[name]
     , t.[object_id]
     , [columns] = (
             SELECT c.column_id, c.[name]
             FROM sys.columns c
             WHERE c.[object_id] = t.[object_id]
             FOR JSON AUTO
         )
FROM sys.tables t
FOR JSON AUTO

[
    {
        "name":"#Users",
        "object_id":1483152329,
        "columns":[
            {
            "column_id":1,
            "name":"UserID"
            },
            {
            "column_id":2,
            "name":"UserName"
            },
            {
            "column_id":3,
            "name":"RegDate"
            }
        ]
    }
]

7. Checking JSON

To check the validity of a JSON document, there is the ISJSON function, which returns 1 if it is JSON, 0 if it is not, and NULL if NULL was passed:

DECLARE @json1 NVARCHAR(MAX) = N'{"id": 1}'
      , @json2 NVARCHAR(MAX) = N'[1,2,3]'
      , @json3 NVARCHAR(MAX) = N'1'
      , @json4 NVARCHAR(MAX) = N''
      , @json5 NVARCHAR(MAX) = NULL

SELECT ISJSON(@json1) -- 1
     , ISJSON(@json2) -- 1
     , ISJSON(@json3) -- 0
     , ISJSON(@json4) -- 0
     , ISJSON(@json5) -- NULL

8. JSON_VALUE

To extract a scalar value from JSON, you can use the JSON_VALUE function:

DECLARE @json NVARCHAR(MAX) = N'
    {
        "UserID": 1,
        "UserName": "JC Denton",
        "IsActive": true,
        "Date": "2016-05-31T00:00:00",
        "Settings": [
             {
                "Language": "EN"
             },
             {
                "Skin": "FlatUI"
             }
          ]
    }'

SELECT JSON_VALUE(@json, '$.UserID')
     , JSON_VALUE(@json, '$.UserName')
     , JSON_VALUE(@json, '$.Settings[0].Language')
     , JSON_VALUE(@json, '$.Settings[1].Skin')
     , JSON_QUERY(@json, '$.Settings')

9. OpenJson

The OPENJSON table-valued function is used to parse tabular data. Note right away that it works only on databases with compatibility level 130 or higher.

The OPENJSON function has two modes of operation. The simplest is not to specify a schema for the result set:

DECLARE @json NVARCHAR(MAX) = N'
    {
        "UserID": 1,
        "UserName": "JC Denton",
        "IsActive": true,
        "RegDate": "2016-05-31T00:00:00"
    }'

SELECT * FROM OPENJSON(@json)

In the second mode, we can describe ourselves what the returned result should look like: the column names, how many there are, and where to take their values from:

DECLARE @json NVARCHAR(MAX) = N'
    [
        {
            "User ID": 1,
            "UserName": "JC Denton",
            "IsActive": true,
            "Date": "2016-05-31T00:00:00",
            "Settings": [
                 {
                    "Language": "EN"
                 },
                 {
                    "Skin": "FlatUI"
                 }
              ]
        },
        {
            "User ID": 2,
            "UserName": "Paul Denton",
            "IsActive": false
        }
    ]'

SELECT * FROM OPENJSON(@json)
SELECT * FROM OPENJSON(@json, '$[0]')
SELECT * FROM OPENJSON(@json, '$[0].Settings[0]')

SELECT *
FROM OPENJSON(@json)
    WITH (
          UserID INT '$."User ID"'
        , UserName SYSNAME
        , IsActive BIT
        , RegDate DATETIME '$.Date'
        , Settings NVARCHAR(MAX) AS JSON
        , Skin SYSNAME '$.Settings[1].Skin'
    )

If our document has a nested hierarchy, the following example will help:

DECLARE @json NVARCHAR(MAX) = N'
    [
        {
            "FullName": "JC Denton",
            "Children": [
                { "FullName": "Mary", "Male": "0" },
                { "FullName": "Paul", "Male": "1" }
            ]
        },
        {
            "FullName": "Paul Denton"
        }
    ]'

SELECT t.FullName, c.*
FROM OPENJSON(@json)
    WITH (
          FullName SYSNAME
        , Children NVARCHAR(MAX) AS JSON
    ) t
OUTER APPLY OPENJSON(Children)
    WITH (
          ChildrenName SYSNAME '$.FullName'
        , Male TINYINT
    ) c

10. String Split

With the release of SQL Server 2016, the STRING_SPLIT function was introduced. Everyone breathed a sigh of relief: no more reinventing the wheel. However, there is one more option — the OPENJSON construct we examined earlier. Let's test several ways of splitting a string:

SET NOCOUNT ON
SET STATISTICS TIME OFF

DECLARE @x VARCHAR(MAX) = '1' + REPLICATE(CAST(',1' AS VARCHAR(MAX)), 1000)

SET STATISTICS TIME ON

;WITH cte AS
(
    SELECT s = 1
         , e = COALESCE(NULLIF(CHARINDEX(',', @x, 1), 0), LEN(@x) + 1)
         , v = SUBSTRING(@x, 1, COALESCE(NULLIF(CHARINDEX(',', @x, 1), 0), LEN(@x) + 1) - 1)
    UNION ALL
    SELECT s = CONVERT(INT, e) + 1
         , e = COALESCE(NULLIF(CHARINDEX(',', @x, e + 1), 0), LEN(@x) + 1)
         , v = SUBSTRING(@x, e + 1, COALESCE(NULLIF(CHARINDEX(',',  @x, e + 1), 0)
         , LEN(@x) + 1)- e - 1)
    FROM cte
    WHERE e < LEN(@x) + 1
)
SELECT v
FROM cte
WHERE LEN(v) > 0
OPTION (MAXRECURSION 0)

SELECT t.c.value('(./text())[1]', 'INT')
FROM (
    SELECT x = CONVERT(XML, '<i>' + REPLACE(@x, ',', '</i><i>') + '</i>').query('.')
) a
CROSS APPLY x.nodes('i') t(c)

SELECT *
FROM STRING_SPLIT(@x, N',') -- NCHAR(1)/CHAR(1)

SELECT [value]
FROM OPENJSON(N'[' + @x + N']') -- [1,2,3,4]

SET STATISTICS TIME OFF

If you look at the results, you will notice that OPENJSON can in some cases be faster than the STRING_SPLIT function, not to mention the kludges with XML and a CTE:

              500k    100k   50k    1000
------------- ------- ------ ------ ------
CTE           29407   2406   1266   58
XML           6520    1084   553    259
STRING_SPLIT  4665    594    329    27
OPENJSON      2606    506    273    19

Moreover, under a high-load OLTP workload there is no noticeable difference between OPENJSON and STRING_SPLIT (1,000 iterations + 10 comma-separated values):

CTE          = 4629 ms
XML          = 4397 ms
STRING_SPLIT = 4011 ms
OPENJSON     = 4047 ms

11. Lax & Strict

SQL Server 2005开始,可以通过使用XML SCHEMA COLLECTION从数据库端验证XML。我们为XML描述了一个模式,然后基于它,我们可以验证数据。JSON没有明确的此类功能,但有一个解决方法。

As far as I know, JSON path expressions come in two flavors: strict and lax (used by default). The difference is that if we specify a nonexistent or invalid path while parsing, a lax expression returns NULL, while strict raises an error:

DECLARE @json NVARCHAR(MAX) = N'
    {
        "UserID": 1,
        "UserName": "JC Denton"
    }'

SELECT JSON_VALUE(@json, '$.IsActive')
     , JSON_VALUE(@json, 'lax $.IsActive')
     , JSON_VALUE(@json, 'strict $.UserName')

SELECT JSON_VALUE(@json, 'strict $.IsActive')

Msg 13608, Level 16, State 2, Line 12
Property cannot be found on the specified JSON path.

12. Modify

There is a JSON_MODIFY function for modifying data inside JSON. The examples are quite simple, so there is no need to describe them in detail:

DECLARE @json NVARCHAR(MAX) = N'
    {
        "FirstName": "JC",
        "LastName": "Denton",
        "Age": 20,
        "Skills": ["SQL Server 2014"]
    }'

-- 20 -> 22
SET @json = JSON_MODIFY(@json, '$.Age', CAST(JSON_VALUE(@json, '$.Age') AS INT) + 2)

-- "SQL 2014" -> "SQL 2016"
SET @json = JSON_MODIFY(@json, '$.Skills[0]', 'SQL 2016')
SET @json = JSON_MODIFY(@json, 'append $.Skills', 'JSON')
SELECT * FROM OPENJSON(@json)

-- delete Age
SELECT * FROM OPENJSON(JSON_MODIFY(@json, 'lax $.Age', NULL))

-- set NULL
SELECT * FROM OPENJSON(JSON_MODIFY(@json, 'strict $.Age', NULL))
GO

DECLARE @json NVARCHAR(100) = N'{ "price": 105.90 }' -- rename
SET @json =
    JSON_MODIFY(
        JSON_MODIFY(@json, '$.Price',
            CAST(JSON_VALUE(@json, '$.price') AS NUMERIC(6,2))),
                '$.price', NULL)

SELECT @json

13. Implicit Conversion

And so we come to the most exciting part: performance-related issues.

When parsing JSON, there is one nuance to remember: OPENJSON and JSON_VALUE return results in Unicode format unless we override it. In the AdventureWorks database, the AccountNumber column is of the VARCHAR data type:

USE AdventureWorks2014
GO

DECLARE @json NVARCHAR(MAX) = N'{ "AccountNumber": "AW00000009" }'

SET STATISTICS IO ON

SELECT CustomerID, AccountNumber
FROM Sales.Customer
WHERE AccountNumber = JSON_VALUE(@json, '$.AccountNumber')

SELECT CustomerID, AccountNumber
FROM Sales.Customer
WHERE AccountNumber = CAST(JSON_VALUE(@json, '$.AccountNumber') AS VARCHAR(10))

SET STATISTICS IO OFF

The difference in logical reads:

Table 'Customer'. Scan count 1, logical reads 37, ...
Table 'Customer'. Scan count 0, logical reads 2, ...

Since the data types of the column and of the function result differ, SQL Server has to perform an implicit conversion based on data type precedence — in this particular case, to NVARCHAR. Alas, any computation or conversion on an indexed column most often leads to an Index Scan.

If we explicitly specify the type to match the column, we get an Index Seek instead.

14. Indexes

Now let's see how to index JSON objects. As mentioned above, unlike XML, SQL Server 2016 did not add a separate data type for JSON, so any string data type can be used to store it.

Anyone with XML experience will remember that SQL Server offers several kinds of indexes for that format which can speed up certain queries. For the string types that JSON has to be stored in, no such indexes exist at all.

Alas, JSONB never shipped. The development team was in a hurry to release the JSON functionality and said, literally: "If you find speed lacking, we will add JSONB in the next major version." But nothing of the kind was added with the releases of SQL Server 2017/2019.

This is where computed columns come in handy: they can expose particular properties from the JSON documents you need to search on, and indexes can then be built on those columns.

USE AdventureWorks2014
GO

DROP TABLE IF EXISTS #JSON
GO

CREATE TABLE #JSON (
      DatabaseLogID INT PRIMARY KEY
    , InfoJSON NVARCHAR(MAX) NOT NULL
)
GO

INSERT INTO #JSON
SELECT DatabaseLogID
     , InfoJSON = (
            SELECT PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL]
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
         )
FROM dbo.DatabaseLog

Parsing the same data over and over is not very rational:

SET STATISTICS IO, TIME ON

SELECT *
FROM #JSON
WHERE JSON_VALUE(InfoJSON, '$.Schema') + '.' + JSON_VALUE(InfoJSON, '$.Object') =
    'Person.Person'

SET STATISTICS IO, TIME OFF

Table 'JSON'. Scan count 1, logical reads 187, ...
    CPU time = 16 ms, elapsed time = 29 ms

So it sometimes makes sense to create a computed column and include it in an index:

ALTER TABLE #JSON
    ADD ObjectName AS
        JSON_VALUE(InfoJSON, '$.Schema') + '.' + JSON_VALUE(InfoJSON, '$.Object')
GO

CREATE INDEX IX_ObjectName ON #JSON (ObjectName)
GO

SET STATISTICS IO, TIME ON

SELECT *
FROM #JSON
WHERE JSON_VALUE(InfoJSON, '$.Schema') + '.' + JSON_VALUE(InfoJSON, '$.Object') =
    'Person.Person'

SELECT *
FROM #JSON
WHERE ObjectName = 'Person.Person'

SET STATISTICS IO, TIME OFF

Note that the SQL Server optimizer is smart enough that you don't have to change anything in your code:

Table 'JSON'. Scan count 1, logical reads 13, ...
    CPU time = 0 ms, elapsed time = 1 ms

Table 'JSON'. Scan count 1, logical reads 13, ...
    CPU time = 0 ms, elapsed time = 1 ms

In addition, if you want to search the contents of an array or an entire section of an object, you can create a regular index or a full-text index. The full-text index has no special rules for handling JSON: it simply splits the text into separate tokens, using double quotes, commas, and brackets as delimiters — which is exactly what the JSON structure itself consists of:

USE AdventureWorks2014
GO

DROP TABLE IF EXISTS dbo.LogJSON
GO

CREATE TABLE dbo.LogJSON (
      DatabaseLogID INT
    , InfoJSON NVARCHAR(MAX) NOT NULL
    , CONSTRAINT pk PRIMARY KEY (DatabaseLogID)
)
GO

INSERT INTO dbo.LogJSON
SELECT DatabaseLogID
     , InfoJSON = (
            SELECT PostTime, DatabaseUser, [Event], ObjectName = [Schema] + '.' + [Object]
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
         )
FROM dbo.DatabaseLog
GO

IF EXISTS(
    SELECT *
    FROM sys.fulltext_catalogs
    WHERE [name] = 'JSON_FTC'
)
    DROP FULLTEXT CATALOG JSON_FTC
GO

CREATE FULLTEXT CATALOG JSON_FTC WITH ACCENT_SENSITIVITY = ON AUTHORIZATION dbo
GO

IF EXISTS (
        SELECT *
        FROM sys.fulltext_indexes
        WHERE [object_id] = OBJECT_ID(N'dbo.LogJSON')
    ) BEGIN
    ALTER FULLTEXT INDEX ON dbo.LogJSON DISABLE
    DROP FULLTEXT INDEX ON dbo.LogJSON
END
GO

CREATE FULLTEXT INDEX ON dbo.LogJSON (InfoJSON) KEY INDEX pk ON JSON_FTC
GO

SELECT *
FROM dbo.LogJSON
WHERE CONTAINS(InfoJSON, 'ALTER_TABLE')

15. Parser Performance

And finally, we have probably reached the most interesting part of this article: how fast is JSON parsing compared to XML in SQL Server? To answer this question, I prepared a series of tests.

Let's prepare two large files in JSON and XML formats:

/*
    EXEC sys.sp_configure 'show advanced options', 1
    GO
    RECONFIGURE
    GO

    EXEC sys.sp_configure 'xp_cmdshell', 1
    GO
    RECONFIGURE WITH OVERRIDE
    GO
*/

USE AdventureWorks2014
GO

DROP PROCEDURE IF EXISTS ##get_xml
DROP PROCEDURE IF EXISTS ##get_json
GO

CREATE PROCEDURE ##get_xml
AS
    SELECT r.ProductID
         , r.[Name]
         , r.ProductNumber
         , d.OrderQty
         , d.UnitPrice
         , r.ListPrice
         , r.Color
         , r.MakeFlag
    FROM Sales.SalesOrderDetail d
    JOIN Production.Product r ON d.ProductID = r.ProductID
    FOR XML PATH ('Product'), ROOT('Products')
GO

CREATE PROCEDURE ##get_json
AS
    SELECT (
        SELECT r.ProductID
             , r.[Name]
             , r.ProductNumber
             , d.OrderQty
             , d.UnitPrice
             , r.ListPrice
             , r.Color
             , r.MakeFlag
        FROM Sales.SalesOrderDetail d
        JOIN Production.Product r ON d.ProductID = r.ProductID
        FOR JSON PATH
    )
GO

DECLARE @sql NVARCHAR(4000)
SET @sql =
    'bcp "EXEC ##get_xml" queryout "X:\sample.xml" -S ' + @@servername + ' -T -w -r -t'
EXEC sys.xp_cmdshell @sql

SET @sql =
    'bcp "EXEC ##get_json" queryout "X:\sample.txt" -S ' + @@servername + ' -T -w -r -t'
EXEC sys.xp_cmdshell @sql

Let's check the performance of OPENJSON, OPENXML, and XQuery:

SET NOCOUNT ON
SET STATISTICS TIME ON

DECLARE @xml XML
SELECT @xml = BulkColumn
FROM OPENROWSET(BULK 'X:\sample.xml', SINGLE_BLOB) x

DECLARE @jsonu NVARCHAR(MAX)
SELECT @jsonu = BulkColumn
FROM OPENROWSET(BULK 'X:\sample.txt', SINGLE_NCLOB) x

/*
    XML:      CPU = 891 ms, Time = 886 ms
    NVARCHAR: CPU = 141 ms, Time = 166 ms
*/

SELECT ProductID =     t.c.value('(ProductID/text())[1]', 'INT')
     , [Name] =        t.c.value('(Name/text())[1]', 'NVARCHAR(50)')
     , ProductNumber = t.c.value('(ProductNumber/text())[1]', 'NVARCHAR(25)')
     , OrderQty =      t.c.value('(OrderQty/text())[1]', 'SMALLINT')
     , UnitPrice =     t.c.value('(UnitPrice/text())[1]', 'MONEY')
     , ListPrice =     t.c.value('(ListPrice/text())[1]', 'MONEY')
     , Color =         t.c.value('(Color/text())[1]', 'NVARCHAR(15)')
     , MakeFlag =      t.c.value('(MakeFlag/text())[1]', 'BIT')
FROM @xml.nodes('Products/Product') t(c)

/*
    CPU time = 6203 ms, elapsed time = 6492 ms
*/

DECLARE @doc INT
EXEC sys.sp_xml_preparedocument @doc OUTPUT, @xml

SELECT *
FROM OPENXML(@doc, '/Products/Product', 2)
    WITH (
          ProductID INT
        , [Name] NVARCHAR(50)
        , ProductNumber NVARCHAR(25)
        , OrderQty SMALLINT
        , UnitPrice MONEY
        , ListPrice MONEY
        , Color NVARCHAR(15)
        , MakeFlag BIT
    )

EXEC sys.sp_xml_removedocument @doc

/*
    CPU time = 2656 ms, elapsed time = 3489 ms
    CPU time = 3844 ms, elapsed time = 4482 ms
    CPU time = 0 ms, elapsed time = 4 ms
*/

SELECT *
FROM OPENJSON(@jsonu)
    WITH (
          ProductID INT
        , [Name] NVARCHAR(50)
        , ProductNumber NVARCHAR(25)
        , OrderQty SMALLINT
        , UnitPrice MONEY
        , ListPrice MONEY
        , Color NVARCHAR(15)
        , MakeFlag BIT
    )

/*
    CPU time = 1359 ms, elapsed time = 1642 ms
*/

SET STATISTICS TIME, IO OFF

Now let's look at the performance of the JSON_VALUE scalar function versus XQuery:

SET NOCOUNT ON

DECLARE @jsonu NVARCHAR(MAX) = N'[
    {"User":"Sergii Syrovatchenko","Age":28,"Skills":["SQL Server","T-SQL","JSON","XML"]},
    {"User":"JC Denton","Skills":["Microfibral Muscle","Regeneration","EMP Shield"]},
    {"User":"Paul Denton","Age":32,"Skills":["Vision Enhancement"]}]'

DECLARE @jsonu_f NVARCHAR(MAX) = N'[
   {
      "User":"Sergii Syrovatchenko",
      "Age":28,
      "Skills":[
         "SQL Server",
         "T-SQL",
         "JSON",
         "XML"
      ]
   },
   {
      "User":"JC Denton",
      "Skills":[
         "Microfibral Muscle",
         "Regeneration",
         "EMP Shield"
      ]
   },
   {
      "User":"Paul Denton",
      "Age":32,
      "Skills":[
         "Vision Enhancement"
      ]
   }
]'

DECLARE @json VARCHAR(MAX) = @jsonu
      , @json_f VARCHAR(MAX) = @jsonu_f

DECLARE @xml XML = N'
<Users>
    <User Name="Sergii Syrovatchenko">
        <Age>28</Age>
        <Skills>
            <Skill>SQL Server</Skill>
            <Skill>T-SQL</Skill>
            <Skill>JSON</Skill>
            <Skill>XML</Skill>
        </Skills>
    </User>
    <User Name="JC Denton">
        <Skills>
            <Skill>Microfibral Muscle</Skill>
            <Skill>Regeneration</Skill>
            <Skill>EMP Shield</Skill>
        </Skills>
    </User>
    <User Name="Paul Denton">
        <Age>28</Age>
        <Skills>
            <Skill>Vision Enhancement</Skill>
        </Skills>
    </User>
</Users>'

DECLARE @i INT
      , @int INT
      , @varchar VARCHAR(100)
      , @nvarchar NVARCHAR(100)
      , @s DATETIME
      , @runs INT = 100000

DECLARE @t TABLE (
      iter INT IDENTITY PRIMARY KEY
    , data_type VARCHAR(100)
    , [path] VARCHAR(1000)
    , [type] VARCHAR(1000)
    , time_ms INT
)

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @int = JSON_VALUE(@jsonu, '$[0].Age')
         , @i += 1
INSERT INTO @t
SELECT '@jsonu', '$[0].Age', 'INT', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @int = JSON_VALUE(@jsonu_f, '$[0].Age')
         , @i += 1
INSERT INTO @t
SELECT '@jsonu_f', '$[0].Age', 'INT', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @int = JSON_VALUE(@json, '$[0].Age')
         , @i += 1
INSERT INTO @t
SELECT '@json', '$[0].Age', 'INT', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @int = JSON_VALUE(@json_f, '$[0].Age')
         , @i += 1
INSERT INTO @t
SELECT '@json_f', '$[0].Age', 'INT', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @int = @xml.value('(Users/User[1]/Age/text())[1]', 'INT')
         , @i += 1
INSERT INTO @t
SELECT '@xml', '(Users/User[1]/Age/text())[1]', 'INT', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @nvarchar = JSON_VALUE(@jsonu, '$[1].User')
         , @i += 1
INSERT INTO @t
SELECT '@jsonu', '$[1].User', 'NVARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @nvarchar = JSON_VALUE(@jsonu_f, '$[1].User')
         , @i += 1
INSERT INTO @t
SELECT '@jsonu_f', '$[1].User', 'NVARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @varchar = JSON_VALUE(@json, '$[1].User')
         , @i += 1
INSERT INTO @t
SELECT '@json', '$[1].User', 'VARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @varchar = JSON_VALUE(@json_f, '$[1].User')
         , @i += 1
INSERT INTO @t
SELECT '@json_f', '$[1].User', 'VARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @nvarchar = @xml.value('(Users/User[2]/@Name)[1]', 'NVARCHAR(100)')
         , @i += 1
INSERT INTO @t
SELECT '@xml', '(Users/User[2]/@Name)[1]', 'NVARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @varchar = @xml.value('(Users/User[2]/@Name)[1]', 'VARCHAR(100)')
         , @i += 1
INSERT INTO @t
SELECT '@xml', '(Users/User[2]/@Name)[1]', 'VARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @nvarchar = JSON_VALUE(@jsonu, '$[2].Skills[0]')
         , @i += 1
INSERT INTO @t
SELECT '@jsonu', '$[2].Skills[0]', 'NVARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @nvarchar = JSON_VALUE(@jsonu_f, '$[2].Skills[0]')
         , @i += 1
INSERT INTO @t
SELECT '@jsonu_f', '$[2].Skills[0]', 'NVARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @varchar = JSON_VALUE(@json, '$[2].Skills[0]')
         , @i += 1
INSERT INTO @t
SELECT '@json', '$[2].Skills[0]', 'VARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @varchar = JSON_VALUE(@json_f, '$[2].Skills[0]')
         , @i += 1
INSERT INTO @t
SELECT '@json_f', '$[2].Skills[0]', 'VARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT @i = 1, @s = GETDATE()
WHILE @i <= @runs
    SELECT @varchar = @xml.value('(Users/User[3]/Skills/Skill/text())[1]', 'VARCHAR(100)')
         , @i += 1
INSERT INTO @t
SELECT '@xml', '(Users/User[3]/Skills/Skill/text())[1]', 'VARCHAR', DATEDIFF(ms, @s, GETDATE())

SELECT * FROM @t

Results:

data_type  path                                    type      2016 SP2(CU17) 2017(CU25) 2019(CU11)
---------- -------------------------------------- --------- ------------- ----------- -----------
@jsonu     $[0].Age                               INT       1030          314         250
@jsonu_f   $[0].Age                               INT       1014          333         290
@json      $[0].Age                               INT       1100          393         337
@json_f    $[0].Age                               INT       1166          444         396
@xml       (Users/User[1]/Age/text())[1]          INT       28037         30513       34427

@jsonu     $[1].User                              NVARCHAR  1223          513         490
@jsonu_f   $[1].User                              NVARCHAR  1367          647         673
@json      $[1].User                              VARCHAR   1343          623         604
@json_f    $[1].User                              VARCHAR   1527          790         800
@xml       (Users/User[2]/@Name)[1]               NVARCHAR  22487         25094       26840
@xml       (Users/User[2]/@Name)[1]               VARCHAR   22816         25190       26676

@jsonu     $[2].Skills[0]                         NVARCHAR  1654          856         814
@jsonu_f   $[2].Skills[0]                         NVARCHAR  1926          1124        1103
@json      $[2].Skills[0]                         VARCHAR   1790          960         907
@json_f    $[2].Skills[0]                         VARCHAR   2090          1250        1196
@xml       (Users/User[3]/Skills/Skill/text())[1] VARCHAR   23530         25426       27840

Another interesting nuance: avoid mixing JSON_VALUE and OPENJSON calls. Also, try to specify only the columns you actually need after parsing. With JSON this is very simple: the fewer columns you parse, the faster you get the result:

SET NOCOUNT ON
SET STATISTICS TIME ON

DECLARE @json NVARCHAR(MAX)
SELECT @json = BulkColumn
FROM OPENROWSET(BULK 'X:\sample.txt', SINGLE_NCLOB) x

SELECT COUNT_BIG(*)
FROM OPENJSON(@json)
WITH (
      ProductID INT
    , ProductNumber NVARCHAR(25)
    , OrderQty SMALLINT
    , UnitPrice MONEY
    , ListPrice MONEY
    , Color NVARCHAR(15)
)
WHERE Color = 'Black'

SELECT COUNT_BIG(*)
FROM OPENJSON(@json) WITH (Color NVARCHAR(15))
WHERE Color = 'Black'

SELECT COUNT_BIG(*)
FROM OPENJSON(@json)
WHERE JSON_VALUE(value, '$.Color') = 'Black'

/*
    2016 SP2(CU17):

    CPU time = 1140 ms, elapsed time = 1144 ms
    CPU time = 781 ms, elapsed time = 789 ms
    CPU time = 2157 ms, elapsed time = 2144 ms

    2017(CU25):

    CPU time = 1016 ms, elapsed time = 1034 ms
    CPU time = 718 ms, elapsed time = 736 ms
    CPU time = 1282 ms, elapsed time = 1286 ms

    2019(CU11):

    CPU time = 998 ms, elapsed time = 1011 ms
    CPU time = 721 ms, elapsed time = 745 ms
    CPU time = 1301 ms, elapsed time = 1316 ms
*/

Conclusion

  • Parsing JSON data is 2-10 times faster than parsing XML.
  • JSON storage is usually more redundant than XML storage.
  • Parsing JSON data stored as Unicode is 5-15% faster.
  • Compared to XML, using JSON can significantly reduce the load on the server CPU.
  • Starting with SQL Server 2017, parsing scalar values from JSON is significantly faster.
  • If you can get rid of XML in favor of JSON, do it!
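As a minimal sketch of the last point, an XML payload can be converted to JSON in place: shred the XML with .nodes()/.value() and re-serialize with FOR JSON PATH. The element names here are hypothetical, not from the article's benchmark data:

```sql
-- Sketch: converting an XML fragment to JSON (hypothetical data).
DECLARE @xml XML = N'<Users>
    <User Name="JC Denton"><Age>31</Age></User>
</Users>';

DECLARE @json NVARCHAR(MAX) = (
    SELECT u.x.value('@Name', 'NVARCHAR(100)') AS [User]
         , u.x.value('(Age/text())[1]', 'INT') AS Age
    FROM @xml.nodes('/Users/User') AS u(x)  -- one row per <User> element
    FOR JSON PATH                           -- serialize the rowset as a JSON array
);

SELECT @json;  -- [{"User":"JC Denton","Age":31}]
```

After such a one-time conversion, all subsequent reads go through the much faster JSON_VALUE/OPENJSON path instead of XQuery.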

Git

You can get all the code samples from my Git. Thanks!

https://www.codeproject.com/Articles/5308904/SQL-Server-JSON-Performance-Cookbook


Reprinted from blog.csdn.net/mzl87/article/details/121625363