Interpretation | Meichuang deeply involved in the release and implementation of five data security standards for the telecommunications and Internet industries

With the promulgation and implementation of laws and regulations such as the "Data Security Law" and the "Personal Information Protection Law", adhering to the principle of giving equal weight to security and development, actively responding to complex and severe security risks and challenges, and accelerating the construction of a data security system have become important tasks for the telecommunications and Internet industries.

"Safety development, standards first", standardization work is an important foundation for ensuring data security.

Recently, five data security standards for the telecommunications and Internet industries, organized by the China Communications Standards Association and drafted with the participation of Meichuang Technology, officially took effect on August 1:

  • "Technical Requirements and Test Methods for Classification and Grading of Telecom Network and Internet Data"

  • "Technical Requirements and Test Methods for Data Desensitization in Telecom Networks and Internet"

  • "Technical Requirements and Testing Methods for Abnormal Behavior Monitoring of Telecom Network and Internet Data"

  • "Technical Requirements and Testing Methods for Auditing Telecom Networks and Internet Databases"

  • "Technical Requirements and Test Methods for Data Security of Telecom Network and Internet Application Programming Interfaces"

Today is Telecommunications Day of the 2023 National Cybersecurity Awareness Week. This article interprets and shares the main contents of the five telecommunications and Internet industry data security standards.


"Technical Requirements and Test Methods for Classification and Grading of Telecom Network and Internet Data"

The document specifies the requirements and corresponding test methods for the technologies needed in the classification and grading of telecom network and Internet data.

It applies to the design, development, testing and evaluation of relevant technical tools or products by enterprises, organizations or institutions implementing data classification and grading.

Data classification and grading technology

Following the classification and grading process defined in YD/T 3813-2020 "Basic Telecom Enterprise Data Classification and Grading Method", the document divides telecom and Internet data classification and grading technology into three parts: data resource sorting, data classification and grading identification and annotation, and data classification and grading result management. The correspondence between each module and the classification and grading workflow is as follows:

[Figure: Correspondence between data classification and grading technology and the classification and grading process]

Technical requirements for data classification and grading

◼︎ Data resource sorting

Active sniffing data source discovery

It should support sending probe packets to a given IP address/address segment and a given port/port range; it should support discovering data sources built on relational databases, non-relational databases, file storage and sharing protocols; it should support periodic automatic triggering, event triggering, manual triggering or other startup methods.
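As a rough illustration of the active-probing requirement, the sketch below sends TCP connection probes across a given address segment and port range; the subnet, port range and timeout are hypothetical values chosen for the example, not figures from the standard.

```python
import socket
from ipaddress import ip_network

# Hypothetical probe parameters; the standard does not prescribe these values.
SUBNET = "10.0.0.0/30"        # given IP address segment
PORTS = range(3306, 3310)     # given port range (around MySQL's default port)
TIMEOUT_S = 0.5

def probe(host: str, port: int) -> bool:
    """Send a TCP connection probe and report whether the port answered."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

def discover_data_sources():
    """Scan the address segment and port range, yielding candidate data sources."""
    for host in ip_network(SUBNET).hosts():
        for port in PORTS:
            if probe(str(host), port):
                yield str(host), port

if __name__ == "__main__":
    for host, port in discover_data_sources():
        print(f"candidate data source at {host}:{port}")
```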

Passive monitoring data source discovery

It should support monitoring and analyzing traffic on the network where the data source resides; it should support discovering data sources built on relational databases, non-relational databases, file storage and sharing protocols, as well as dynamic data sources.

Data source information management

It should support manual entry or batch import of data source information; it should support editing and deleting data source information; it should support displaying data resource sorting results in list or icon form.

◼︎ Data classification and grading identification and annotation

Classification and grading mapping template formulation

Classification and grading mapping templates should be formulated based on existing classification and grading methods; the grading mapping template should define several levels according to the grading method and associate them with the lowest-level subcategories of the classification or with subcategories at other levels.

Data identification

It should be able to identify data in the data source; when identifying structured data, appropriate algorithms should be selected according to the characteristics of the data; when identifying unstructured data, formats such as text, pictures and office documents should be supported.
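For structured data, choosing an "appropriate algorithm according to the characteristics of the data" often comes down to pattern matching over sampled column values. The sketch below is a minimal, assumed approach using two hypothetical regular-expression rules and an 80% match threshold; it is not the standard's prescribed method.

```python
import re

# Hypothetical identification rules; a real tool would ship a much richer rule library.
RULES = {
    "phone_number": re.compile(r"^1[3-9]\d{9}$"),        # mainland China mobile number
    "email":        re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
}

def identify_column(values: list[str]) -> str | None:
    """Return the category whose pattern matches most of the sampled values."""
    best_category, best_ratio = None, 0.0
    for category, pattern in RULES.items():
        hits = sum(1 for v in values if pattern.match(v))
        ratio = hits / len(values) if values else 0.0
        if ratio > best_ratio:
            best_category, best_ratio = category, ratio
    return best_category if best_ratio >= 0.8 else None   # 80% threshold is an assumption

sample = ["13812345678", "13987654321", "15011112222", "18655554444", "n/a"]
print(identify_column(sample))   # -> "phone_number"
```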

Data annotation

It should be able to update existing data classification and grading mappings; the annotation process should not have destructive effects on the data source, such as modifying or deleting data.

◼︎ Data classification and grading result management

Results display 

The presentation of aggregated data from single or multiple data sources should be supported. The displayed results include, but are not limited to, data source name, data source address, database name, table name, field name, file type, file name, category and level. The results should be displayed as visual tables, statistical charts, etc.

Result update

Results updates should support updating the latest scan results into the data classification and grading results.

Result query

It should support visual queries; it should support querying data classification and grading results by query conditions; it should support querying the results through an interface; it should support exporting query results, and the export file format should be one or more of the following: text, office documents, pictures, compressed files, etc.

The above are the main contents of the technical requirements part of the "Technical Requirements and Test Methods for Classification and Grading of Telecom Network and Internet Data". For each technical requirement, the standard also specifies the corresponding test method.


"Technical Requirements and Test Methods for Data Desensitization in Telecom Networks and Internet"

The document stipulates the technical requirements and test methods for data desensitization in telecommunications networks and the Internet.

It applies to the desensitization of telecom network and Internet data and to the design, development, testing, evaluation and acceptance of desensitization technical capabilities, covering data desensitization providers, users, evaluation institutions and regulatory agencies.

Data desensitization application architecture 

[Figure: Data desensitization application architecture]

 Technical requirements for data desensitization

◼︎ Security function requirements

Data source support

It should support data source input verification; it should support at least one type of database and data warehouse, at least one file data type, at least one big data platform, and at least one dynamic data stream.

Desensitization algorithm

The target desensitization result of the object to be desensitized should be clearly stated; configuring how abnormal data is handled should be supported; according to the scenario, algorithms such as hashing, random replacement, truncation, masking and generalization should be used preferentially to achieve anonymizing desensitization.
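To make the named algorithm families concrete, here is a minimal sketch of hashing, masking, truncation, generalization and random replacement applied to illustrative values; the salt, field lengths and bucket size are assumptions for the example only.

```python
import hashlib
import random

def hash_value(value: str, salt: str = "demo-salt") -> str:
    """Irreversible hashing (the salt here is a hypothetical fixed value)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

def mask_middle(value: str, keep_head: int = 3, keep_tail: int = 4) -> str:
    """Masking: keep head/tail characters, replace the middle with '*'."""
    middle = "*" * max(len(value) - keep_head - keep_tail, 0)
    return value[:keep_head] + middle + value[-keep_tail:]

def truncate(value: str, length: int = 6) -> str:
    """Truncation: keep only the leading characters."""
    return value[:length]

def generalize_age(age: int, bucket: int = 10) -> str:
    """Generalization: replace an exact age with a coarse range."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def random_replace(value: str) -> str:
    """Random replacement: substitute each digit with a random digit."""
    return "".join(str(random.randint(0, 9)) if c.isdigit() else c for c in value)

phone = "13812345678"
print(mask_middle(phone))        # 138****5678
print(truncate(phone))           # 138123
print(hash_value(phone)[:16])    # irreversible digest prefix
print(generalize_age(37))        # 30-39
print(random_replace(phone))     # non-deterministic replacement
```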

Desensitization rules

Desensitization rules should be clearly stated to prevent cross-business and cross-organization desensitization effects from canceling each other out; preset and custom desensitization fields should be supported; desensitization rule management functions such as creation, query, editing and deletion should be supported; configuration of desensitization rule parameters (divided into reversible and irreversible types) should be supported.

Desensitization policy

It should support customizing desensitization policies; it should support customized sensitive data discovery and templates corresponding to desensitization rules; desensitization policy configuration should support fine-grained data; it should support controlling the scope of data desensitization through data subsets, blacklists, whitelists, etc.; desensitization policy management should be supported; desensitization-related behaviors should be fully logged.

Results management

It should support displaying summaries of single or multiple desensitization tasks; results should be displayed as visual tables, statistical charts, etc.; it should support visual queries and filtering desensitization results by query conditions; it should support querying desensitization results through an interface; it should support exporting query results, and the export file format should be one or more of the following: text, office documents, pictures, compressed files, etc.

◼︎ Business scenario requirements

Static desensitization

Development and testing: When real business-system data is used for development and testing, desensitization is needed to ensure that sensitive data is not leaked.

Data sharing and analysis: Specific sensitive data needs to be processed while some sensitive data is retained, so that data can be shared or partially distributed to third parties or upstream organizations.

Dynamic desensitization

Database operation and maintenance desensitization: Operation and maintenance personnel directly connect to the production database during operation and maintenance work, and can query sensitive data in the database. There is a risk of sensitive data leakage. The sensitive data queried needs to be desensitized according to different user permissions.

Desensitization of the front-end of the business system: Sensitive data exists in the front-end page of the business system. When business users log in to the business system and access sensitive pages, there is a risk of sensitive data leakage. The sensitive data in the sensitive pages needs to be desensitized according to different user permissions.

Application programming interface (API) desensitization: Business data is obtained in real time through APIs. When sensitive data is transmitted in the interface, there is a risk of leakage; the sensitive data in the interface needs to be desensitized according to the permissions of the API caller.
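The sketch below illustrates the dynamic-desensitization idea shared by these scenarios: the same record is returned in full or masked depending on the caller's permission. The role names and masking rule are hypothetical.

```python
# Hypothetical permission levels; a real deployment would take these from the IAM system.
FULL_ACCESS_ROLES = {"dpo", "security_admin"}

def mask_phone(value: str) -> str:
    """Keep the first 3 and last 4 digits, mask the rest."""
    return value[:3] + "*" * (len(value) - 7) + value[-4:]

def desensitize_response(record: dict, caller_role: str) -> dict:
    """Return the record as-is for trusted roles, otherwise mask sensitive fields."""
    if caller_role in FULL_ACCESS_ROLES:
        return record
    masked = dict(record)
    if "phone" in masked:
        masked["phone"] = mask_phone(masked["phone"])
    return masked

record = {"user": "alice", "phone": "13812345678"}
print(desensitize_response(record, "dpo"))       # full data for a privileged caller
print(desensitize_response(record, "partner"))   # masked data for an ordinary caller
```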

◼︎ Self-security requirements

Security operation and maintenance

Operation and maintenance management capabilities: It should support management functions such as creating, modifying and deleting users and user groups, assigning and changing roles and permissions as needed, and modifying user passwords and user information; it should support detecting and alerting on system anomalies; it should provide a visual view of the list and running status of the system's desensitization jobs.

Security capabilities: It should be able to identify and authenticate user identities, and the identity identifier should be unique; permission management should be based on the principle of "separation of three powers" to separate the roles of desensitization management, use, and audit personnel.

Extension capabilities

Data desensitization technology should be able to improve desensitization capabilities and performance by implementing expansion strategies, providing open interfaces, and having the ability to link with other devices, tools, systems, and platforms.


" Technical Requirements and Testing Methods for Abnormal Behavior Monitoring of Telecom Network and Internet Data "

The document stipulates the requirements and testing methods for abnormal behavior monitoring technologies for telecommunications networks and the Internet in the communications industry.

The document is applicable to abnormal behavior monitoring work in telecommunications networks and the Internet in the communications industry, as well as the design, research and development, testing, evaluation and acceptance of abnormal behavior monitoring technical capabilities, including providers, users, evaluation institutions and regulatory agencies of abnormal behavior monitoring.

Overall framework of data abnormal behavior monitoring technology 

[Figure: Overall framework of data abnormal behavior monitoring technology]

Technical requirements for abnormal data behavior monitoring 

◼︎ Technical requirements for monitoring data collection

Monitoring data source type

Support capabilities for different types of databases and communication protocols should be provided.

Collection method

One or more data collection methods should be supported to meet flexible deployment in common scenarios such as office networks, production networks, public clouds, and private clouds.

Collection and transmission

When the collected data is transmitted to the platform for centralized storage and analysis, data transmission security should be ensured through identity authentication, encrypted transmission, data integrity verification, data breakpoint retransmission mechanism, etc.

◼︎ Technical requirements for abnormal behavior data processing

Data parsing

It should support parsing data collected through different protocols and formats into a unified metadata structure; it should support efficient and complete restoration of data packets from unencrypted protocols in the data source; it should support parsing data in a pre-configured manner based on predefined data formats; it should support converting imported data into corresponding data models according to different data types and data sources; it should support custom policy configuration to completely and accurately identify key information; it should support the use of machine learning, deep learning and other technologies.

Data cleaning

It should support filtering data according to abnormal behavior analysis requirements and extracting data that is valuable for analysis; it should support standardizing field values; it should support cleaning erroneous data or other dirty data; and it should support data aggregation operations.

Data sorting

It should support user identity parsing; it should support device parsing; it should support filtering, correlating and transforming data and constructing features; it should be able to output training data sets and test data sets for abnormal behavior analysis.

Data storage

It should support structured, semi-structured and unstructured data storage; it should support distributed storage and have big data storage capabilities; an encryption mechanism should be adopted to ensure the confidentiality of sensitive data; a verification mechanism should be adopted to ensure the integrity of monitoring data; access permissions should be set and the use of monitoring data limited according to permissions; backup and recovery capabilities should be available.

◼︎ Abnormal behavior analysis technical requirements

Overall requirements for abnormal behavior analysis

Based on the data processing result set, it should support analyzing abnormal data use and abnormal user behavior and producing a data security analysis report; it should support building personalized abnormal behavior risk identification models for different business scenarios and account types based on rule engines, model engines and scenario analysis; it should support extending abnormal behavior identification capabilities; it should support both real-time and offline analysis modes; it should support visual and self-service analysis of abnormal behavior; it should support user tag mining.

Rule engine

Rule conflict detection should be supported; changes to predefined rules should be supported; rule engines can be divided into offline rule engines and dynamic baseline engines.

Model engine

It should provide support for a variety of algorithms, covering supervised and unsupervised learning models; it should be able to uncover information hidden in large volumes of data through statistics, online analytical processing, information retrieval, pattern recognition and many other methods; it should support correlation analysis of large numbers of different complex events under distributed deployment; dynamic updating of abnormal behavior identification models should be supported.
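As one hedged example of the unsupervised side of such a model engine, the sketch below trains scikit-learn's IsolationForest on simple per-session features and flags outliers; the feature columns and synthetic data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" sessions: modest query rates and result sizes.
normal = np.column_stack([
    rng.normal(20, 5, 500),      # queries per minute
    rng.normal(100, 30, 500),    # rows returned per query
    rng.integers(0, 2, 500),     # off-hours flag
])
# A few suspicious sessions: very high query rate and bulk result sets.
suspicious = np.array([[300, 50000, 1], [250, 80000, 1]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 means the session is flagged as anomalous
```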

Scenario analysis

It is recommended to support scenario analysis of abnormal behavior across the data lifecycle, or of information system application processes, from dimensions such as internal and external attack methods and data asset risk factors; it is recommended to support qualifying behavior that has caused or is about to cause substantive harm as a security incident based on rule engines, baseline comparison or model detection; it is recommended to support initial configuration of abnormal behavior scenarios, optimization changes to baseline rules, and online/offline management of scenarios.

◼︎ Technical requirements for abnormal behavior alerting and response

Alert requirements

When abnormal behavior is detected, a security alert should be issued automatically; the alert should include the event level, event content, event time, event subject, event object and handling suggestions; the alerting function should provide real-time query, batch export, open interfaces or other access methods.

Alert methods

Alert methods should include real-time on-screen prompts, email alerts, SMS alerts, audible alerts, etc.; it is recommended that alerts use visual and panoramic display functions.

Alert policy

Administrators should be allowed to customize security policies, suppressing alerts for specified events or customizing the response method; identical security events occurring at high frequency should be merged into consolidated alerts to avoid alert storms; for large-scale security alerts, it is recommended to use correlation analysis algorithms to improve the efficiency of alert analysis.
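A minimal sketch of the alert-merging idea: identical events that arrive within a time window are collapsed into one consolidated alert carrying a count. The five-minute window and the event key are illustrative assumptions.

```python
from collections import defaultdict

MERGE_WINDOW_S = 300   # hypothetical 5-minute merge window

def merge_alerts(events: list[dict]) -> list[dict]:
    """Collapse identical (type, subject, object) events within a window into one alert."""
    buckets = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        key = (e["type"], e["subject"], e["object"], e["time"] // MERGE_WINDOW_S)
        buckets[key].append(e)
    merged = []
    for group in buckets.values():
        first = dict(group[0])
        first["count"] = len(group)            # how many raw events this alert represents
        merged.append(first)
    return merged

raw = [{"type": "sensitive_query", "subject": "ops01", "object": "orders.phone", "time": t}
       for t in range(0, 600, 10)]             # 60 identical events over 10 minutes
print(len(merge_alerts(raw)))                  # -> 2 consolidated alerts instead of 60
```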

Alert handling

Alert handling should support both manual and automatic execution; the alerting function should support recording and archiving to support subsequent security audits.

Blocking capability

It is recommended to implement active blocking at the application-layer and network-layer protocol levels; it is recommended to implement active blocking in conjunction with firewalls, DNS, IPS and gateways.

Coordination capability

It is recommended to implement the ability to coordinate with other network devices or network security components according to configured policies; it is recommended to implement the ability to coordinate with other security management and control platforms (such as security management platforms and authentication systems) according to configured policies.


"Technical Requirements and Test Methods for Telecom Network and Internet Database Auditing"

The document specifies the technical requirements and test methods for telecom network and Internet database audit products, tools, systems and platforms.

It applies to the design, development, testing, evaluation and acceptance of telecom network and Internet database auditing technical capabilities, covering database audit providers, users, evaluation institutions and regulatory agencies.

Database audit application architecture

[Figure: Database audit application architecture]

Data source management supports manual editing as well as two automatic discovery methods: active sniffing based on address and port scanning, and passive monitoring based on network traffic protocol analysis.

Audit policy management allows users to choose default rules based on built-in system templates or to create custom rules according to their own needs, and provides the ability to configure the parameters of risk identification and the alert response mechanism.

Based on the data source and audit policy configuration, the audit engine collects and stores log records from network traffic (using database protocol parsing and syntax analysis) and from information submitted by plug-ins deployed on the database server, and indexes those logs; it should also support log retrieval and query, audit review, and periodic generation of statistical reports, and it is recommended that it be able to automatically analyze and discover security risks and raise alerts according to the response mechanism.

The database audit tool/system/platform itself provides user permission management and access control, a graphical management interface, and call interfaces for integration with external systems.

Main functions of database auditing

Record database activity on the network in real time, manage the compliance of database operations, alert on risky behavior targeting the database, and be able to block attack behavior.

By recording, analyzing and reporting on users' database access behavior, provide functions such as compliance report generation and incident tracing, thereby strengthening the recording of internal and external database network behavior and improving the security of data assets.

Provide data source management, audit policy, user permission management, alert response, interface invocation, and audit result query and display functions, and support auditing of multiple types of databases and data warehouses.

Provide self-security management functions, including identity authentication, access control, plug-in security and data protection measures.

Meichuang Technology also participated in drafting the national standard GB/T 20945-2023 "Information Security Technology - Technical Specification for Network Security Audit Products", which clarifies the technical requirements and test methods for database audit products and will take effect on December 1. Meichuang Technology will later provide an in-depth interpretation of database auditing technical requirements, combining this national standard with the "Technical Requirements and Test Methods for Telecom Network and Internet Database Auditing".


"Technical Requirements and Test Methods for Data Security of Telecom Network and Internet Application Programming Interfaces"

The document specifies the technical requirements for data security of telecom network and Internet application programming interfaces (APIs), and provides corresponding test methods and judgment criteria.

It applies to developers, operators and professional evaluation institutions involved with telecom network and Internet APIs in carrying out API data security testing, providing guidance and a basis for raising the level of API data security, strengthening testing capabilities and improving technical means. The document does not apply to operating system interfaces or hardware interfaces.

Application programming interface (API) security technical requirements

◼︎ Identity authentication

Authentication mechanism

An API user identity authentication mechanism should be in place, using means such as keys, tokens, certificates, and static/dynamic passwords to effectively verify user identity.

Security policy

A password complexity policy should be in place to restrict or prohibit common weak passwords; an authentication lockout policy should limit the number of failed attempts; a credential validity policy should require static identity credentials to be changed periodically.
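A small sketch of what such a password-complexity check might look like; the length rule, character-class rules and weak-password list are illustrative assumptions rather than values taken from the standard.

```python
import re

# Hypothetical deny-list; production systems would use a much larger dictionary.
COMMON_WEAK_PASSWORDS = {"123456", "password", "admin123", "qwerty"}

def check_password(password: str) -> list[str]:
    """Return a list of policy violations (an empty list means the password is acceptable)."""
    problems = []
    if password.lower() in COMMON_WEAK_PASSWORDS:
        problems.append("matches a common weak password")
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[a-z]", password) or not re.search(r"[A-Z]", password):
        problems.append("missing upper- or lower-case letters")
    if not re.search(r"\d", password) or not re.search(r"[^\w\s]", password):
        problems.append("missing digits or special characters")
    return problems

print(check_password("admin123"))        # several violations
print(check_password("Tr%9f#Qe2@LmV"))   # -> [] (passes these illustrative rules)
```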

◼︎ Access authorization

Authorization mechanism

An API user authorization mechanism should be in place, assigning the services, resources and operations each user can access based on the user's identity.

Session management

An API access session management mechanism is recommended; when a user session expires or is abandoned, the user should be promptly locked/logged out and the session terminated.

◼︎ Access control

Coarse-grained access control

The ability to identify and block unauthorized access should be in place, implementing coarse-grained API access control based on factors such as access time, access count and user IP.

Fine-grained access control

It is recommended to use technologies such as ACL and RBAC to implement fine-grained API resource access control based on mechanisms such as signatures, timestamps, and blacklists/whitelists.

Adding variables to tokens

Variables should be added to important authentication credentials such as API session tokens to guard against replay attacks, rainbow table collisions and similar attacks.
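One common way to "add variables" to a credential is to sign each request with a timestamp and a one-time nonce so that a captured token cannot simply be replayed. The sketch below shows that pattern with HMAC-SHA256; the shared secret, freshness window and header names are assumptions, not requirements quoted from the standard.

```python
import hashlib
import hmac
import secrets
import time

SECRET_KEY = b"demo-shared-secret"   # hypothetical shared secret
FRESHNESS_WINDOW_S = 60              # hypothetical allowed clock skew
seen_nonces: set[str] = set()        # in production this would live in a shared cache

def sign_request(body: str) -> dict:
    """Client side: attach timestamp, nonce and an HMAC over all three."""
    ts = str(int(time.time()))
    nonce = secrets.token_hex(8)
    mac = hmac.new(SECRET_KEY, f"{ts}.{nonce}.{body}".encode(), hashlib.sha256).hexdigest()
    return {"timestamp": ts, "nonce": nonce, "signature": mac}

def verify_request(body: str, headers: dict) -> bool:
    """Server side: reject stale timestamps, reused nonces, or bad signatures."""
    if abs(time.time() - int(headers["timestamp"])) > FRESHNESS_WINDOW_S:
        return False                                  # replayed or delayed request
    if headers["nonce"] in seen_nonces:
        return False                                  # nonce already used
    expected = hmac.new(SECRET_KEY,
                        f"{headers['timestamp']}.{headers['nonce']}.{body}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, headers["signature"]):
        return False
    seen_nonces.add(headers["nonce"])
    return True

headers = sign_request('{"order_id": 42}')
print(verify_request('{"order_id": 42}', headers))   # True on first use
print(verify_request('{"order_id": 42}', headers))   # False: nonce replay is rejected
```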

Secondary authentication mechanism

For APIs involving important data or personal information, it is recommended to deploy a secondary authentication mechanism based on technologies such as dynamic passwords and OAuth (2.0 or above).

◼︎ Data transmission

Request parameter validation

It is recommended to deploy an API control mechanism capable of validating request parameters (such as name, parameter type and value range).

Data resource security

It is recommended to escape-encode API request parameter rules, configurations, URLs, etc. to prevent key addresses, parameters and fields from being leaked or ambiguous.

Secure transmission policy

For APIs involving important data or personal information, key parameters and data content should be protected during transmission by means such as encryption and signatures.

◼︎ Data desensitization

Desensitization capability

For APIs involving important data or personal information, it is recommended to deploy data desensitization tools and configure corresponding desensitization policies according to the API function and the user, so that returned data is effectively desensitized.

Desensitization effectiveness

The desensitization of API return data should be consistent with the desensitization rules, ensuring the effectiveness, authenticity, efficiency and diversity of data desensitization.

◼︎ Data filtering

Return data filtering

It is recommended to filter the type and format of returned data according to the API function and data type, so that the volume and content of returned data do not exceed the API design requirements.

Differentiated control

It is recommended to sort interfaces according to the data types involved and their classification and grading, forming differentiated controls.

Important data identification

For APIs involving important data or personal information, it is recommended to deploy a control mechanism that identifies returned content and sets effective alerting and blocking policies.

◼︎ Attack protection

Automated attack protection

The number of failed authentication retries by the same user or IP address should be limited to guard against brute-force attacks on the API; data-scraping behavior by crawlers, scanners and the like should be monitored and identified; mechanisms for controlling the API access rate and number of connections should be in place.
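A minimal sliding-window sketch of the rate-control requirement: requests from the same user or IP beyond a per-minute quota are rejected. The quota and the key format are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS_PER_MINUTE = 60   # hypothetical quota per user or IP
_windows: dict[str, deque] = defaultdict(deque)

def allow_request(caller_key: str, now: float | None = None) -> bool:
    """Sliding one-minute window; returns False once the caller exceeds the quota."""
    now = time.time() if now is None else now
    window = _windows[caller_key]
    while window and now - window[0] > 60:   # drop timestamps older than the window
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True

# The same logic can key on "user:alice" or "ip:203.0.113.7" alike.
print(all(allow_request("ip:203.0.113.7", now=0.0) for _ in range(60)))   # True: within quota
print(allow_request("ip:203.0.113.7", now=0.5))                           # False: quota exhausted
```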

Attack protection based on API schemas

For API requests, a security protection mechanism based on the OpenAPI schema specification should be established; for XML SOAP-based API requests, it is recommended to establish a security protection mechanism based on the WSDL schema specification.

Data transmission protection

TLS/TLCP encryption should be used for remote API data transmission to ensure the authenticity of requests; it is recommended to monitor data transmission behavior through security control tools, systems or platforms, according to the API function and the data types involved, to guard against illegal access, attacks and the like.

Injection attack protection

Injection attacks against the API should be guarded against, such as SQL injection, command injection and LDAP injection vulnerabilities exploited through API parameters.

◼︎ Security monitoring

Abnormal behavior monitoring

An API access behavior baseline should be established; abnormal access behaviors such as high-frequency login attempts, crawler access, anomalous source IP addresses, privileged account logins, access frequency exceeding limits and bulk downloads should be monitored and recorded, and high-risk abnormal access should be blocked.
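A hedged sketch of the baseline idea: learn a per-account request-rate baseline from history and flag activity far above it. The three-standard-deviation threshold and the requests-per-hour feature are assumptions made for illustration.

```python
from statistics import mean, stdev

def build_baseline(history: list[int]) -> tuple[float, float]:
    """Baseline of requests-per-hour for one account: mean and standard deviation."""
    return mean(history), stdev(history)

def is_abnormal(requests_this_hour: int, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag activity more than k standard deviations above the learned baseline."""
    mu, sigma = baseline
    return requests_this_hour > mu + k * sigma

# Hypothetical history for one API account (requests per hour over recent days).
history = [110, 95, 120, 130, 105, 90, 115, 100, 125, 98]
baseline = build_baseline(history)
print(is_abnormal(118, baseline))    # False: within the normal band
print(is_abnormal(2400, baseline))   # True: e.g. a bulk-download pattern worth blocking
```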

Privileged account monitoring

It is recommended to monitor and record the behavior of privileged API users to guard against abuse of privileges.

Partner access monitoring

It is recommended to monitor the abnormal behavior of API partner users to guard against unauthorized operations by partner users.

◼︎ Network entry and withdrawal management

Network entry management

An API go-live review mechanism should be in place to ensure that APIs are reviewed and released only when security requirements are met, with records kept to form an API asset inventory.

Network withdrawal management

There should be an API offline approval system and network withdrawal management should be carried out based on API association conditions.

◼︎ Security audit

Logging

Complete logging of API access, operations, alarms, etc. should be carried out.

Log audit

Security audits of API logs should be conducted regularly to ensure log integrity and availability and form audit reports.

Log traceability

The log content should directly or indirectly contain information such as the source, route, type, and scale of data leakage to meet the need for traceability of API-related data leakage events.


Since its establishment, Meichuang Technology has continuously provided data security technologies, products and services to China's telecommunications and Internet industries. Drawing on its long-term accumulation in the field of data security, it has actively participated in the formulation of standards for the telecommunications and Internet industries and contributed its strength to the standardized development of the industry.
