iQiyi Data Warehouse Platform and Service Construction Practice

This article is compiled from a live talk given at the [i Technology Conference].

This article first introduces iQiyi's overall business landscape and the design and problems of Data Warehouse 1.0, then explains how the architecture evolved to Data Warehouse 2.0 in response to those defects, along with the problems Data Warehouse 2.0 had to solve and the goals it set out to achieve.

This picture clearly shows iQiyi's product matrix. In the early days, iQiyi was purely a video business; later, new businesses were derived from it. Centered around core IP, the video business has expanded into many businesses such as short video, mini video, Qibabu, iQiyi Reading, petit, Bubble, Qixiu live streaming, iQiyi Knowledge, sports, e-commerce, and more, building a pan-entertainment ecological matrix that has grown from an apple tree into an apple orchard.

As the product matrix shows, many businesses are involved, each generating its own data and each with its own unique product form. The data warehouse not only has to support business-oriented data exploration and analysis within a specific business scenario, but also has to extract and refine common data across multiple businesses so that cross-business horizontal exploration and analysis can guide and empower the business with data. At the same time, the businesses assist and reinforce each other, which leads to frequent data interaction between them.

Data warehouse 1.0

The architecture diagram of Data Warehouse 1.0 is shown above. The overall layering is divided into five parts: at the bottom is the original data layer, above it are the detail layer, the aggregation layer, and the application layer, and on the right is the dimension layer, which manages consistency dimensions for the entire data warehouse.

1. The original data layer stores raw data coming from the various data production systems. It has three main sources: Pingback delivery, where a unified and standardized tracking point is embedded in each business product, the collected data is reported, and the tracking data is then parsed and stored through automated processing; business databases, which hold data generated by business back ends, such as membership orders and literature orders, and whose data is synchronized directly into the original data layer through data integration; and third-party external data, which mainly comes from data sources outside the company.

2. The detail layer restores the business process and stores the most fine-grained data. It performs ETL processing on the original data according to different patterns, completing data cleaning and part of the business-logic processing.

3. The aggregation layer stores non-detail data, usually lightly and heavily aggregated data obtained through various calculations, and is mainly built using dimensional modeling methods.

4. The application layer holds the result data produced to meet business needs. It is highly customized and is mainly provided to the relevant data applications, external systems, and people who need specific data. It is the interface between the data warehouse and the outside world, connecting to other systems such as business databases and reporting systems.

Looking back at the Data Warehouse 1.0 architecture, the entire warehouse was in fact built from the perspective of individual businesses. Each business built a small data warehouse matching its own characteristics, which could respond to business needs quickly and flexibly and support business decisions. However, as data grew and business scenarios became more complex, the lack of extraction and aggregation of public data led to problems such as chimney-style repetitive construction, inconsistent indicator calibers, data ambiguity, low production and usage efficiency, and a lack of tool and platform support.

When data crossed between different businesses, teams often pulled data directly from another business's detail layer or even its original data layer in order to respond to business needs as quickly as possible. This easily led to inconsistent statistical calibers for indicators. The lack of accumulated public aggregated data caused a great deal of data overlap, resulting in repeated chimney-style construction and high resource costs. Although Data Warehouse 1.0 had a relatively complete set of warehouse specifications, it lacked tool and platform support and control, so the same name often carried different meanings and indicator calibers became ambiguous, increasing downstream usage costs and reducing development efficiency. A typical scenario: business line A has a dimension and business line B has a dimension with the same name, but the meaning or attributes it represents differ between the two business lines. If you then want to do cross-business data exploration across these two businesses, a lot of offline communication and troubleshooting is required.

Data Warehouse 2.0

In response to the flaws of 1.0, we upgraded the data warehouse architecture and gradually evolved into the Data Warehouse 2.0 era. At the beginning of the 2.0 design, the general direction and goals had to be clarified.

1. Clarify the layering and composition, as well as the positioning of the different parts and their relationships with each other.

2. Standardize the entire data modeling process. Because tool support was lacking, the necessity and importance of data modeling work could not be demonstrated; therefore a complete tool platform is provided to support building Data Warehouse 2.0 from scratch.

3. Keep the data consistent; this is a topic that can never be sidestepped. No matter which product line or business line the data comes from, the statistical caliber should be consistent, clear, and unambiguous.

4. Improve efficiency. Combining the first three points, keep the entire data warehouse structure clear, minimize complex horizontal interactions between data, keep data flows clear and concise, and eliminate chimney-style repetitive construction, thereby improving production and usage efficiency and reducing cost.

Data Warehouse 2.0 was born to solve the various data problems of the 1.0 era and to make data work generate greater value. Through unified calibers, standardized naming, a unified indicator and dimension system, and standardized modeling methods, we can connect and standardize the company's data while sinking general logic downward to improve computing efficiency and reduce usage costs. The supporting tools of the data middle platform allow more people to participate in data use and analysis, so that data-driven decision making can penetrate every corner of the company and achieve the ultimate goal of data-driven development.

Looking at the overall Data Warehouse 2.0 architecture, the layering differs little from 1.0: it is still divided into the original data layer, the detail layer, the aggregation layer, the application layer, and the unified dimension layer. The warehouse composition, however, has been greatly adjusted and is now divided into three parts: the unified data warehouse, the business marts, and the theme data warehouses. At the same time, the division of labor, the positioning, and the data reference flows among these components have been clarified.

At the bottom is the unified data warehouse, which is divided into a unified detail data layer and a unified aggregated data layer. The detail layer connects to all of the underlying original data, restores 100% of the data of all business domains and business processes, and shields the upper layers from changes in the underlying original data; it is the foundation of Data Warehouse 2.0. The detail layer completes the logical conversion from business relationships to data relationships, supplements related dimensions, stores the most fine-grained data, and performs ETL steps such as separating complex business logic, cleaning data, and unifying and standardizing data formats. The aggregation layer is responsible for accumulating common indicators, providing uniformly defined indicators for upward calculation and avoiding duplicate computation. In addition, a unified cumulative device library and a new-device library based on the OneID system are provided for upper layers to use.

The business mart focuses on business demands, building data collections that meet business analysis needs. When building business marts, we divide them as finely as possible, and there is no data dependence or horizontal referencing between marts; at the application layer, data can be aggregated across marts to provide external data services. The advantage is that if the organizational structure or job responsibilities change, the business marts themselves do not need to be adjusted; only the application layer needs to be modified. It also avoids the data change costs caused by problems such as mixed computing task code and splitting data permissions.

The theme data warehouse is built around the common theme domains and theme perspectives within the company. Based on consistency dimensions, it performs cross-business data integration, analysis, and construction, and includes the traffic data warehouse, content data warehouse, user data warehouse, and so on.

The application layer includes data application products such as business reports, content analysis, and user operations, obtaining data from the business marts and theme data warehouses according to specific scenarios and needs.

Having covered the overall structure of Data Warehouse 2.0, let's clarify and summarize the positioning of each of its components.

1. The unified data warehouse provides comprehensive, general-purpose data at the bottom and acts as the maker and enforcer of specifications, providing data and base models for the upper layers; it is the foundation of data warehouse construction.

2. The business mart is built on the data and models of the unified data warehouse. Combined with the purpose of business data analysis, it builds data sets that fit each business according to its characteristics.

3. The theme data warehouse is also built on the data and models of the unified data warehouse. It is oriented toward the analysis of different entities and builds data collections in fields such as users and content.

When building business marts and theme data warehouses, try to keep high cohesion and low coupling, preventing data flows that are overly complex, too deep, or confusingly dependent across levels, which would otherwise keep raising maintenance and development costs.

Data warehouse construction

The following describes how we built the data warehouse platform on top of the Data Warehouse 2.0 architecture and the consistency dimension / indicator system, and how we standardized the data warehouse modeling process.

The architecture of the data warehouse platform is shown in the figure above. The bottom layer provides basic services; the physical tables ultimately generated may live in Hive, MySQL, Kafka, ClickHouse, and so on. The next level consists of work order management, permission management, and resource management, which are auxiliary functions of the platform. Work order management is used to approve the creation and modification of dimensions and indicators; we have set up a data warehouse committee to govern how dimensions and indicators are defined, and we adjust the data warehouse construction accordingly based on the actual situation. Permission management controls the operation permissions of different users and displays the corresponding entry points according to developer permissions.

The data warehouse management and data modeling modules are the core of the platform. Data warehouse management is responsible for abstracting and managing the atomic components needed to build the warehouse, including business management, theme management, dimension management, and indicator management. Only when these basic building blocks are ready can the subsequent modeling work proceed. We divide data modeling into three stages: business modeling, data modeling, and physical modeling, each of which is elaborated later.

The data warehouse platform provides a unified API, covering dimensions and indicators, for integrating with peripheral systems and top-level product applications. For example, the reporting system displays each indicator's definition, calculation caliber, and description in a unified way, ensuring that indicators convey accurate information as data flows from production to use. The platform also pushes the metadata generated during modeling to the metadata center, where the data map service uses it for data discovery and data understanding.

Before introducing the data modeling process, let's talk about building the consistency dimension and indicator system. The data warehouse is currently built mainly on dimensional modeling theory, so consistency dimensions are the cornerstone of the bottom layer. In the data warehouse platform we divide dimensions into three types: ordinary dimensions, enumerated dimensions, and virtual dimensions.

An ordinary dimension is the most common dimension type. It usually has a corresponding dimension table, and each dimension consists of a primary key and multiple dimension attributes, such as the user dimension and the content dimension. An enumerated dimension is a special case of an ordinary dimension, also known as a dictionary dimension: it represents dimension objects with enumerated, standardized values stored as key-value pairs, for example "whether XX", where 0 means no and 1 means yes. A virtual dimension is not carried by a concrete business entity and has no fixed data range; it is a dimension object defined by logic, such as a random number or a session ID.

When creating a dimension, you need to fill in label attributes such as the English name, Chinese name, description, generality of the dimension, and the entity it belongs to (for example: time, space, application). "Generality" divides dimensions into business dimensions and common dimensions. If a dimension is used by only one business, it is defined as a business dimension, meaning its scope of application is limited to that business and other businesses cannot use it; if it is used by two or more businesses, it is defined as a common dimension. In practice the two types change over time: a dimension may be used by only one business at first but by multiple businesses later. When a business dimension is upgraded to a common dimension, a common-dimension mirror of the business dimension is constructed.

A dimension contains several dimension attributes. Each dimension attribute has an English name, Chinese name, data type, description, and so on. At the same time, the final field name of the dimension attribute in the physical table must be defined, so that the same name always carries the same meaning and is globally unique across the warehouse.
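To make this concrete, here is a minimal Python sketch of how such dimension definitions might be represented on a modeling platform. The class names, field names, and sample values (Dimension, DimensionAttribute, "dim_user", etc.) are illustrative assumptions, not the platform's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class DimensionType(Enum):
    ORDINARY = "ordinary"   # backed by a physical dimension table
    ENUM = "enum"           # dictionary dimension stored as key-value pairs
    VIRTUAL = "virtual"     # no fixed data range, e.g. random number, session ID

class Scope(Enum):
    BUSINESS = "business"   # used by a single business line
    COMMON = "common"       # used by two or more business lines

@dataclass
class DimensionAttribute:
    en_name: str            # final field name materialized in physical tables
    cn_name: str
    data_type: str
    description: str = ""

@dataclass
class Dimension:
    en_name: str
    cn_name: str
    dim_type: DimensionType
    scope: Scope
    entity: str             # e.g. time, space, application
    primary_key: str
    attributes: List[DimensionAttribute] = field(default_factory=list)

# Example: a user dimension registered as a common (cross-business) dimension
user_dim = Dimension(
    en_name="dim_user",
    cn_name="用户维度",
    dim_type=DimensionType.ORDINARY,
    scope=Scope.COMMON,
    entity="user",
    primary_key="user_id",
    attributes=[
        DimensionAttribute("user_id", "用户ID", "string"),
        DimensionAttribute("register_channel", "注册渠道", "string"),
    ],
)
```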

I will not expand on dimensional modeling theory here; interested readers can find relevant articles and books online.

Index system

The indicator system consists of indicator metadata (atomic indicator metadata and composite indicator metadata), modifiers, time periods, and statistical indicators (atomic indicators and composite indicators).

Indicator metadata: an abstraction of statistical indicators. Every indicator must be derived from some indicator metadata.

Atomic indicator metadata / measure: the smallest unit in a business process that cannot be broken down further, generally composed of an action plus a measure. Atomic indicator metadata is equivalent to a measure, a unit of measurement that describes a fact; the metadata is an abstraction of the facts of a certain business process and, without modifiers and a time period, cannot express a specific statistical meaning.

Composite indicator metadata: obtained by calculating over multiple atomic indicator metadata or other composite indicator metadata; it is an abstraction of composite indicators. To use it as a statistical indicator, you need to attach the corresponding time period and modifiers, e.g. click rate, percentage, conversion rate.

Modifiers: a modifier can be understood as the environment in which a statistical indicator exists, used to clarify the indicator's specific meaning and describe its detailed caliber. Each statistical indicator can have one or more modifiers, and modifiers and dimension attributes can be converted into each other, e.g. Beijing users, movie channel pages.

Time period: describes the time range of a statistical indicator. It can be considered a special modifier that every statistical indicator must specify, such as the current day or the last 30 days.

Statistical indicators: divided into atomic indicators and composite indicators; they are instantiations of indicator metadata and represent concrete fact measurements.

Atomic indicator: atomic indicator = one indicator metadata + multiple modifiers (optional) + a time period; it describes the statistical meaning of a single business process, such as the number of plays in the last day.

Composite indicator: calculated from multiple atomic or composite indicators, describing the relationship between multiple business processes, such as the playback completion rate in the last day or the average number of launches per user in the last 30 days.
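As an illustration of how these pieces compose, the following Python sketch encodes the "atomic indicator = meta indicator + modifiers + time period" and "composite indicator = expression over indicators" relationships described above. All names (MetaIndicator, AtomicIndicator, the sample measures) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetaIndicator:
    # atomic indicator metadata / measure: action + measure over a business process
    name: str
    measure: str            # e.g. "sum(play_cnt)"

@dataclass
class AtomicIndicator:
    # atomic indicator = one meta indicator + optional modifiers + a time period
    meta: MetaIndicator
    time_period: str                                    # e.g. "last 1 day"
    modifiers: List[str] = field(default_factory=list)  # e.g. "Beijing users"

    @property
    def name(self) -> str:
        return " ".join([self.time_period, *self.modifiers, self.meta.name])

@dataclass
class CompositeIndicator:
    # composite indicator = an expression over atomic/composite indicators
    name: str
    expression: str
    operands: List[AtomicIndicator]

plays = AtomicIndicator(MetaIndicator("play count", "sum(play_cnt)"), "last 1 day")
finishes = AtomicIndicator(MetaIndicator("finished play count", "sum(finish_cnt)"), "last 1 day")

completion_rate = CompositeIndicator(
    name="last 1 day play completion rate",
    expression="finished play count / play count",
    operands=[finishes, plays],
)
print(plays.name)   # "last 1 day play count"
```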

The indicator system is strictly governed on the data warehouse platform. Building indicators generally requires development, product, analysts, and business parties to cooperate and, combined with actual scenario requirements, systematically and normatively summarize and refine them.

Modeling process

Unified data warehouse modeling is the foundation of business-layer modeling and needs to cover as many business processes and dimensions as possible. It comprises three stages: business modeling, data modeling, and physical modeling.

Business modeling sorts out the business based on existing business information combined with the modelers' understanding of the business. At this stage it does not deal with specific analysis details; what is confirmed is mainly the relationships among business domains, business processes, and entities, and the output is the business bus matrix. The purpose of business modeling is to decompose business requirements and translate them into data understanding. The specific steps are: dividing business domains, confirming business processes, designing event facts, confirming related entities, associating event facts, and building the business bus matrix.

▇ Divide business domains. A business domain is a collection of business processes; it is a coarse-grained division of the business, and related business processes are grouped under one business domain, such as the playback domain.

▇ Confirm business processes. A business process is an atomic behavior in the business and cannot be broken down further. During business modeling we need to confirm which business processes exist and clarify the business domain each belongs to; a business process can belong to only one business domain.

▇ Design event facts.

▇ Confirm related entities. Confirm, at a coarser granularity, the range of entities a business process involves, to avoid missing analysis perspectives and to provide connection points for associating event facts.

▇ Associate event facts. Unified data warehouse modeling needs to cover all existing event fact fields and make as many dimensional associations as possible through entities.

▇ Build the business bus matrix. Its two axes are the business domains and business processes that describe the facts themselves, and the dimensions and entities that describe the environment of the facts; a minimal sketch of the matrix follows after this list.
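Here is a minimal sketch, assuming a simple in-memory representation, of what a business bus matrix might look like and how it can be queried; the domain, process, and dimension names are illustrative only.

```python
# Rows are (business domain, business process); columns are the conformed
# dimensions/entities that describe the environment of each fact.
bus_matrix = {
    ("playback", "start_play"):    {"user", "content", "device", "time", "channel"},
    ("playback", "finish_play"):   {"user", "content", "device", "time"},
    ("membership", "place_order"): {"user", "device", "time", "sku"},
}

def processes_covering(dimension: str):
    """Business processes that can be analysed along a given conformed dimension."""
    return [key for key, dims in bus_matrix.items() if dimension in dims]

print(processes_covering("device"))
# [('playback', 'start_play'), ('playback', 'finish_play'), ('membership', 'place_order')]
```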

The data modeling stage mainly refines the business bus matrix, completes the logical conversion from business relationships to data relationships, supplements related dimensions, and outputs star (or snowflake) models.

▇ Confirm the business. Generally do not cross businesses; model for a single business.

▇ Confirm the business processes; the model can cover a single process or multiple processes.

▇ Confirm the dimensions included in the business process.

▇ Confirm the measures involved in the business process.

▇ Degenerate dimension attributes. To make downstream use more efficient, some commonly used dimension attributes are degenerated into the detail-layer model so that joins with dimension tables are minimized.

▇ Build the star model to guide subsequent development work; see the sketch after this list.
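The following sketch illustrates, under assumed names, what the resulting star-model description might capture for a single business process: the declared grain, the foreign keys to conformed dimensions, the degenerated dimension attributes, and the measures.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FactModel:
    name: str
    business_process: str
    grain: str                   # the declared granularity of each row
    dimension_keys: List[str]    # foreign keys to conformed dimension tables
    degenerate_attrs: List[str]  # dimension attributes copied into the fact
    measures: List[str]

play_fact = FactModel(
    name="play_detail",
    business_process="start_play",
    grain="one row per playback event",
    dimension_keys=["user_id", "content_id", "device_id", "date_key"],
    degenerate_attrs=["channel_name", "content_type"],  # reduces downstream joins
    measures=["play_duration", "play_cnt"],
)
```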

Physical modeling is the materialization of the data model. The process differs slightly depending on the engine, and the data model is ultimately materialized into Hive physical tables or views, or even Kafka topics with a schema. The following takes a Hive physical table as an example to describe the whole process.

▇ Confirm the data model: select the data model that needs to be materialized.

▇ Confirm the table name: complete the table name according to the data warehouse specification, including information such as the calculation period, table type, and business.

▇ Confirm the description/instructions: supplement the Chinese description of the table's contents and the precautions for using it.

▇ Confirm the partition field, such as day-level or hour-level partitions.

▇ Confirm the life cycle: set the data retention period according to the importance of the data, such as 30 days or 1 year.

▇ Generate the physical table and register the table's business metadata in the metadata center, corresponding exactly to the model; the table name, field names, field types, and other information are standardized and unified. A small sketch of this materialization step follows after this list.
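Here is a hedged sketch of the materialization step: rendering Hive DDL from model metadata. The function, table, and column names are hypothetical, and a real implementation would follow the platform's own naming specification and engine-specific options.

```python
def hive_ddl(table_name: str, comment: str, columns: dict,
             partition_field: str = "dt", lifecycle_days: int = 30) -> str:
    """Render a Hive CREATE TABLE statement from model metadata (sketch only)."""
    cols = ",\n  ".join(f"{name} {dtype} COMMENT '{desc}'"
                        for name, (dtype, desc) in columns.items())
    return (
        f"CREATE TABLE IF NOT EXISTS {table_name} (\n  {cols}\n)\n"
        f"COMMENT '{comment} (retention: {lifecycle_days} days)'\n"
        f"PARTITIONED BY ({partition_field} STRING)\n"
        f"STORED AS ORC"
    )

print(hive_ddl(
    table_name="dwd_play_detail_day",
    comment="playback detail, day-level partitions",
    columns={
        "user_id":       ("STRING", "user id"),
        "content_id":    ("STRING", "content id"),
        "play_duration": ("BIGINT", "seconds played"),
    },
))
```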

As mentioned before, the unified data warehouse is the source of the underlying models and data. Business marts and theme data warehouses are modeled on top of the existing underlying models, which mainly involves data modeling and physical modeling (of course, the business bus matrix produced during unified data warehouse business modeling can be used to better understand the business).

The goal of business-layer data modeling is to output the theme data star models: select the relevant business processes according to the theme and analysis scenario, and apply a reasonable modeling method. The main steps are: confirm the theme, select the business processes, confirm the granularity, confirm the dimensions, confirm the statistical indicators, and finally output the star model.

▇ Confirm the theme according to specific analysis needs.

▇ Confirm the business and business process to be analyzed.

▇ Confirm the unified data warehouse model: the system automatically recommends related models, and the modeler selects the ones that meet the conditions and carries out subsequent modeling on that basis.

▇ Confirm the granularity; models at the same granularity can combine indicators.

▇ Confirm the dimensions and select those that subsequent analysis needs to drill down into. The selection happens within the scope of the business process and cannot exceed the dimensions that can actually be associated.

▇ Confirm the statistical indicators, choosing statistical indicators derived from the measures (atomic indicator metadata) related to the business process.

▇ Build a star model.

The physical modeling process is the same as before and is not repeated here.

The following figure is an example of a star model produced in the data modeling stage. The model diagram clearly expresses the associated business information and data logic to assist subsequent data development work.

Data Atlas

The data atlas is centered on metadata. It provides complete and standard metadata query capabilities, reduces the cost of data discovery and data understanding, builds a catalog of core data assets, and improves data usage efficiency.

The following figure is the architecture diagram of the metadata center. The open-source framework Atlas is used at the bottom, with targeted secondary development on top; JanusGraph stores the metadata and data lineage, and ES provides a unified metadata search service.

The metadata center is mainly responsible for collecting and managing metadata and for building data lineage. Metadata can be divided into technical metadata and business metadata, which are synchronized and collected automatically through different platforms or underlying basic service components. For example, technical metadata of Hive tables is collected automatically through HiveHook, while the data warehouse platform synchronizes the business metadata produced during modeling to the metadata center. Correspondingly, data lineage is built in two parts. The first part uses the HiveHook and SparkHook mechanisms: when a SQL or computing task finishes, its input and output tables are parsed automatically, and the workflow task information captured when the task was submitted is attached. In addition, the company's self-developed data integration product (BabelX) on the big data development platform also implements a hook mechanism to cover integration tasks. The second part is achieved by periodically pulling input/output relationships from peripheral system services. We have opened up the full-link lineage from Pingback to BI reports and can trace upward to the source or downward to the end of the entire lineage chain from any node.
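A minimal sketch of the hook-side lineage collection described above: once a task finishes, its parsed input/output tables and the captured workflow task ID are recorded as lineage edges. The function and table names here are assumptions for illustration; the actual collection happens inside HiveHook, SparkHook, and BabelX.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LineageEdge:
    input_table: str
    output_table: str
    task_id: str          # workflow task captured when the job was submitted

lineage_graph: List[LineageEdge] = []

def on_job_finished(inputs: List[str], outputs: List[str], task_id: str) -> None:
    """Called from a hook once a SQL or computing task finishes."""
    for out in outputs:
        for inp in inputs:
            lineage_graph.append(LineageEdge(inp, out, task_id))

# e.g. a hook reports that a daily aggregation job read the detail table
on_job_finished(
    inputs=["dwd_play_detail_day"],
    outputs=["dws_play_agg_day"],
    task_id="workflow_play_agg_daily",
)
```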

In the past, if data was needed during development or use, it was mostly found through offline communication: finding the product owners and developers and going through several rounds of back and forth, which consumed a lot of communication and labor cost. Once the data was found, the next problem was how to understand it and use it correctly. Even with good documentation, updates can lag, information can be expressed inaccurately, and some information simply cannot be captured in documents. If luck runs out or the information is not transmitted accurately, the data found does not match expectations and the communication and searching have to start over. Based on the metadata center, we built a data map service for data discovery and data understanding. When building the data map, we set a clear goal: create an efficient environment that supports quickly "finding data" and intuitively understanding and using data, thereby meeting "using data" requirements such as guiding data development and improving development efficiency.

"Finding data", i.e. data discovery, needs keyword search based on "a certain consensus", such as dimension combinations and indicator combinations, automatically presenting the data that matches the target dimension + indicator matrix; combined with sorting and secondary filtering, the target range is gradually narrowed until the result is located. If you search on non-standardized information such as free-text descriptions, inaccurate or ambiguous wording can easily mislead the search. A structured catalog or guided query function is also needed, so that a panoramic view of a business and the data under it can be obtained simply and quickly. For example, the data warehouse map provides a list mode and a map mode: the list mode displays data tables, dimensions, and indicators as a catalog, shows information such as the business and theme each data table belongs to, and lets users browse and locate data after simple filtering; the map mode provides a topological, graphical summary of all businesses and business processes/themes, showing a panoramic view of the businesses and data models, which can be filtered level by level following the wizard until the target data is located.

"Using data" refers to how, after finding the data, users can efficiently understand the business information it reflects and use it correctly. To this end, a metadata knowledge graph is built for data usage scenarios, and the related technical and business metadata is retrieved and displayed in friendly categories. For example, basic information includes the project the data belongs to, the person in charge, and the permission approvers; business information includes the business, business domain, and subject domain the data belongs to; data warehouse labels include the Chinese name, description, subject model, and dimensions/indicators; data asset information includes the asset level, SLA, quality score, and so on.

Taking Hive tables as an example, all relevant information about a Hive table is collected in the metadata center and classified by labels, so data users can quickly find the data through search or catalog browsing, understand the business meaning it expresses from all angles, and use it with ease.

Data Lineage

As mentioned earlier, we built full-link lineage from Pingback to BI reports, injecting input/output information and the corresponding workflow task information into each link. The value of data lineage needs little explanation: through lineage one can perform impact assessment, troubleshooting, link analysis, and so on. For example, when a table in the data link has a quality problem and data needs to be repaired or backfilled, or when a table undergoes a major upgrade and downstream consumers must migrate, all downstream tables, their workflow tasks, and their owners can be exported through lineage, helping data producers or managers quickly coordinate the downstream changes. New employees can also use lineage to understand the overall data warehouse construction more clearly. For this purpose, the lineage tool provides link pruning and filtering: when there are too many downstream nodes, pruning and filtering by certain conditions makes it easy to view a single branch of the link (see the sketch below). When a BI report meets the criteria for being taken offline, the computing tasks on the corresponding link can be found through full-link lineage and decommissioned, solving the problem of "easy to launch, hard to retire", improving resource utilization, and saving costs.
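A small sketch of downstream impact analysis with pruning over such a lineage graph; the table names and the filter predicate are illustrative assumptions.

```python
from collections import defaultdict, deque

# adjacency list built from lineage edges: table -> direct downstream tables
downstream = defaultdict(set)
downstream["dwd_play_detail_day"].add("dws_play_agg_day")
downstream["dws_play_agg_day"].add("bi_report_play_daily")

def impacted(root: str, keep=lambda t: True):
    """All downstream tables of `root`, pruned by an optional filter predicate."""
    seen, queue = set(), deque([root])
    while queue:
        for nxt in downstream[queue.popleft()]:
            if nxt not in seen and keep(nxt):
                seen.add(nxt)
                queue.append(nxt)
    return seen

# prune the link to a single branch, e.g. aggregation tables and BI reports only
print(impacted("dwd_play_detail_day", keep=lambda t: t.startswith(("dws_", "bi_"))))
```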

Another application built on full-link data lineage is asset grading. In data governance, we first grade and label data assets so that governance strategies can be specified according to level; the higher the level, the higher the SLA and data quality requirements. Asset grading is done through automation plus manual labeling. At the end of the lineage chain, i.e. the data applications, BI reports for example each carry an importance-level label; combining this with lineage, the level can be traced upward to label data assets automatically. Because not all data ends up in a report, the remainder can only be labeled manually. In follow-up work, we hope to close this gap through the data service layer: grade data APIs, integrate them into the full-link lineage, and open up the lineage from data API to data application, so that asset level labels can be assigned automatically with fuller coverage.
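A companion sketch of the automated asset grading described above: propagating each BI report's importance level upward through the lineage graph so that upstream tables inherit the strictest downstream level. The level values and table names are assumptions.

```python
from collections import defaultdict

# reverse adjacency built from the same lineage graph: table -> upstream tables
upstream = defaultdict(set)
upstream["bi_report_play_daily"].add("dws_play_agg_day")
upstream["dws_play_agg_day"].add("dwd_play_detail_day")

report_level = {"bi_report_play_daily": 1}   # 1 = most important report
asset_level = {}

def mark_upstream(node: str, level: int) -> None:
    """Propagate a report's importance level to every table it depends on."""
    for parent in upstream[node]:
        if level < asset_level.get(parent, 99):   # keep the strictest level
            asset_level[parent] = level
            mark_upstream(parent, level)

for report, level in report_level.items():
    mark_upstream(report, level)

print(asset_level)   # {'dws_play_agg_day': 1, 'dwd_play_detail_day': 1}
```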

Summary and outlook

Finally, a brief summary and outlook, introducing some of the work currently in progress and our follow-up directions. In the subsequent construction of the data middle platform, we hope to become intelligent, automated, service-oriented, and model-oriented.

Intelligent

Take the construction of the data quality platform as an example. In the past, the usual practice was for the platform to provide custom quality rule verification and audit capabilities, with developers manually setting rules and thresholds. As the number of tables grows, this consumes a lot of labor, and even with expert experience it is impossible to hand-craft a dynamic threshold scheme that accounts for holiday effects. One of the solutions we are currently trying is to collect the core fields of the data warehouse, formulate general data quality rules (such as table row count, deduplicated count, null rate, etc.), automatically collect historical data for sampling and training, forecast trends, and set dynamic thresholds, thereby covering data quality automatically. On one hand this saves manpower; on the other hand, holiday factors can be considered during intelligent prediction, improving the accuracy of data quality monitoring and reducing false alarms around holidays.
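A deliberately simplified sketch of the dynamic-threshold idea, using only per-weekday statistics; the real scheme described above additionally models holidays and trends with trained forecasts, which this toy example does not attempt.

```python
import statistics
from collections import defaultdict

def dynamic_threshold(history, weekday, band=3.0):
    """Per-weekday mean +/- band * stdev as a crude dynamic threshold.
    A production version would also model holidays and long-term trend."""
    by_weekday = defaultdict(list)
    for day, rows in history:           # day: 0-6 weekday index, rows: row count
        by_weekday[day].append(rows)
    samples = by_weekday[weekday]
    mean, stdev = statistics.mean(samples), statistics.pstdev(samples)
    return mean - band * stdev, mean + band * stdev

# fake history: weekend traffic is higher, plus some noise
history = [(d % 7, 1_000_000 + (50_000 if d % 7 in (5, 6) else 0) + d * 137 % 9000)
           for d in range(56)]
low, high = dynamic_threshold(history, weekday=5)
print(f"alert if today's row count falls outside [{low:.0f}, {high:.0f}]")
```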

Automated

Based on the data warehouse platform, the data modeling work has been standardized and turned into a defined workflow. In follow-up work, common models and processes will be accumulated so that code can be generated automatically.

Service-oriented

In the past, when data was connected to data applications or businesses, most of them accessed the underlying data sources directly, which led to problems such as varied access methods, low access efficiency, data and interfaces that could not be shared, and underlying data changes affecting the data applications. In building the current data middle platform, we are building a unified data service, with APIs as the unified way data interacts with data applications and businesses, to solve the above problems.

Model-oriented

The data model shields the underlying physical implementation while expressing business information and data relationship logic in a standard, normalized way. In the future, we hope users will face models rather than all kinds of physical tables: a user only needs to select the corresponding business and filter the required dimensions and indicators to be automatically routed to a model; then, based on the dependency between the model and the underlying physical tables, combined with federated query capabilities, a query task is generated automatically to access the most suitable physical table and return the data to the end user.
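A minimal sketch of this model-oriented routing idea: the user asks for dimensions and indicators, a covering model is found, and the most suitable physical table and engine are chosen. The model registry format, engine preference rule, and table names are all assumptions for illustration.

```python
# Each logical model declares the dimensions/indicators it can serve and the
# physical tables that materialize it on different engines.
models = [
    {
        "name": "play_agg_day",
        "dimensions": {"date", "channel", "content_type"},
        "indicators": {"play_cnt", "play_duration"},
        "tables": [("clickhouse", "dws.play_agg_day"), ("hive", "dws_play_agg_day")],
    },
]

def route(dimensions: set, indicators: set, preferred_engine: str = "clickhouse"):
    """Pick a model covering the request, then its best physical table."""
    for m in models:
        if dimensions <= m["dimensions"] and indicators <= m["indicators"]:
            for engine, table in m["tables"]:
                if engine == preferred_engine:
                    return engine, table
            return m["tables"][0]
    raise LookupError("no model covers the requested dimensions/indicators")

print(route({"date", "channel"}, {"play_cnt"}))   # ('clickhouse', 'dws.play_agg_day')
```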
