Mango TV's real-time data warehouse construction practice based on Flink

Company profile: Mango TV, the Internet video platform under Hunan Radio and Television, cultivates its core competitiveness through self-produced content under the strategic guidance of "one cloud, multiple screens, multiple integrations", moving from exclusive and unique broadcasting toward original production. Through market-oriented operation it completed its A and B rounds of financing, and in June 2018 it successfully completed an asset restructuring, becoming the first state-owned A-share video platform in China.

Tips: Click "Read the original text" to receive 5,000 CU*hours of Flink cloud resources for free

01

Mango TV real-time data warehouse construction history

Mango TV's real-time data warehouse has been built in three phases. The first phase ran from 2014 to 2019, using Storm/Flink Java + Spark SQL. The second phase covered 2020 through the first half of 2022, using Flink SQL + Spark SQL. The third phase, from the second half of 2022 to the present, uses Flink SQL + StarRocks. Each upgrade iterated on the previous foundation, aiming for more complete functionality, faster processing, and a better fit for the needs of the business side. The three generations are introduced below.

The first generation is based on Storm/Flink Java+Spark SQL

Mango TV started processing data in real time very early, initially with Storm. Around 2018, Flink rose to prominence: its advantages in state management and stream processing were impressive, and the momentum of its open source community and its adoption by major vendors made it hard to refuse, so Flink was adopted to build the real-time data warehouse. At that time, however, development was chimney-style, driven case by case by the needs of business teams. The basic flow was: consume upstream Kafka data, process the business logic in Flink Java, and write the output to object storage; Spark SQL then performed secondary processing such as statistics on that data before delivering it to customers. The advantage of this stage was that Flink's strengths made data more real-time from source to destination, meeting the business side's timeliness and functional requirements. The disadvantage was that each feature was built only when a need arose, with no systematic construction or accumulation of a real-time data warehouse.

The second generation is based on Flink SQL+Spark SQL

Based on the technical accumulation and the problems uncovered in the previous stage, a new plan for building the real-time data warehouse was proposed. By then Flink SQL was functionally mature enough to cover every aspect of warehouse construction, and compared with Flink Java, SQL also reduces development and maintenance costs, so Flink SQL was chosen. At this stage the layered architecture of the real-time warehouse was designed (explained in detail later). The basic flow: consume upstream Kafka data, format it, and write it back to Kafka; the next layer reads that Kafka data, performs field processing and garbage-data filtering, and outputs to Kafka again; the final layer reads the Kafka data, performs dimension widening, and writes the result to object storage. Spark SQL then reads the data from object storage, computes statistics and other aggregations, and delivers the results to customers. The advantage of this stage was a layered warehouse architecture: data definitions were standardized at each layer, the layers were decoupled, chimney-style development was avoided, duplicated development was eliminated, and the real-time warehouse gradually matured. The disadvantage was that using Spark SQL for downstream statistics and summarization was not flexible enough: metrics had to be designed in advance, so it was often impossible to respond promptly to changing customer requirements.

The third generation is based on Flink SQL+StarRocks

As the construction of the real-time warehouse deepened, Spark SQL's drawbacks, limited flexibility and insufficient processing speed, became more and more prominent. At this point StarRocks came into view: its MPP architecture, vectorized engine, and multi-table join capabilities showed clear advantages in performance and ease of use, exactly the areas where Spark SQL fell short. After evaluation, we decided to replace Spark SQL with StarRocks in the real-time warehouse. At this stage the layered structure built with Flink SQL was unchanged, while the downstream statistical analysis previously done with Spark SQL was gradually migrated to StarRocks. Given StarRocks' strengths and the pain points we had encountered, we did not copy the previous Spark SQL model but chose a new one: ad hoc queries implemented directly in StarRocks. Previously, Spark SQL computed statistics and summaries and wrote the final results to object storage; now StarRocks summarizes the detailed data directly and the results are displayed on the front-end pages. This meets the business side's needs faster and more flexibly, reduces development workload, and shortens testing and release time. StarRocks' excellent performance keeps ad hoc queries fast, so the overall capability is more powerful and flexible, and delivery is quicker.

02

Introduction to self-developed Flink real-time computing scheduling platform

Existing pain points

  • Native task commands are complex, debugging is troublesome, and development costs are relatively high.

  • Connectors, UDFs, Jar task packages, and so on cannot be managed centrally; debugging is complicated and dependency conflicts are frequent.

  • There is no unified monitoring, alerting, or permission management over resources.

  • SQL task development is complicated; there is no easy-to-use editor or platform for code management and storage.

  • Basic tables, dimension tables, and Catalog have no recording and visualization platform.

  • Multi-version and cross-cloud tasks cannot be well managed.

  • Without a good log management mechanism, it is impossible to quickly locate problems in the production environment.

Platform architecture design

Real-time Flink scheduling platform architecture diagram:


The platform is mainly divided into three parts:

1. The Phoenix Web module is the user-facing part, responsible for:

  • Cluster deployment and task submission.

  • Management of internal business permissions within the company.

  • Support Catalog and multi-source information management.

  • Management of third-party dependency Jar packages such as UDFs and connectors.

  • Multi-type monitoring alarm and log management.

  • Visual SQL editing and validation, with multi-version storage.

2. Flink SQL Gateway and Flink Jar Gateway are services customized from the open source versions. They support SQL parsing and validation tailored to our business scenarios as well as Jar task submission, in local mode, Yarn-per-job mode, and Application mode; automatic Savepoints are also supported.

  • Perform SQL parsing and verification.

  • Loading the third-party dependencies required by SQL and Jar tasks.

  • The SQL task connects to the Catalog store for association and mapping.

  • Automatic management and recovery of Checkpoint and Savepoint.

  • Injection of Jar type task startup parameters.

  • Adaptive runtime configuration.

  • Multiple types of submission methods are adapted.

3. The hybrid multi-cloud module is mainly responsible for distributing task launches and managing information across clouds.

03

 Flink SQL real-time data warehouse layering practice

When building a real-time data warehouse with Flink SQL, the first problem is how to design the layered architecture. There is plenty of excellent industry experience to draw on; combined with our own situation, we finally adopted the following warehouse architecture:


ODS layer: the raw log layer. Upstream data sources such as Binlog logs, user behavior logs, and external data are synchronized into the warehouse here. Data of various sources and formats is parsed and formatted through unified UDF functions, and formatted JSON data is output.

DW layer: the data detail layer. Error-data filtering, field escaping, field-name unification, and similar processing happen here; the output data is sufficient for day-to-day basic analysis.

DM layer: the data model layer. Dimension widening is performed here to supplement relevant common information, and the data is then divided into domains by business. The output has richer dimensions and can meet the data needs of advanced analysis.

ST layer: the data application layer. Data is summarized by business, function, and other dimensions and handed to front-end pages for display; the output can be delivered to Web, App, mini-program, and other applications.
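As a sketch only, with hypothetical topic and field names rather than Mango TV's actual schema, a DW-layer cleaning job in this layered flow might look like:

```sql
-- Hypothetical DW-layer job: read formatted ODS data from Kafka,
-- filter garbage records, unify field names, and write back to Kafka.
CREATE TABLE ods_user_log (
    `uid`        STRING,
    `event`      STRING,
    `event_time` TIMESTAMP(3),
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    'topic' = 'ods_user_log',
    'properties.bootstrap.servers' = 'kafka:9092',
    'format' = 'json'
);

CREATE TABLE dw_user_log (
    `uid`        STRING,
    `event`      STRING,
    `event_time` TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'dw_user_log',
    'properties.bootstrap.servers' = 'kafka:9092',
    'format' = 'json'
);

-- Drop records with a missing uid or an empty event name.
INSERT INTO dw_user_log
SELECT uid, event, event_time
FROM ods_user_log
WHERE uid IS NOT NULL AND event <> '';
```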

04

Problems encountered in the production process of Flink SQL real-time data warehouse

Many problems were encountered while building the real-time data warehouse. A few typical ones, and how we approached them, are described below:

1. Multi-table association. This is an essential, heavily used capability when building a data warehouse. In the early days of building the real-time warehouse with Flink SQL, we were genuinely confused by Flink's dazzling array of join types, especially for multi-table association: some dimension tables lived in Hive, some in MySQL, and some even in other OLAP systems. Choosing the right association method was a big problem at the time. After many attempts, balancing performance and functionality, we settled on the following rules:

  • Stream table joined with a small dimension table: use Lookup Join. When the dimension table holds fewer than about 100,000 rows, a Hive table can serve as the dimension table. Since most dimension data in the offline warehouse is already in Hive, it can be reused directly, avoiding extra import/export work, with no performance bottleneck. After the dimension table is updated each hour, Flink SQL also picks up the latest data.

  • Stream table joined with a large dimension table: still use Lookup Join. When the dimension table holds between roughly 100,000 and 10 million rows, a MySQL table can serve as the dimension table; a Hive dimension table can no longer meet the performance requirements at this scale. Export the data to MySQL, and use the cache mechanism to meet the throughput requirements.

  • Stream table joined with a stream table: use Interval Join, with the time fields of the two streams bounding the association range. This is currently our most common association method, and its usage is also the closest to offline development.
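Minimal sketches of the two join patterns above, with hypothetical table names (the Lookup Join assumes `dim_city` is declared on a lookup-capable connector such as JDBC, and that `proc_time` is a processing-time attribute of the stream table):

```sql
-- Lookup Join: stream table associated with a MySQL dimension table.
SELECT o.order_id, o.uid, d.city_name
FROM orders AS o
JOIN dim_city FOR SYSTEM_TIME AS OF o.proc_time AS d
  ON o.city_id = d.city_id;

-- Interval Join: two stream tables, with the association range
-- bounded by the time fields of both streams.
SELECT c.uid, c.click_time, p.pay_time
FROM clicks AS c
JOIN payments AS p
  ON c.uid = p.uid
 AND p.pay_time BETWEEN c.click_time
                    AND c.click_time + INTERVAL '1' HOUR;
```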

2. Complex dimension table processing. In some complex cleaning scenarios, the dimension table data must itself go through one or more layers of processing before it can be joined. In an offline warehouse this can be done in one step by writing multi-level subqueries inside the join, but Flink SQL does not support it; the underlying mechanism rejects it. After much trial and struggle, our final solution was to preprocess the dimension data in Hive and have the real-time warehouse consume the preprocessed tables. This is only a transitional solution: we have learned from the community that a new mechanism is coming that will allow joining dimension tables after arbitrarily complex computation on them. The Flink community really does move fast.

3. Oversized State. When two stream tables are joined or summary statistics are computed, Flink's mechanism caches the data in State. When the State grows too large, GC becomes frequent and the task eventually fails. After studying Flink's memory mechanism, our solutions were:

  • Shorten the time range: reduce the association window of the two streams as far as the business allows.

  • Adjust the size of Managed Memory: raise the Managed Memory fraction and appropriately reduce the other memory areas.

  • Set a TTL on State to avoid caching too much data.
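For illustration, the TTL and Managed Memory adjustments above can be made in the Flink SQL client like this (the values are examples, not recommendations):

```sql
-- Expire idle state after one day instead of keeping it forever.
SET 'table.exec.state.ttl' = '1 d';
-- Give Managed Memory a larger share of TaskManager memory
-- (the default fraction is 0.4).
SET 'taskmanager.memory.managed.fraction' = '0.6';
```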

4. Frequent "Checkpoint expired before completing" exceptions. In production we found some tasks reporting this error repeatedly. It means a Checkpoint cannot finish: Flink's Checkpoint uses a Barrier mechanism to guarantee exactly-once consistency, so if a batch of data cannot be processed in time, the Checkpoint cannot complete (interested readers can look into the details). The error has many causes, each with its own remedy. The scenarios and solutions we met are listed below:

  • Checkpoint timeout set too short. This is a common and easy case: the configured Checkpoint timeout is too small, so the timeout fires before the Checkpoint completes. The fix is simply to lengthen it; depending on the task type we generally set it between 6 seconds and 2 minutes.

  • Backpressure in the task, also very common. A task contains multiple operators, and one slow operator drags down the execution of the whole task, which in turn delays Checkpoint completion (interested readers can look into the mechanics). The fix is to find the slow Task in the Web UI, analyze the specific cause, and resolve it.

  • Insufficient memory. Some background first: in production we generally use the RocksDB state backend, which retains full checkpoints by default. For tasks such as joins and grouped statistics, intermediate results are cached in State, and State memory defaults to 40% of total memory. When that is not enough, GC becomes frequent, which also delays Checkpoint execution. The solutions:

    • Increase TaskManager memory; the other memory areas grow along with it.

    • Raise the Managed Memory fraction via the parameter taskmanager.memory.managed.fraction, which in practice can be turned up to as much as 90% depending on the situation. This only enlarges the Managed Memory area, so it is the method to use when overall memory resources are not abundant.

    • Switch to incremental checkpoints: adjust the State TTL to fit the business and enable incremental checkpointing. This can solve the problem even without adjusting memory sizes.
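The fixes above map to ordinary Flink configuration options; a sketch with illustrative values:

```sql
-- Lengthen the checkpoint timeout (illustrative value).
SET 'execution.checkpointing.timeout' = '10 min';
-- Use the RocksDB state backend with incremental checkpoints.
SET 'state.backend' = 'rocksdb';
SET 'state.backend.incremental' = 'true';
-- Raise Managed Memory's share of TaskManager memory.
SET 'taskmanager.memory.managed.fraction' = '0.6';
```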

5. An accidental discovery about the IF function in Flink SQL: when returning a String, it pads to the maximum length. What does that mean? Take if(condition, stringA, stringB), where stringA has length 10 and stringB has length 2. If condition = false and stringB is returned, stringB is padded to length 10 with trailing spaces. This is something to watch out for. We later learned that the behavior was fixed in version 1.16.3; we are on 1.15, so if you hit it, either replace IF with CASE WHEN or upgrade Flink to 1.16.3 or above.
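A hypothetical illustration of the workaround, replacing IF with CASE WHEN so no padding occurs on versions before 1.16.3:

```sql
-- Before Flink 1.16.3, IF(grade >= 60, 'passed', 'no') could return
-- 'no' padded with trailing spaces to length 6. CASE WHEN avoids it.
-- `scores` and `grade` are hypothetical names.
SELECT
  CASE WHEN grade >= 60 THEN 'passed' ELSE 'no' END AS result
FROM scores;
```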

05

StarRocks selection background and problems

In the previous architecture, we used the Flink stream processing engine to clean raw logs, widen data, and perform light aggregation, then landed the results in a distributed file system or object storage. Offline Spark SQL batches scheduled at five-minute granularity processed the data further, and the results were queried through engines such as Presto. In production, this architecture gradually revealed many problems.

For example:

  • Repeated computation. The same raw data is cleaned repeatedly in different tasks, and associations that need several raw datasets repeat the cleaning again. This wastes a lot of computing resources, and the reusability of code and data flows is poor.

  • To satisfy both the historical cumulative values of offline batch processing and the metrics of the current 5-minute window, there is a real risk that a metric cannot be computed within 5 minutes during traffic peaks, or late in the day once the day's values have accumulated. The timeout risk is significant, and the business then reports latency in real-time metrics.

  • Offline Spark batch processing is weak for multi-dimensional combined analysis that also needs real-time performance. Online business has spawned many real-time scenarios; meanwhile, refined operations and the democratization of analysis have spawned multi-dimensional analysis requirements. Both demand extremely fine-grained, dimension-rich underlying data, and their superposition produced real-time multi-dimensional analysis scenarios. We kept adding dimension combinations, result fields, and computing resources to keep up, but it was still not enough.

  • Data timeliness requirements grow by the day; many scenarios now require second- or millisecond-level freshness, which the previous 5-minute approach cannot meet.

  • Previous real-time tasks often needed stream-to-stream joins inside Flink task memory. Because the arrival times of multiple upstream streams are inconsistent, it is hard to design a suitable window to widen the data in the engine. With Flink Interval Join, a long time gap between streams makes the state data very large, and resorting to state primitives such as MapState is overly customized.

  • Flink's cleaning or computation results may need multiple storage media. Detailed data might go to a distributed file system or object storage (Flink + HDFS); business update streams might use Flink CDC + HBase (Cassandra or another key-value store); retraction streams generated by Flink might use Flink + MySQL (Redis); risk control data or traditional fine-grained queries might use Flink + Elasticsearch; large-scale log metric analysis might use Flink + ClickHouse. This is hard to unify, consumes a lot of resources, and is costly to maintain.

  • During large events or major program launches, real-time data volume surges. Under massive real-time writes, write latency is high, write efficiency is low, and data backlogs build up.

Overall, the early architecture had these problems:

  • The data sources are diverse and the maintenance cost is relatively high.

  • Insufficient performance, large write delay, data backlog in big promotion scenarios, and poor interactive query experience.

  • The data sources are fragmented and cannot be joined in queries, creating many data islands. From a development perspective, each engine requires its own learning and development investment, and program complexity is high.

  • High real-time requirements, fast development efficiency, and strong reusability of code or data.

  • Real-time task development lacked a shared set of standards; each task was built independently.

To this end, we ran a simple performance comparison in the test environment, as follows:

Comparison environment: StarRocks, 4 nodes of 16C/128G; Presto, 22 nodes of 32C/256G (non-exclusive).

Data volume: event table with 10 billion rows in total, roughly 10 million deduplicated entries per day.

Test case                  Presto (s)   StarRocks (s)
Single-table aggregation   13.1         5
Association (join)         19           8
Retention                  24           15
Window function            16           8
Funnel                     3.5          3.2
Multi-table join           36           19

The test used 4 BE servers of 16C/128G. The conclusion: StarRocks can basically meet query requirements over tens of billions of rows, and despite the large gap in resources its performance was significantly better than Presto's, an average efficiency gain of 2-3x.

06

Real-time analysis of data warehouse based on Flink SQL+StarRocks

On top of the Flink SQL warehouse layering already in place, we upgraded from StarRocks 2.5.x to the StarRocks 3.0.x storage-compute separation version and put it into large-scale production.

The architecture diagram of the integration of real-time and offline lake warehouses:



detailed model

The most common log data in a big-data production environment is characterized by: large volume; flexible, complex multi-dimensional calculations; many metrics; strong real-time requirements; second-level high-performance queries; the need for simple, stable real-time stream writes; large-table joins; and high-cardinality deduplication.

Flink SQL + StarRocks can satisfy all of these. First, Flink SQL on the real-time platform quickly cleans and widens the streaming log data. StarRocks then provides the out-of-the-box Flink-Connector-StarRocks connector, with exactly-once and transaction support, importing data with low latency via Stream Load.


With this efficient and simple Flink SQL table-creation mode, batches of millions of rows are written quickly. In production, multi-dimensional user-access counts and user deduplication on single tables of over a billion rows return at second-level latency.
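A sketch of such a sink table, with hypothetical names and connection addresses; the options shown are those of flink-connector-starrocks, which imports data into StarRocks via Stream Load under the hood:

```sql
CREATE TABLE starrocks_sink (
    `uid`        STRING,
    `event`      STRING,
    `event_time` TIMESTAMP(3)
) WITH (
    'connector' = 'starrocks',
    'jdbc-url' = 'jdbc:mysql://fe:9030',
    'load-url' = 'fe:8030',
    'database-name' = 'dw',
    'table-name' = 'user_event',
    'username' = 'root',
    'password' = ''
);

-- Stream the cleaned detail data into StarRocks.
INSERT INTO starrocks_sink
SELECT uid, event, event_time FROM dw_user_log;
```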

primary key model

In OLAP data warehouses, mutable data is generally frowned upon.

For the data change method in the data warehouse:

  • Method 1: some OLAP warehouses provide updates via a Merge-on-Read model to apply data changes (for example, ClickHouse).

  • Method 2: put simply, create a new partition table, delete the data in the old partition, batch-load the modified data into the new partition, and complete the change through a partition exchange. This batch-reload approach rebuilds tables and deletes partition data; the process is complicated and error-prone.

Merge-on-Read is simple and efficient on write, but consumes heavy resources merging versions on read; moreover, because of the merge operator, predicates cannot be pushed down and indexes cannot be used, which seriously hurts query performance. StarRocks instead provides a primary key model based on Delete-and-Insert, which avoids the pushdown problem caused by version merging. The primary key model suits scenarios where data must be updated in real time: it handles row-level updates well, supports million-level TPS, and is especially suitable for synchronizing MySQL or other business databases into StarRocks.
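A minimal sketch of a primary key table in StarRocks, with hypothetical columns; incoming rows sharing the same `order_id` are applied as Delete-and-Insert updates:

```sql
CREATE TABLE orders (
    order_id  BIGINT NOT NULL,   -- primary key column, listed first
    uid       BIGINT,
    status    VARCHAR(32),
    amount    DECIMAL(10, 2),
    update_ts DATETIME
)
PRIMARY KEY (order_id)
DISTRIBUTED BY HASH (order_id);
```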

Moreover, Flink CDC combines with StarRocks to achieve end-to-end full plus incremental real-time synchronization from the business database to the OLAP warehouse: one task covers both batch and real-time, efficiently and stably. The primary key model also absorbs Flink retraction streams, and supports conditional updates and partial (per-column) updates, a combination of advantages that traditional OLAP databases do not offer all at once.


The Flink CDC + StarRocks combination solves many production problems. A real-time data analysis system built jointly on StarRocks and Flink will, to some extent, overturn existing limitations and form a new paradigm of real-time data analysis. It accelerates the integration of real-time log data with business data, eliminates traditional offline batch extraction, unifies offline and real-time data, and speeds up stream-batch integration.

aggregate model

There is another common scenario in the real-time warehouse: we care little about the raw detail and mostly run summary queries such as SUM, MAX, and MIN; old data is rarely updated and only new data is appended. Here the aggregate model is worth considering. At table creation it supports defining sort keys and metric columns and specifying an aggregation function per metric column; rows sharing the same sort key are aggregated. For statistics and summaries, the aggregate model reduces the data scanned at query time and improves query efficiency.

Previously we did this kind of statistics in Flink, where the state lives in memory and keeps growing, consuming substantial resources. Replacing simple Flink statistics with the Flink SQL + StarRocks aggregate model means Flink only needs to clean the detail data and import it into StarRocks, which is efficient and stable.

In actual production, we mainly use it to count user viewing time, clicks, order statistics, etc.
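For example, a viewing-time summary could be modeled with hypothetical columns like this; rows with the same sort key (`uid`, `dt`) are pre-aggregated on load:

```sql
CREATE TABLE user_watch_agg (
    uid       BIGINT,
    dt        DATE,
    watch_sec BIGINT SUM DEFAULT "0",  -- total viewing seconds
    clicks    BIGINT SUM DEFAULT "0"   -- total clicks
)
AGGREGATE KEY (uid, dt)
DISTRIBUTED BY HASH (uid);
```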



materialized view

Applications in a data warehouse environment often run complex queries over many large tables, commonly involving joins and aggregations across billions of rows in multiple tables. To get real-time multi-table association results, we used to put this work into the Flink real-time warehouse, layering the association, merging, and statistics jobs and finally emitting a result layer. Processing such queries usually consumes a great deal of system resources and time, making them extremely expensive.

Now consider the new approach of Flink SQL + StarRocks for this large-scale layered computation: Flink SQL handles only simple cleaning, while the heavy, repetitive computation logic is pushed down to StarRocks. The real-time stream lands continuously, and multi-level materialized views are modeled inside StarRocks. StarRocks materialized views support not only associations between internal tables but also associations between internal and external tables: if your data is in MySQL, Hudi, Hive, and so on, it can be accelerated through a StarRocks materialized view with scheduled refresh rules, avoiding manually scheduled association jobs. The biggest feature is transparent reuse: when a new query hits a base table that has a materialized view, the system automatically judges whether the precomputed results in the view can serve the query; if so, it reads them directly from the relevant materialized views, avoiding repeated computation that wastes resources and time. The higher the query frequency or the more complex the statement, the more obvious the performance gain.
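A sketch of an asynchronous materialized view with a periodic refresh rule (table and column names are hypothetical):

```sql
-- Refreshed asynchronously every 10 minutes; queries against the
-- base table may be transparently rewritten to use this view.
CREATE MATERIALIZED VIEW daily_uv_mv
REFRESH ASYNC EVERY (INTERVAL 10 MINUTE)
DISTRIBUTED BY HASH (dt)
AS
SELECT dt, city, COUNT(DISTINCT uid) AS uv
FROM dw.user_event
GROUP BY dt, city;
```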


Real-time is the future, and StarRocks is gradually realizing this capability. The joint solution of StarRocks and Flink to build a real-time data analysis system will subvert some existing restrictions to a certain extent and form a new paradigm of real-time data analysis.

07

future outlook

Integrated lake and warehouse

At present, Mango TV has completed building a stream-batch integrated data warehouse; the future focus will be on building an integrated lakehouse.

A data lake is characterized by the ability to store raw data of various types and formats, including structured data, semi-structured data, and unstructured data. A data warehouse is about structuring and organizing data to meet specific business needs.

The integration of lake and warehouse integrates the characteristics of data warehouse and data lake to create a unified data center and realize centralized management of data. The integrated architecture of lake and warehouse can provide better security, cost-effectiveness and openness. It can not only store and manage a large amount of raw data, but also organize the data into a structured form to facilitate analysis and query.

Through the integration of lake and warehouse, Mango TV can provide richer data services to the company, support business decision-making and innovation, and realize comprehensive control and management of data, including data collection, storage, processing and analysis. At the same time, the integration of lake and warehouse can also support the use of multiple computing engines and tools, such as Flink, Spark, Hive, etc., making data processing and analysis more flexible and efficient.

low code

The current development method is to write SQL and submit tasks on the self-developed platform. For many cleaning scenarios this is mostly repetitive work, with plenty of room for improvement. Low-code is a popular concept with clear advantages for cutting costs and raising efficiency. Our next plan is to realize low-code step by step. The first stage connects the real-time platform with the data reporting platform: by reading the relevant metadata in the reporting platform, corresponding data-cleaning tasks can be generated automatically, freeing up productivity and improving work efficiency and delivery speed.

The advantage of low-code is that it automates and simplifies the repetitive work in the development process, reducing developers' coding workload. Through visual interfaces, developers can complete tasks by dragging and configuring, without writing much code. This not only improves development efficiency but also reduces the risk of errors.

By implementing a low-code development approach, Mango TV will be able to speed up data processing and analysis and improve the overall efficiency of the team. In addition, low code can also reduce the technical requirements for developers, enabling more people to participate in data processing and analysis.

To sum up, based on the characteristics of Flink technology, Mango TV will focus on realizing the integrated architecture of lake and warehouse in the future data warehouse construction, so as to realize the comprehensive management and utilization of data. At the same time, Mango TV plans to gradually implement low-code development methods to improve development efficiency and delivery speed. These initiatives will further promote the development of Mango TV in the field of long video data analysis, and provide stronger support for business decision-making and innovation.



Origin blog.csdn.net/weixin_44904816/article/details/132157900