KDD Cup 2020 Debiasing Track Champion Solution and Its Practice in Meituan Advertising

ACM SIGKDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD for short) is the premier international conference in the field of data mining.

 

Based on its own business scenarios, the search advertising algorithm team of Meituan's to-store advertising platform has been continuously optimizing and innovating on cutting-edge technologies. Team members Hu Ke, Qi Yi, Qu Tan, Ming Jian, Bo Hang, and Lei Jun, together with Tang Xingyuan from the University of Chinese Academy of Sciences, formed the team Aister and entered three tracks: Debiasing, AutoGraph, and Multimodalities Recall. The team ultimately won first place in the Debiasing track (1/1895), first place in the AutoGraph track (1/149), and third place in the Multimodalities Recall track (3/1433).

This article introduces the technical solution to the Debiasing track, as well as the team's application of and research into bias elimination in the advertising business.

Background

The KDD Cup is a top international event in the field of data mining hosted by SIGKDD. Held annually since 1997, it is currently the most influential competition in the field. It is open to both industry and academia, gathering top experts, scholars, engineers, and students in data mining worldwide, and providing practitioners with a platform for academic exchange and the presentation of research results. KDD Cup 2020 featured five competition problems (four tracks), covering data bias (Debiasing), multimodalities recall (Multimodalities Recall), automated graph learning (AutoGraph), adversarial learning, and reinforcement learning.

Figure 1 KDD 2020 conference

In advertising systems, eliminating data bias is one of the most challenging problems and has been a research hotspot in academia in recent years. As product forms and algorithm technology evolve, biases keep accumulating in the system. The search advertising algorithm team has made breakthroughs on the data bias problem that brought significant improvements in business results. In the Debiasing track in particular, building on this technical accumulation, the team took first place in a fierce competition among 1,895 teams worldwide, leading the runner-up by 6.0% on the final evaluation metric (NDCG@50_half). Below we introduce the technical solution to the Debiasing track, as well as the team's application and research on bias elimination in the advertising business, in the hope of being helpful or inspiring to readers engaged in related work.

Attachment: open-source code of the technical solution

Figure 2 KDD Cup 2020 Debiasing competition TOP 10 list

Competition question introduction and problem analysis

Overview of the bias elimination problem

Most e-commerce and retail companies use massive amounts of data to build search and recommendation systems on their websites to promote sales. As this trend develops and traffic grows massively, recommendation systems face various challenges. One challenge worth exploring is AI fairness in recommender systems [1,2]: if a machine learning system is given only short-term goals (such as short-term clicks or transactions), simply optimizing for them leads to a serious "Matthew effect", in which popular products receive ever more attention while less popular products are increasingly forgotten, producing popularity bias in the system [3]. In addition, most models and systems iterate on Pageview (exposure) data, which is only a subset of the true candidate set already selected by the model; continuously training on data and feedback selected by the model itself creates selection bias [3].

The accumulation of popularity bias and selection bias makes the "Matthew effect" in the system ever more severe. AI fairness is therefore crucial to the continuous optimization of recommender systems, and it has a profound impact on the development of recommender systems and their ecosystem.

Because it is not a well-defined optimization problem, bias elimination is a very challenging problem in current recommender systems and a research hotspot in academia. This KDD Cup track is likewise built around the bias problem: it asks for unbiased estimation on an e-commerce next-item prediction task.

The organizers provide user click data, product multi-modal data, and user feature data. The click data records the products each user clicked historically and the click timestamps; the multi-modal data consists mainly of a text vector and an image vector for each product; the user feature data includes age, gender, city, and so on. The data involves more than one million clicks, about 100,000 products, and over 30,000 users, and is divided into ten stages by time window. The final score is based on the last three stages.

To keep the focus on bias elimination, the track provides the evaluation metrics NDCG@50_full, NDCG@50_half, hitrate@50_full, and hitrate@50_half; the two NDCG metrics, NDCG@50_full and NDCG@50_half, are used for scoring.

  • NDCG@50_full: consistent with the standard NDCG metric for recommender systems, it evaluates the average ranking quality of the top-50 product list recommended for each user request over the entire evaluation data set, called the full evaluation set.

  • NDCG@50_half: focusing on the bias problem, the half of the clicked products with the least historical exposure is extracted from the full evaluation set, and the recommendation lists for these products are evaluated with NDCG; this evaluation set is called the half evaluation set.

The scoring first selects the top 10% of teams by NDCG@50_full, and then ranks those teams by NDCG@50_half. Because NDCG@50_half evaluates top-ranking quality on long-tail data, it better measures how well contestants optimize against data bias. Unlike the traditional closed-data-set click-through-rate (CTR) estimation problem, these data characteristics and evaluation methods put the emphasis on bias optimization.

Data analysis and problem understanding

Data analysis and problems: the user feature data covers 35,444 users in total, but only 6,789 of them have features, a coverage rate of just 19.15%. Because coverage is so low and there are only three features (age, gender, city), we found these features useless for the task. The product feature data covers 117,720 products, of which 108,916 have text and image vectors, a coverage rate of 92.52%; text similarity and image similarity between products can be computed from these vectors. Given the scarcity of user and product information, making good use of the multi-modal product vectors is extremely important for the whole task.

Selection bias analysis: as shown in Table 1, we compared the candidate sets recalled by two Item-Based collaborative filtering methods, i2i (item2item) click co-occurrence and i2i vector similarity. Due to system performance limits, we cap the candidate set at 1,000 items. Both recall methods achieve a low hitrate on the evaluation set, so whichever method is used, the system exhibits a large selection bias: the samples recommended to users are selected by the system rather than drawn from the full candidate set. The true candidate set greatly exceeds what is recommended to users, producing selection bias in the training data.

Furthermore, we found that click co-occurrence has a higher hitrate on the full evaluation set than on the half evaluation set, indicating a preference for popular products, whereas vector similarity has the same hitrate on both, indicating no popularity preference. Meanwhile, the candidate sets recalled by the two methods overlap by only 4%. We therefore need to combine the click co-occurrence and vector similarity relationships to generate a larger training set and thereby alleviate selection bias (a minimal merging sketch follows Table 1).

Table 1 Recall hitrate of i2i click co-occurrence and i2i vector similarity
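A minimal sketch of combining the two recall sources, assuming each source yields a ranked list of candidate item IDs per source item (hypothetical inputs; the competition code merges them through the i2i graph described later). Because the two score scales are not comparable, the ranked lists are interleaved rather than the raw scores mixed:

```python
from itertools import chain, zip_longest

def merge_candidates(click_cands, vector_cands, max_len=1000):
    """Interleave two ranked candidate lists, de-duplicating the ~4% overlap."""
    seen, merged = set(), []
    for cand in chain.from_iterable(zip_longest(click_cands, vector_cands)):
        if cand is not None and cand not in seen:
            seen.add(cand)
            merged.append(cand)
        if len(merged) == max_len:
            break
    return merged

# e.g. merge_candidates(["a", "b", "c"], ["c", "d"]) -> ["a", "c", "b", "d"]
```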

As shown in Figure 3, we analyzed product popularity: the abscissa is a product's click frequency, i.e. its popularity, and the ordinate is the number of products. The popularity axis is truncated in the figure; the true maximum is 228. Most products have low popularity, following a long-tailed distribution. The two box plots show the popularity distribution of products in the full evaluation set and in the half evaluation set, and make the popularity bias in the data visible: the half evaluation set consists of the lower-popularity clicked products within the full set, while the remaining half have higher popularity. Constructing samples from clicked products therefore yields more popular positive products in the data, which produces popularity bias.

Figure 3 Popularity bias of products

Problem challenges

The main challenge of this competition is to eliminate bias in the recommendation system. From the data analysis above, there are two main types: Selection Bias and Popularity Bias.

  • Selection bias: the exposure data is selected by the model and the system, and is inconsistent with the full candidate set of the system [4,5].

  • Popularity bias: the number of historical clicks on products follows a long-tailed distribution, so popularity bias exists between head and tail products; how to resolve it is one of the core challenges of the competition [6,7].

Given these biases, the traditional Pageview (exposure) -> Click modeling approach to click prediction cannot reasonably model users' true interests, and in our preliminary attempts we found it performed poorly. Departing from traditional user-interest modeling, we first apply a u2i2i (user2item2item) conversion, replacing the u2i (user2item) interest modeling of traditional CTR estimation with i2i modeling. In addition, we generate candidate samples through multi-hop walks on an i2i graph rather than from Pageview samples. We also introduce popularity penalties in the graph construction and the i2i modeling. Together, these effectively resolve the bias challenges above.

Competition technical solution

To address the challenges of selection bias and popularity bias, we designed our modeling to optimize both biases effectively. Existing CTR modeling can be understood as u2i modeling, which usually describes a user's preference for a candidate product in a specific request context. Our method instead learns the relationship between each product in the user's click history and the candidate product, which can be understood as u2i2i modeling. This approach is better suited to learning multiple i2i relationships and easily extends one-hop relationships in the i2i graph to multi-hop relationships; these multiple i2i relationships can explore more unbiased data, enlarging the candidate set and the training set and thereby alleviating selection bias.

At the same time, considering the popularity bias caused by popular products, we apply a popularity penalty to edge weights during graph construction, giving multi-hop walks more opportunity to explore low-popularity products; we also introduce popularity penalties in the modeling and post-processing steps to further ease popularity bias.

In the end, we built a ranking framework based on i2i modeling, shown in Figure 4, which divides the recommendation process into three stages. The first stage constructs an i2i graph from user behavior data and product multi-modal data, and performs multi-hop walks on the graph to generate i2i candidate samples. The second stage splits users' click sequences to build an i2i relationship sample set from those candidates, applies automated feature engineering to the sample set, and trains with a popularity-weighted loss function to eliminate popularity bias. The third stage aggregates the i2i scores produced by the i2i model according to the user's click sequence, and post-processes the scored product list to remove popularity bias, yielding the final ranked recommendation list. We introduce the three stages in detail below.

Figure 4 Sorting framework based on i2i modeling

Generation of i2i candidate samples based on multi-hop walks

To explore more unbiased i2i candidate samples for i2i modeling and thereby alleviate selection bias, we construct an i2i graph with multiple edge relations and introduce a popularity penalty during edge construction to reduce popularity bias. As shown in Figure 5 below, the process consists of three steps: i2i graph construction, i2i multi-hop walks, and i2i candidate sample generation.

Figure 5 Generation of i2i candidate samples based on multi-hop walks

The first step is the construction of the i2i graph. The graph has one type of node, the product node, and two types of edges: click co-occurrence edges and multi-modal vector edges. A click co-occurrence edge is built from users' historical product click sequences; its weight starts from the co-occurrence frequency of the two products across users' histories, incorporates a time-interval factor for each co-occurrence, and adds a user activity penalty and a product popularity penalty. The time-interval factor reflects that the shorter the time between two co-occurring clicks, the more similar the two products; the user activity penalty, measured by the number of a user's historical product clicks, penalizes active users for fairness between active and inactive users; and the product popularity penalty, based on a product's historical click frequency, penalizes popular products to ease popularity bias [8].
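The original article gives the weight formula as an image; a plausible reconstruction consistent with the description above (a sketch, not necessarily the exact competition formula) is:

$$
w_{ij} = \sum_{u \in U(i,j)} \frac{f\left(\left|t_{u,i} - t_{u,j}\right|\right)}{\log\left(1 + |H_u|\right)} \cdot \frac{1}{\log(1 + d_i)\,\log(1 + d_j)}
$$

where $U(i,j)$ is the set of users who clicked both products $i$ and $j$, $f(\cdot)$ is a decreasing function of the time interval between the two clicks, $|H_u|$ is the length of user $u$'s click history (the activity penalty), and $d_i$, $d_j$ are the historical click counts, i.e. popularities, of the two products (the popularity penalty).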

A multi-modal vector edge is built from the cosine similarity between the text vectors and image vectors of two products. For each product, K-nearest-neighbor search over the vectors finds its K nearest neighbors, and K edges are constructed between the product and those neighbors, with the vector similarity as the edge weight. Multi-modal vector edges are independent of popularity, which helps alleviate popularity bias.
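A minimal sketch of this edge construction, assuming per-product text and image vectors as NumPy arrays (the competition code may combine the two modalities differently):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_vector_edges(item_ids, text_vecs, image_vecs, k=10):
    """Connect each product to its K nearest neighbors by cosine similarity,
    using the similarity as the edge weight (multi-modal vector edges)."""
    def l2norm(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    # Concatenating L2-normalized text and image vectors makes the cosine
    # similarity on the concatenation an average of the two modalities.
    vecs = np.hstack([l2norm(text_vecs), l2norm(image_vecs)])

    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(vecs)
    dist, idx = nn.kneighbors(vecs)  # the first neighbor is the item itself

    edges = {}  # (item_i, item_j) -> similarity weight
    for row, item in enumerate(item_ids):
        for d, j in zip(dist[row][1:], idx[row][1:]):
            edges[(item, item_ids[j])] = 1.0 - d  # cosine similarity
    return edges
```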

The second step explores multiple i2i relationships through multi-hop walks. We enumerate combinations of one-hop i2i relationships to form different types of two-hop i2i relationships, and after constructing a two-hop relationship we remove the original one-hop relationships from it to avoid redundancy. The i2i relationships include: i2i from click one-hop neighbors, i2i from vector one-hop neighbors, i2i from click-click two-hop walks, i2i from click-vector two-hop walks, and i2i from vector-click two-hop walks. A one-hop i2i relationship score comes from the one-hop edge weight, while a multi-hop score is computed by the formula below: multiply the edge weights along each path to get a path score, then average the scores of all paths. Through multi-hop walks across different edge types, more products get the chance to form multi-hop relationships with other products, expanding the candidate set and alleviating selection bias.
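The multi-hop score formula is likewise an image in the source; a reconstruction consistent with the description (edge-weight product per path, averaged over paths) is:

$$
s(i, j) = \frac{1}{\left|P_{ij}\right|} \sum_{p \in P_{ij}} \prod_{(u,v) \in p} w_{uv}
$$

where $P_{ij}$ is the set of multi-hop paths between products $i$ and $j$, and $w_{uv}$ is the weight of edge $(u,v)$ along a path.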

The third step sorts and truncates each product's candidate set by i2i score under each i2i relationship. Figure 6 below shows the similarity heat map between the i2i relationships, where similarity is computed as the overlap between the candidate sets constructed by two relationships. Based on the similarity between different relationships, we determine the size of each candidate set, obtaining an i2i candidate set per product per relationship for subsequent i2i modeling.

Figure 6 Heat map of i2i relationship similarity

i2i modeling based on popularity bias optimization

We use the u2i2i conversion to turn the traditional u2i-based CTR estimation approach into i2i modeling, which makes it easy to use multi-hop i2i relationships, and we introduce a loss function with a popularity penalty so that the i2i model learns in a direction that mitigates popularity bias.

As shown in Figure 7 below, we split each user's preceding click sequence, take each clicked product as a source item, and draw target items from its multi-hop walk candidate set in the i2i graph to form an i2i sample set. A sample's label is determined by whether the product the user clicked next matches the target item. In this way we turn sequence modeling of user choices [9] into i2i-based modeling, and inject the user's sequence information indirectly through the time difference and click interval between the two product clicks, emphasizing the learning of i2i relationships so as to eliminate selection bias (a generation sketch follows Figure 7). The final recommended ranking for a user can then be obtained by sorting the target items by the user's i2i scores.

Figure 7 i2i training sample generation
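A minimal sketch of this sample generation, assuming a time-ordered click sequence and the walk candidates from the previous stage (max_lookback and the feature names are hypothetical; the original pipeline attaches many more features):

```python
def build_i2i_samples(click_seq, i2i_candidates, max_lookback=3):
    """click_seq: list of (item_id, timestamp) ordered by time;
    i2i_candidates: item_id -> {candidate item_id: walk score}."""
    samples = []
    for nxt in range(1, len(click_seq)):
        next_item, t_next = click_seq[nxt]  # the click to be predicted
        for src_pos in range(max(0, nxt - max_lookback), nxt):
            source_item, t_src = click_seq[src_pos]  # a recent click as source item
            for target_item, walk_score in i2i_candidates.get(source_item, {}).items():
                samples.append({
                    "source_item": source_item,
                    "target_item": target_item,
                    "walk_score": walk_score,
                    "time_gap": t_next - t_src,       # sequence info as a side feature
                    "click_interval": nxt - src_pos,  # positional distance of the pair
                    "label": int(target_item == next_item),
                })
    return samples
```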

As shown in Figure 8, we use automated feature engineering to explore high-order feature combinations, easing the difficulty of abstracting business meaning out of the bias problem. After manually constructing basic features such as frequency features, graph features, behavioral features, and time-related features, we divide them into three types (categorical, numerical, and temporal) and build high-order combinations on top of them. Each newly combined feature is fed into the next round of combination, reducing the complexity of high-order combination. We also perform fast feature selection based on feature importance and NDCG@50_half, digging out deeper patterns while saving a great deal of manual effort (a sketch of the loop follows Figure 8).

Figure 8 Automation feature engineering
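A sketch of such a combination-and-selection loop over a pandas DataFrame; evaluate_ndcg_half is a hypothetical callback that trains a quick model and returns an importance score per new feature, standing in for the importance/NDCG@50_half selection described above:

```python
import itertools

def auto_feature_rounds(df, cat_cols, num_cols, evaluate_ndcg_half,
                        n_rounds=2, top_k=20):
    """Iteratively build high-order feature combinations, keep only survivors."""
    current_num = list(num_cols)
    for _ in range(n_rounds):
        new_cols = []
        # categorical x numerical: per-category statistics of a numeric feature
        for c, n in itertools.product(cat_cols, current_num):
            col = f"{c}__mean__{n}"
            df[col] = df.groupby(c)[n].transform("mean")
            new_cols.append(col)
        # numerical x numerical: simple ratio combinations
        for a, b in itertools.combinations(current_num, 2):
            col = f"{a}__div__{b}"
            df[col] = df[a] / (df[b] + 1e-9)
            new_cols.append(col)
        # fast selection: keep only the combinations that help NDCG@50_half
        importance = evaluate_ndcg_half(df, new_cols)
        kept = sorted(new_cols, key=lambda c: -importance[c])[:top_k]
        df.drop(columns=[c for c in new_cols if c not in kept], inplace=True)
        current_num = kept  # survivors feed the next combination round
    return df
```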

In terms of models, we tried LightGBM, Wide&Deep, and sequence models; in the end, given LightGBM's excellent performance on tabular data, we chose LightGBM.

In model training, we use a loss weighted by product popularity to eliminate popularity bias [10]; the loss function L is shown in the following formula:
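The original renders the formula as an image; a popularity-weighted binary cross-entropy consistent with the parameter description below (a reconstruction, not necessarily the exact competition form) is:

$$
L = -\sum_{k} \alpha_k \left[\, \beta\, y_k \log \hat{y}_k + (1 - y_k) \log\left(1 - \hat{y}_k\right) \right]
$$

where $\hat{y}_k$ is the model's predicted click probability for sample $k$ and $y_k$ its label.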

Here the parameter α is inversely proportional to popularity, weakening the weight of popular products and eliminating popularity bias, while the parameter β is the positive-sample weight, used to address sample imbalance.
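In practice such a weighted loss can be realized through per-sample weights; a minimal LightGBM sketch under that assumption (the inverse-log form of α and the value of β are hypothetical choices):

```python
import lightgbm as lgb
import numpy as np

def popularity_weighted_train(X, y, item_popularity, beta=5.0):
    """Encode the popularity-weighted loss as per-sample weights alpha * beta."""
    alpha = 1.0 / np.log1p(item_popularity)        # inverse to target-item popularity
    weights = alpha * np.where(y == 1, beta, 1.0)  # extra weight on positive samples
    train_set = lgb.Dataset(X, label=y, weight=weights)
    params = {"objective": "binary", "metric": "auc", "learning_rate": 0.05}
    return lgb.train(params, train_set, num_boost_round=500)
```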

User preference ranking

Finally, the user's product preference ranking introduces candidate products via i2i from the user's historically clicked products, and then ranks all products so introduced. In this process, as shown in Figure 7, the target item sets are produced by each source item separately, so different source items and different multi-hop i2i relationships may produce the same target item, and we must decide how to aggregate the model scores of a user's duplicate target items. Summing the probabilities directly would amplify popularity bias, while taking the average would dilute some strong signals. In the end, we apply max pooling over a user's duplicate target items and then rank all of the user's target items, which performs well on NDCG@50_half.

To further optimize NDCG@50_half, we post-process the resulting target item scores, increasing the score weight of low-popularity products to further suppress high-popularity ones. This achieved a better NDCG@50_half and is in effect a trade-off between NDCG@50_full and NDCG@50_half. A combined sketch of the aggregation and post-processing follows.
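A minimal sketch of the max-pooling aggregation and popularity post-processing, assuming model scores in [0, 1] (the exponent gamma is a hypothetical knob for the full/half trade-off):

```python
from collections import defaultdict

def rank_user_items(scored_pairs, item_popularity, gamma=0.1, top_k=50):
    """scored_pairs: (target_item, model score) from all source items and
    i2i relationships of one user; item_popularity: item -> click count."""
    pooled = defaultdict(float)
    for target_item, score in scored_pairs:
        pooled[target_item] = max(pooled[target_item], score)  # max pooling
    # up-weight low-popularity items by a mild inverse-popularity factor
    adjusted = {
        item: s / (1.0 + item_popularity.get(item, 0.0)) ** gamma
        for item, s in pooled.items()
    }
    return sorted(adjusted, key=adjusted.get, reverse=True)[:top_k]
```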

Evaluation results

During the generation of i2i candidate samples via multi-hop walks, the hitrates of the various i2i relationships are shown in Table 2. Mixing multiple relationships at the same total candidate length of 1,000 yields a clearly higher hitrate, introducing more unbiased data into the training and candidate sets and thus alleviating the system's selection bias.

Table 2 Hitrates of different i2i relationships

In the end, Aister, formed by the Meituan search advertising team, took first place on every evaluation metric, including NDCG and hitrate. As shown in Table 3, our NDCG@50_half is 6.0% higher than the second-place team's, and our NDCG@50_full is 4.9% higher. The larger margin on NDCG@50_half shows that we optimized the bias elimination problem particularly well.

Table 3 NDCG evaluation results of different participating team solutions

Advertising business application

The search advertising algorithm team is responsible for the search advertising and filtered-list advertising businesses on both the Meituan and Dianping platforms, covering business types such as catering, leisure and entertainment, beauty, and hotels. This breadth of business types brings great space, and great challenges, for algorithm optimization.

Among search advertising problems, data bias is an important and challenging one. Two major data biases exist in the advertising system: position bias and selection bias, and the search advertising algorithm team has made substantial optimizations for both. Position bias means that the click-through rate at a front position is naturally higher than at a rear position. Unlike traditional ways of handling this bias, we introduce the idea of consistency modeling and achieve the consistency objective through flexible deep network design, improving business results.

As for selection bias, the whole ad serving process forms a funnel, as shown in Figure 9, divided into the Matching, Creative-Select, Ranking, and Auction stages, where the candidates of each stage are selected by the previous one. Taking the Ranking stage as an example, the online ranking candidates include all candidates output by the Matching stage, but the ranking model is trained on exposure (Pageview) data already selected by the model, which is only a small subset of the online ranking candidates. This difference between the model's online and offline input data violates the assumption that the modeling distributions are consistent. This selection bias causes two obvious problems:

  1. Inaccurate model prediction: a model learned from exposure samples is biased and inaccurate, leading to poor online prediction, especially for candidate samples whose distribution differs greatly from the historical exposure samples.

  2. A feedback loop that harms the advertising ecosystem: samples selected by the model are exposed, then enter model training and select the next round of exposure samples; the model keeps learning from biased samples, the feedback loop keeps accumulating bias, and the system's selection grows ever narrower, forming a "Matthew effect".

Figure 9 Funnel diagram of advertising system

To solve these estimation and ecosystem problems, we optimize the algorithm through sample generation and multi-stage training. For sample generation, we work on three aspects of data generation and sample selection. First, as shown in Figure 10, we use a Beta-distribution-based exploration algorithm that generates exploration candidates from historical click-through rates and their statistical confidence; the assumption behind the algorithm is that the greater the confidence, the smaller the variance of the click-through rate.

In Figure 10, the horizontal axis is the estimated click-through rate and the vertical axis the probability density; samples generated from the Beta distribution with the parameters in the yellow box have an estimated-CTR distribution close to the real sample distribution, and are used to supplement the exposure data that the model alone would select (a sketch of this exploration follows the figure). Second, we optimize negative samples in combination with random walks, controlling accuracy through sampling algorithms and label optimization. Finally, most training samples are selected by the system's main traffic, and the samples selected after the next model iteration will change significantly; this difference also keeps the accuracy of a small-traffic model in an A/B test from meeting expectations, so we additionally perform data selection targeting the distribution shift of model-selected data.

Figure 10 Beta distribution of different parameters
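A minimal Thompson-sampling-style sketch of the Beta-distribution exploration described above, assuming per-ad click and impression counts (the exact parameterization used online is not given in the source):

```python
import numpy as np

def beta_exploration_scores(clicks, impressions, rng=None):
    """Sample a CTR per ad from Beta(clicks+1, impressions-clicks+1).
    Few impressions mean low confidence and a high-variance sample, giving
    the ad more chance to be explored; well-observed ads concentrate near
    their historical CTR."""
    rng = rng or np.random.default_rng()
    return rng.beta(clicks + 1.0, impressions - clicks + 1.0)

# usage: rank exploration candidates by sampled CTR, not the point estimate
clicks = np.array([2.0, 200.0, 0.0])
impressions = np.array([10.0, 1000.0, 3.0])
print(beta_exploration_scores(clicks, impressions))
```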

In addition, combining the distribution differences among these sample types, we optimize the model with multi-stage training. As shown in Figure 11, we control the training order and parameters according to sample strength, so that the training data better matches the real online candidate distribution. In the end, not only did the CTR prediction model (Ranking stage) and the creative selection model (Creative-Select stage) achieve significant business improvements, the more consistent modeling also turned experiments on bias-heavy problems such as candidate expansion from negative to positive, and the more solid validation methodology laid a good foundation for future optimization.

Figure 11 Multi-stage training based on sample intensity

Summary and outlook

The KDD Cup is a competition closely connected with industry: each year's problems track hot, practical industry issues, and the winning solutions over the years have had a large impact on the industry. For example, the KDD Cup 2012 winning solutions produced the prototypes of FFM (Field-aware Factorization Machine) and XGBoost, both now widely used in industry.

This year's Debiasing track is likewise one of the most challenging problems in today's advertising and recommendation field. This article introduced our first-place solution to KDD Cup 2020 Debiasing. Unlike previous CTR estimation approaches, we use the u2i2i method to convert u2i modeling into i2i modeling, and construct a heterogeneous graph on which multi-hop walks explore more unbiased samples, alleviating selection bias. Along the way, the graph construction, the model's loss function, and the post-processing of predicted scores all introduce popularity penalties to ease popularity bias, finally overcoming the two challenges of selection bias and popularity bias.

This article also introduced the business application of our work on data selection bias in Meituan search advertising. The advertising system had already been optimized for bias problems, and this competition deepened our understanding of research directions for bias. In future work, we hope to apply the bias-optimization experience gained in this competition to further optimize the bias problems in the advertising system and make it fairer.

References

[1] Fairness in Recommender Systems

[2] Singh A, Joachims T. Fairness of exposure in rankings[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 2219-2228.

[3] Stinson C. Algorithms are not Neutral: Bias in Recommendation Systems[J]. 2019.

[4] Ovaisi Z, Ahsan R, Zhang Y, et al. Correcting for Selection Bias in Learning-to-rank Systems[C]//Proceedings of The Web Conference 2020. 2020: 1863-1873.

[5] Wang X, Bendersky M, Metzler D, et al. Learning to rank with selection bias in personal search[C]//Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. 2016: 115-124.

[6] Abdollahpouri H, Burke R, Mobasher B. Controlling popularity bias in learning-to-rank recommendation[C]//Proceedings of the Eleventh ACM Conference on Recommender Systems. 2017: 42-46.

[7] Abdollahpouri H, Mansoury M, Burke R, et al. The impact of popularity bias on fairness and calibration in recommendation[J]. arXiv preprint arXiv:1910.05755, 2019.

[8] Schafer J B, Frankowski D, Herlocker J, et al. Collaborative filtering recommender systems[M]//The adaptive web. Springer, Berlin, Heidelberg, 2007: 291-324.

[9] Zhang S, Tay Y, Yao L, et al. Next item recommendation with self-attention[J]. arXiv preprint arXiv:1808.06414, 2018.

[10] Yao S, Huang B. Beyond parity: Fairness objectives for collaborative filtering[C]//Advances in Neural Information Processing Systems. 2017: 2921-2930.

About the Author

Strong, Mingjian, Hu Ke, Qu Tan, Lei Jun, and others are all from the search advertising algorithm team of the Meituan advertising platform.

----------  END  ----------

Job Offers

Based on the search advertising scenario, the search advertising algorithm team of the Meituan advertising platform explores cutting-edge developments in deep learning, reinforcement learning, artificial intelligence, big data, knowledge graphs, NLP, and computer vision, and explores the value of local life-service e-commerce. The main work directions include:

Triggering strategy: user intent recognition, advertising business data understanding, Query rewriting, deep matching, relevance modeling.

Quality estimation: modeling of advertising quality; estimating click-through rate, conversion rate, average order value, and transaction volume.

Mechanism design: advertising ranking mechanism, bidding mechanism, bid suggestions, traffic estimation, budget allocation.

Creative optimization: intelligent creative design; optimizing display creatives such as advertising images, text, deal listings, and discount information.

Job requirements:

  • More than three years of relevant work experience, with application experience in at least one of CTR/CVR estimation, NLP, image understanding, or mechanism design.

  • Familiar with commonly used machine learning, deep learning, and reinforcement learning models.

  • Excellent logical thinking, passion for solving challenging problems, sensitivity to data, and strength in analyzing and solving problems.

  • Master's degree or above in computer science, mathematics, or a related major.

The following conditions are preferred:

  • Have relevant business experience in advertising/search/recommendation.

  • Have experience in large-scale machine learning.

Interested candidates can send their resumes to: [email protected] (please mark the email subject: Guangping Search Team).

