Recommender Systems in the Era of Large Language Models (LLMs)

This article is part of a series on LLMs; it is a translation of the survey "Recommender Systems in the Era of Large Language Models (LLMs)".

Abstract

With the prosperity of e-commerce and web applications, recommender systems (RecSys) have become an important part of our daily lives, providing personalized suggestions that match user preferences. Although deep neural networks (DNNs) have made significant progress in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, these DNN-based methods still have limitations, such as difficulty in effectively understanding users' interests and capturing textual side information, and limited ability to generalize to various seen/unseen recommendation scenarios and to reason about their predictions. Meanwhile, the emergence of large language models (LLMs) such as ChatGPT and GPT-4 has revolutionized natural language processing (NLP) and artificial intelligence (AI), thanks to their superior capabilities in the fundamental tasks of language understanding and generation, as well as their impressive generalization and reasoning abilities. Consequently, recent research has attempted to harness the power of LLMs to enhance recommender systems. Given the rapid development of this research direction, a systematic overview of existing LLM-empowered recommender systems is urgently needed to provide researchers and practitioners in related fields with an in-depth understanding. Therefore, in this survey, we conduct a comprehensive review of LLM-empowered recommender systems from multiple aspects, including pre-training, fine-tuning, and prompting. More specifically, we first introduce representative methods that leverage the power of LLMs (as feature encoders) to learn user and item representations. We then review recent advanced techniques for enhancing recommender systems with LLMs from three paradigms, namely pre-training, fine-tuning, and prompting.
Finally, we provide a comprehensive discussion of future directions in this emerging field.
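To make the feature-encoder idea from the abstract concrete, here is a minimal sketch of ranking candidate items for a user by embedding similarity. A toy L2-normalized bag-of-words encoder stands in for a real LLM feature encoder (e.g., a BERT-style model that maps text to a dense vector); the function names and example data are illustrative, not from the survey.

```python
import math

def embed(text: str, vocab: list[str]) -> list[float]:
    # Stand-in for an LLM feature encoder: map text to an
    # L2-normalized bag-of-words vector over a shared vocabulary.
    toks = text.lower().split()
    vec = [float(toks.count(w)) for w in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def recommend(user_profile: str, item_descs: dict[str, str]) -> list[str]:
    # Rank items by cosine similarity between the user-profile
    # embedding and each item-description embedding.
    vocab = sorted({w for t in [user_profile, *item_descs.values()]
                    for w in t.lower().split()})
    u = embed(user_profile, vocab)

    def sim(name: str) -> float:
        v = embed(item_descs[name], vocab)
        return sum(a * b for a, b in zip(u, v))

    return sorted(item_descs, key=sim, reverse=True)

user = "enjoys sci-fi space adventure novels"
items = {
    "space opera novel": "a sci-fi space adventure story",
    "cookbook": "recipes for easy home cooking",
}
print(recommend(user, items))  # → ['space opera novel', 'cookbook']
```

In an actual LLM-empowered RecSys, `embed` would be replaced by the language model's encoder, so that items with related but non-overlapping wording (e.g., "spaceship" vs. "rocket") still land close together in the embedding space.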

1 Introduction

2 Related work

3 Deep representation learning for LLM-based recommender systems

4 Pre-training and fine-tuning LLMs for recommender systems

5 Prompting LLMs for recommender systems
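The prompting paradigm frames recommendation as a natural-language task: the user's interaction history and the candidate items are serialized into a prompt, and the LLM is asked to rank the candidates. A minimal zero-shot sketch of such prompt construction, where the template, helper name, and item titles are hypothetical rather than taken from the survey:

```python
def build_rec_prompt(user_history: list[str], candidates: list[str]) -> str:
    # Serialize the user's history and the candidate items into a
    # zero-shot ranking prompt for an LLM (illustrative template).
    history = "; ".join(user_history)
    cands = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        "You are a recommender system.\n"
        f"The user recently interacted with: {history}.\n"
        "Rank the following candidate items for this user:\n"
        f"{cands}\n"
        "Answer with the item numbers in ranked order."
    )

prompt = build_rec_prompt(
    ["The Martian", "Interstellar"],
    ["Gravity", "The Notebook", "Apollo 13"],
)
print(prompt)
```

The resulting string would then be sent to an LLM; in-context (few-shot) variants additionally prepend a handful of worked history-to-ranking examples to the same template.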

6 Future directions

6.1 Hallucination mitigation

6.2 Trustworthy large language models for recommender systems

6.3 Domain-specific LLMs for vertical recommendation scenarios

6.4 User and item retrieval

6.5 Fine-tuning efficiency

6.6 Data augmentation

7 Conclusion

As one of the most advanced artificial intelligence technologies, LLMs have achieved great success in various applications such as molecular discovery and finance, owing to their excellent language understanding and generation capabilities, strong generalization and reasoning abilities, and rapid adaptation to new tasks and domains. Likewise, LLM-empowered recommender systems are constantly evolving to provide high-quality, personalized recommendation services. Given the rapid development of this research topic, a systematic review of existing LLM-empowered recommender systems is urgently needed. To fill this gap, in this survey we provide a comprehensive overview of LLM-empowered RecSys from the perspectives of the pre-training, fine-tuning, and prompting paradigms, in order to give researchers and practitioners in related fields an in-depth understanding. However, research on LLMs for RecSys is still in its early stages, and more systematic and comprehensive studies of LLMs in this field are needed. Therefore, we also discuss some potential future directions in this field.

Source: blog.csdn.net/c_cpp_csharp/article/details/132895378