Conversational recommender systems (CRSs) have garnered increasing attention for their ability to provide personalized recommendations through natural language interactions. Although large language models (LLMs) have shown promise in recommendation owing to their superior language understanding and reasoning capabilities, extracting and utilizing implicit user preferences from conversations remains a formidable challenge. This paper proposes a method that leverages LLMs to extract implicit preferences and explicitly incorporate them into the recommendation process. First, an LLM identifies implicit user preferences from the conversation; these preferences are then refined into fine-grained numerical values by a BERT-based multi-label classifier to improve recommendation precision. The proposed approach is validated on three datasets: the Reddit Movie Dataset (8,413 dialogues), Inspired (825 dialogues), and ReDial (2,311 dialogues). Results show that our approach considerably outperforms traditional CRS methods: with GPT-3.5-turbo and GPT-4, it achieves a 23.3% improvement in Recall@20 on the ReDial dataset and a 7.2% average improvement in recommendation accuracy across all datasets. These findings highlight the potential of LLMs to extract and utilize implicit conversational information, effectively enhancing recommendation quality in CRSs.
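To make the two-stage pipeline concrete, here is a minimal sketch of how the abstract's design could look in code: an LLM step summarizes implicit preferences from a dialogue, and a BERT-based multi-label classifier refines that summary into per-label numerical scores. The function names, the label set, and the use of `bert-base-uncased` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# (1) an LLM extracts implicit preferences from a conversation,
# (2) a BERT-based multi-label classifier maps the extracted text
#     to fine-grained numerical preference scores.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative label set; the paper's actual preference taxonomy may differ.
LABELS = ["action", "comedy", "drama", "horror", "romance", "sci-fi"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
classifier = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid per label
)

def extract_implicit_preferences(dialogue: str) -> str:
    """Placeholder for the LLM step (e.g., a GPT-3.5-turbo / GPT-4
    prompt that summarizes implicit user preferences)."""
    raise NotImplementedError

def preference_scores(preference_text: str) -> dict[str, float]:
    """Refine extracted preference text into per-label scores in [0, 1]."""
    inputs = tokenizer(preference_text, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        logits = classifier(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)  # independent label probabilities
    return {label: float(p) for label, p in zip(LABELS, probs)}
```

In this reading, the resulting score vector would be fed to the downstream recommender as an explicit preference signal; the classifier would need to be fine-tuned on labeled preference data before its outputs are meaningful.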