This study proposes a novel personalized recommendation system leveraging Large Language Models (LLMs) to integrate semantic understanding with user preferences [1]. The system addresses critical challenges in traditional recommendation approaches by harnessing LLMs' advanced natural language processing capabilities. We introduce a framework combining a fine-tuned RoBERTa semantic analysis model with a multi-modal user preference extraction mechanism. The LLM component undergoes domain adaptation using Masked Language Modeling on a corpus of 112,000 user reviews from the MyAnimeList dataset, followed by task-specific fine-tuning using contrastive learning. User preferences are modeled through a weighted combination of explicit ratings, review sentiment, and implicit feedback, incorporating temporal dynamics through a time-decay function. Experimental results demonstrate significant improvements over state-of-the-art baselines, including Matrix Factorization, Neural Collaborative Filtering, BERT4Rec, and LightGCN. Our LLM-powered system achieves an 8.6% increase in NDCG@10 and a 10.5% improvement in Mean Reciprocal Rank compared to the best-performing baseline. Ablation studies reveal the synergistic effect of integrating LLM-based semantic understanding with user preference modeling. Case studies highlight the system's ability to recommend long-tail items and provide cross-genre suggestions, showcasing its capacity for nuanced content understanding. Scalability analysis indicates that while the LLM-based approach has higher initial computational costs, its performance scales comparably to other deep learning models on larger datasets. This research contributes to the field by demonstrating the effectiveness of LLMs in enhancing recommendation accuracy and diversity. Future work will explore advanced LLM architectures, multi-modal data integration, and techniques to improve the computational efficiency and interpretability of recommendations.
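The preference model described above — a weighted combination of explicit ratings, review sentiment, and implicit feedback, scaled by a time-decay function — can be sketched as follows. The abstract does not publish the weights, the decay form, or the input normalization, so the coefficients, the exponential half-life decay, and the function names here are illustrative assumptions only:

```python
import math

# Hypothetical weights and half-life; the paper does not report its
# exact coefficients, so these values are illustrative only.
W_RATING, W_SENTIMENT, W_IMPLICIT = 0.5, 0.3, 0.2
HALF_LIFE_DAYS = 90.0

def time_decay(age_days: float) -> float:
    """Exponential time-decay factor: recent interactions weigh more,
    halving in influence every HALF_LIFE_DAYS days (assumed form)."""
    return math.exp(-math.log(2.0) * age_days / HALF_LIFE_DAYS)

def preference_score(rating: float, sentiment: float,
                     implicit: float, age_days: float) -> float:
    """Weighted combination of explicit rating, review sentiment, and
    implicit feedback, scaled by time decay.
    All inputs are assumed normalized to [0, 1]."""
    base = (W_RATING * rating
            + W_SENTIMENT * sentiment
            + W_IMPLICIT * implicit)
    return base * time_decay(age_days)
```

Under this sketch, a maximally positive interaction today scores 1.0, while the same interaction from 90 days ago scores 0.5, so stale signals fade smoothly rather than being cut off.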