Abstract
Cold-start has become a critical issue for recommendation, especially under sparse user-item interactions. Recent meta-learning-based approaches succeed in alleviating the issue owing to their strong generalization, which allows them to adapt quickly to new tasks under cold-start settings. However, meta-learning-based recommendation models learned with single and sparse ratings easily fall into meta-overfitting, since the one and only rating <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$r_{ui}$</tex-math></inline-formula> to a specific item <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$i$</tex-math></inline-formula> cannot reflect a user's diverse interests under various circumstances (e.g., time, mood, age, etc.); for example, <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$r_{ui}$</tex-math></inline-formula> may equal 1 in the historical dataset but could be 0 in some circumstance. In meta-learning, tasks with such single ratings are called Non-Mutually-Exclusive (Non-ME) tasks, and tasks with diverse ratings are called Mutually-Exclusive (ME) tasks. Fortunately, a meta-augmentation technique has been proposed to relieve meta-overfitting for meta-learning methods by transforming Non-ME tasks into ME tasks, adding noise to labels without changing inputs. Motivated by the meta-augmentation method, in this paper we propose a cross-domain meta-augmentation technique for content-aware recommendation systems (MetaCAR) to construct ME tasks in the recommendation scenario.
Our proposed method consists of two stages: meta-augmentation and meta-learning. In the meta-augmentation stage, we first conduct domain adaptation with a dual conditional variational autoencoder (CVAE) under a multi-view information bottleneck constraint, and then apply the learned CVAE to generate ratings for users in the target domain. In the meta-learning stage, we use both the true and generated ratings to construct ME tasks, which enables the meta-learning recommender to avoid meta-overfitting. Experiments on real-world datasets show that MetaCAR significantly outperforms competing baselines, including cross-domain, content-aware, and meta-learning-based recommendation methods, in coping with the cold-start user issue.
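The core meta-augmentation idea above can be illustrated with a minimal sketch: pair each input with both its observed rating and an alternative label, so the same (user, item) input maps to different outputs across episodes, making the task Mutually-Exclusive. This is a hypothetical toy for binary ratings that simply flips the label; in MetaCAR itself, the alternative ratings would be generated by the learned CVAE rather than by a deterministic flip.

```python
def make_me_task(support):
    """Illustrative sketch only (not the paper's actual algorithm).

    Turn a Non-ME support set -- one fixed binary rating per (user, item)
    pair -- into an ME task by pairing each input with both its true
    rating and a perturbed rating, without changing the inputs.
    """
    augmented = []
    for user, item, rating in support:
        augmented.append((user, item, rating))      # observed rating
        augmented.append((user, item, 1 - rating))  # alternative (noised) rating
    return augmented

# Toy support set for one cold-start user: two observed binary ratings.
task = [("u1", "i1", 1), ("u1", "i2", 0)]
me_task = make_me_task(task)
# Each (user, item) now appears with two different labels,
# so memorizing input->label mappings no longer works.
```

The design point is that only labels are augmented; keeping inputs fixed is what forces the meta-learner to rely on the support set rather than memorizing user-item pairs.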
From: IEEE Transactions on Knowledge and Data Engineering