In collaborative filtering, matrix factorization and collaborative metric learning are challenged by situations where non-preferred items appear so close to a user in the embedding space that they degrade recommendation performance. We call such items ‘potential impostors’. Addressing potential impostors is important because they cause inefficient learning and poor feature extraction. To address this, we propose a novel loss function formulation that improves learning efficiency by actively identifying and handling impostors, leveraging item associations and learning the distribution of negative items. This is crucial for models to differentiate between positive and negative items effectively, even when they are closely aligned in the feature space. Here, a loss function is the optimization objective defined over user–item interaction data, gathered through either implicit or explicit feedback, and it largely determines how well a recommendation algorithm performs. In this paper, we introduce and define the concept of the ‘potential impostor’, highlighting its impact on the quality of learned representations and on algorithmic efficiency. We tackle the limitation of non-metric methods such as the Weighted Approximate-Rank Pairwise (WARP) loss, which struggle to capture item–item similarities, by using a ‘similarity propagation’ strategy with a new loss term. Similarly, we address the fixed-margin inefficiency of Weighted Collaborative Metric Learning (WCML) through density distribution approximation, which moves potential impostors away from the margin for more robust learning. Additionally, we propose a large-scale batch approximation algorithm for increased detection of impostors, coupled with an active learning strategy for improved top-N recommendation performance. Our extensive empirical analysis across five large and diverse datasets demonstrates the effectiveness and feasibility of our methods compared to existing techniques, in terms of improving AUC, reducing the impostor rate, and increasing the average impostor distance. More specifically, our evaluation shows that our two proposed methods outperform existing state-of-the-art techniques, improving AUC by 3.5% and 3.7%, NDCG by 1.0% and 9.1%, and HR by 1.3% and 3.6%, respectively. Similarly, the impostor rate is decreased by 35% and 18%, and the average impostor distance is increased by 33% and 37%, respectively.
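To make the impostor notion concrete, below is a minimal sketch of the standard fixed-margin triplet loss used in collaborative metric learning, where a negative item counts as a potential impostor when it falls inside the margin around a user–positive pair. This illustrates the baseline formulation the paper improves upon, not the proposed loss; the function name, the Euclidean distance, and the margin value are illustrative assumptions.

```python
import torch

def fixed_margin_triplet_loss(user, pos, neg, margin=1.0):
    """CML-style hinge loss over (user, positive, negative) triplets.

    A negative item j is a 'potential impostor' for the pair (u, i) when
    it lies inside the margin, i.e. d(u, j) < d(u, i) + margin; only such
    triplets contribute a non-zero loss.
    """
    d_pos = torch.norm(user - pos, dim=1)   # d(u, i)
    d_neg = torch.norm(user - neg, dim=1)   # d(u, j)
    violation = d_pos - d_neg + margin      # > 0 iff j is an impostor
    impostor_mask = violation > 0           # negatives intruding on the margin
    return torch.relu(violation).mean(), impostor_mask

# Toy usage: 4 triplets in a 16-dimensional embedding space.
u, i, j = (torch.randn(4, 16) for _ in range(3))
loss, mask = fixed_margin_triplet_loss(u, i, j)
print(f"loss={loss.item():.3f}, impostor rate={mask.float().mean().item():.2f}")
```

The fixed margin shared by all triplets is exactly the inefficiency the paper's density distribution approximation is said to relax.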