Owing to its easy accessibility, implicit feedback is widely used in recommender systems. Compared with point-wise and pair-wise learning methods, list-wise ranking methods achieve superior performance for top- \(N\) recommendation. Recent solutions, especially the list-wise methods, simply treat all of a user's interacted items as equally important positives and annotate all of a user's non-interacted items as negatives. For the list-wise approaches, we argue that this annotation scheme of implicit feedback is over-simplified, owing to the sparsity of the feedback data and its lack of fine-grained labels. To address this issue, we revisit the so-called positive and negative samples. First, considering the loss function of list-wise ranking, we theoretically analyze the impact of false positives and false negatives. Second, based on this analysis, we propose a self-adjusting credibility weighting mechanism to re-weight the positive samples, and exploit higher-order relations encoded in the item–item matrix to sample critical negatives. To prevent the introduction of noise, we design a pruning strategy for the critical negatives. Furthermore, to combine the reconstruction losses for the positive samples and the critical negative samples, we develop a simple yet effective VAE framework with a linear structure, which abandons complex non-linear structures. Extensive experiments on six public real-world datasets demonstrate that our VAE* outperforms other VAE-based models by a large margin. Ablation studies further verify the effects of denoising positives and exploring critical negatives.
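To make the negative-sampling idea concrete, the following is a minimal sketch, not the paper's actual procedure: the toy interaction matrix, the scoring rule, and the pruning threshold are all illustrative assumptions. It scores each non-interacted item by its item–item co-occurrence with a user's positives, then prunes weakly related items so that only "critical" negatives remain.

```python
import numpy as np

# Toy binary interaction matrix: rows = users, columns = items.
X = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
], dtype=float)

# Item-item co-occurrence matrix captures higher-order relations
# between items (how often two items are interacted with together).
co = X.T @ X
np.fill_diagonal(co, 0.0)

user = 0
interacted = X[user] > 0

# Score each non-interacted item by its relatedness to the user's
# positives: strongly related yet non-interacted items are treated
# as "critical" negatives.
scores = co[:, interacted].sum(axis=1)
scores[interacted] = -np.inf  # exclude the positives themselves

# Pruning: discard weakly related candidates to avoid injecting
# noise (threshold is a free parameter in this sketch).
threshold = 1.0
critical_negatives = np.where(scores >= threshold)[0]
print(critical_negatives)  # items 2 and 3 for user 0
```

For user 0 (who interacted with items 0 and 1), items 2 and 3 co-occur with those positives across other users, so they survive the pruning, while the unrelated item 4 is discarded.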