Abstract
How to effectively mitigate feedback bias in recommender systems is an important research topic. In this article, we first describe the generation processes of biased and unbiased feedback in recommender systems via two respective causal diagrams, where the difference between them can be regarded as the source of system-induced biases. We then define this difference as a confounding bias and propose a new perspective on debiased representation learning to alleviate it. Specifically, for the case where only biased feedback is available, we derive from the causal diagrams the conditions that must be satisfied to obtain a debiased representation. We then propose a novel framework called debiased information bottleneck (DIB) to optimize these conditions and derive a tractable solution for it. The proposed framework constrains the model to learn a biased embedding vector with independent biased and unbiased components in the training phase, and uses only the unbiased component in the test phase to deliver more accurate recommendations. We further propose a variant of DIB that relaxes the independence between the biased and unbiased components. Finally, we conduct extensive experiments on a public dataset and a real product dataset to verify the effectiveness of the proposed framework.
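To make the described training/test asymmetry concrete, the following is a minimal sketch of the general idea: each user/item embedding is split into an unbiased and a biased component, both components contribute to the training score under a penalty that discourages dependence between them, and only the unbiased component is used at test time. This is not the authors' implementation or the exact DIB objective; all names (DIBSketch, independence_penalty, the 0.1 penalty weight) and the simple cross-correlation surrogate for independence are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DIBSketch(nn.Module):
    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        # Each embedding is split in half: the first half plays the role of the
        # "unbiased" component, the second half the "biased" component.
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.half = dim // 2

    def split(self, emb):
        return emb[:, :self.half], emb[:, self.half:]

    def forward(self, users, items, training=True):
        u_unb, u_bias = self.split(self.user_emb(users))
        i_unb, i_bias = self.split(self.item_emb(items))
        unbiased_score = (u_unb * i_unb).sum(-1)
        if not training:
            # Test phase: only the unbiased component drives the recommendation.
            return unbiased_score
        biased_score = (u_bias * i_bias).sum(-1)
        return unbiased_score, biased_score, (u_unb, u_bias, i_unb, i_bias)


def independence_penalty(a, b):
    # A simple surrogate for the independence constraint: penalize the squared
    # cross-correlation between the two components (weaker than true independence).
    a = a - a.mean(0, keepdim=True)
    b = b - b.mean(0, keepdim=True)
    cov = (a.unsqueeze(2) * b.unsqueeze(1)).mean(0)
    return (cov ** 2).mean()


# Hypothetical usage: one training step on biased click feedback.
model = DIBSketch(num_users=1000, num_items=5000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
users = torch.randint(0, 1000, (64,))
items = torch.randint(0, 5000, (64,))
clicks = torch.randint(0, 2, (64,)).float()

unb, bia, (u_u, u_b, i_u, i_b) = model(users, items, training=True)
bce = nn.functional.binary_cross_entropy_with_logits
loss = bce(unb + bia, clicks) + 0.1 * (
    independence_penalty(u_u, u_b) + independence_penalty(i_u, i_b)
)
opt.zero_grad()
loss.backward()
opt.step()
```

At serving time, calling `model(users, items, training=False)` returns only the unbiased score, mirroring the paper's use of the unbiased component alone in the test phase.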