Abstract

Recommender systems face longstanding challenges in gaining users' trust because their inputs can be unreliable, corrupted by profile injection attacks or other human misbehavior. Traditional solutions focus on leveraging users' social relationships to infer preferences, i.e., recommending items according to the preferences of a user's trusted friends, or on adding random noise to the input to improve the robustness of the recommender system. However, such approaches cannot defend against real-world noise such as fake ratings: the recommender model is generally built upon all user-item interactions, so it incorporates information from fake ratings or spammer groups and neglects the reliability of individual ratings. To address these challenges, we propose an adversarial training approach. In detail, our approach consists of two components: a predictor that infers user preferences, and a discriminator that enforces cohort rating patterns. The predictor applies an encoder-decoder structure to learn shared latent information from users' sparse ratings and trust relationships, while the discriminator pushes the predictor to produce ratings coherent with the cohort rating patterns. Our extensive experiments on three real-world datasets show the advantages of our approach over several competitive baselines.
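The abstract gives no implementation details, but the predictor-discriminator setup it describes follows a familiar adversarial pattern. Below is a minimal PyTorch sketch of one such setup, assuming a predictor that encodes a user's rating vector concatenated with a trust-relationship row, and a discriminator trained to separate observed rating patterns from predicted ones. All module shapes, loss terms, and hyperparameters here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Encoder-decoder: maps sparse ratings plus a trust row to reconstructed ratings."""
    def __init__(self, n_items, n_users, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items + n_users, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, n_items)

    def forward(self, ratings, trust):
        z = self.encoder(torch.cat([ratings, trust], dim=-1))  # shared latent code
        return self.decoder(z)                                 # predicted rating vector

class Discriminator(nn.Module):
    """Scores how well a rating vector matches observed cohort rating patterns."""
    def __init__(self, n_items, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, ratings):
        return self.net(ratings)

# --- one adversarial training step (toy data, illustrative only) ---
n_users, n_items = 32, 100
P, D = Predictor(n_items, n_users), Discriminator(n_items)
opt_p = torch.optim.Adam(P.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce, mse = nn.BCELoss(), nn.MSELoss()

real_ratings = torch.rand(n_users, n_items)                         # stand-in for observed ratings
trust = torch.bernoulli(torch.full((n_users, n_users), 0.05))       # sparse trust matrix

# Discriminator step: distinguish observed rating patterns from predicted ones.
fake = P(real_ratings, trust).detach()
loss_d = (bce(D(real_ratings), torch.ones(n_users, 1)) +
          bce(D(fake), torch.zeros(n_users, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Predictor step: reconstruct ratings while fooling the discriminator,
# which pushes predictions toward cohort-coherent patterns.
pred = P(real_ratings, trust)
loss_p = mse(pred, real_ratings) + bce(D(pred), torch.ones(n_users, 1))
opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```

In this reading, the reconstruction loss keeps predictions close to the user's own observed ratings, while the adversarial term regularizes them toward patterns the discriminator accepts as typical of the cohort, which is what would make isolated fake ratings harder to exploit.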
