Abstract

Constructing imperceptible (realistic) fake samples is critical in adversarial attacks. Because the samples of a recommender system mix discrete and continuous features, traditional gradient-based adversarial attack methods may fail to construct realistic fake samples. Meanwhile, most recommendation models adopt click-through rate (CTR) predictors, which are usually black-box deep models that take discrete features as input. Efficiently constructing realistic fake samples for black-box recommender systems therefore remains challenging. In this article, we propose CTRAttack, a hierarchical adversarial attack method against black-box CTR models that generates realistic fake samples. To better train the generation network, the weights of its embedding layer are shared with those of the substitute model, and both a similarity loss and a classification loss are used to update the generation network. To ensure that the discrete features of the generated fake samples all correspond to real values, we first use the similarity loss to keep the distribution of the generated perturbed samples sufficiently close to that of the real features, and then apply a nearest neighbor search to retrieve the most appropriate replacement for each non-existent discrete feature from the candidate instance set. Extensive experiments demonstrate that CTRAttack not only effectively attacks black-box recommender systems but also improves the robustness of these models while maintaining prediction accuracy.
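Below is a minimal PyTorch sketch of one generator update step along the lines the abstract describes. The architectures, the 0.1 perturbation scale, the equal loss weighting, and all names (SubstituteCTR, Generator, NUM_VALUES, and so on) are illustrative assumptions, not the paper's actual implementation: a generation network perturbs samples in the embedding space it shares with the substitute model, is trained with a similarity loss plus a classification (attack) loss, and each perturbed field embedding is then mapped to its nearest real feature value so every discrete feature of the fake sample exists.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: one shared vocabulary of discrete feature values.
NUM_VALUES, EMB_DIM, N_FIELDS, BATCH = 1000, 16, 8, 32

# Embedding table shared between the substitute CTR model and the generator,
# mirroring the weight sharing described in the abstract.
shared_embedding = nn.Embedding(NUM_VALUES, EMB_DIM)

class SubstituteCTR(nn.Module):
    """Stand-in for the substitute model of the black-box CTR predictor."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N_FIELDS * EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, emb):                # emb: (B, F, D) field embeddings
        return self.mlp(emb.flatten(1)).squeeze(-1)  # CTR logit, shape (B,)

class Generator(nn.Module):
    """Generation network: adds a small perturbation in embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FIELDS * EMB_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_FIELDS * EMB_DIM), nn.Tanh())

    def forward(self, emb):
        delta = self.net(emb.flatten(1)).view_as(emb)
        return emb + 0.1 * delta           # 0.1 scale is an assumption

substitute = SubstituteCTR()
generator = Generator()
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

ids = torch.randint(0, NUM_VALUES, (BATCH, N_FIELDS))  # real discrete samples
clean = shared_embedding(ids).detach()
perturbed = generator(clean)

# Similarity loss keeps perturbed embeddings near the real ones; the
# classification loss pushes the substitute model toward the attacker's label.
sim_loss = F.mse_loss(perturbed, clean)
target = torch.zeros(BATCH)                # e.g., force "no click" predictions
cls_loss = F.binary_cross_entropy_with_logits(substitute(perturbed), target)
loss = cls_loss + 1.0 * sim_loss           # equal weighting is an assumption
opt.zero_grad()
loss.backward()
opt.step()

# Nearest neighbor retrieval: snap each perturbed field embedding to the
# closest real feature value, so every discrete feature of the fake sample
# actually exists in the candidate set.
with torch.no_grad():
    candidates = shared_embedding.weight               # (V, D) candidate table
    dists = torch.cdist(perturbed.reshape(-1, EMB_DIM), candidates)
    fake_ids = dists.argmin(dim=1).view(BATCH, N_FIELDS)
```

Sharing the embedding table means the similarity loss is measured in the same space the substitute model scores, so perturbations that look realistic to the generator are also plausible inputs to the CTR predictor, and the final nearest-neighbor step guarantees that every emitted feature ID is a real one.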
