Most federated learning-based recommender systems allow clients to access a well-trained, high-quality model locally, which gives adversaries the opportunity to infringe the legitimate copyright of the model. In response, we study an emerging and important problem, i.e., copyright protection for federated recommendation models, which has not yet been addressed by either the federated learning or the recommender systems community. We propose the first backdoor-based ownership verification scheme for federated recommendation, called OVFR, which enables the server to claim ownership of a given suspicious recommendation model. Firstly, we propose to generate a trigger set tailored to recommendation scenarios. In particular, we generate some fake users and items, and then construct a set of fake users with fake interaction records as a trigger set. Moreover, we ensure that the popularity of the fake items follows a long-tailed distribution so that the incorporated watermark remains effective. To provide robustness, we propose two different hybrid strategies that make the embeddings of the fake items similar to those of the real items. Secondly, we focus on effectively learning from the trigger set in recommendation scenarios. In particular, we design a mean square error (MSE) loss function for incorporating the backdoor-based watermark into the item embeddings, since item embeddings are often more valuable and easier to access than the other parameters of a federated recommendation model. We further design a contrastive loss function to reduce the risk of the fake items being detected. Extensive experiments on three public datasets show the effectiveness of our OVFR in terms of ownership verification, model performance, and robustness.
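The trigger-set construction described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the function name, parameters, and the choice of a Zipf-style distribution to realize the long-tailed fake-item popularity are all our assumptions.

```python
import numpy as np

def build_trigger_set(num_fake_users, num_fake_items, interactions_per_user,
                      zipf_exponent=1.5, seed=0):
    """Construct fake users with fake interaction records whose fake-item
    popularity follows a long-tailed (Zipf-like) distribution.
    All names and parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Long-tailed popularity: P(fake item of rank r) proportional to r^(-s).
    ranks = np.arange(1, num_fake_items + 1)
    probs = ranks.astype(float) ** (-zipf_exponent)
    probs /= probs.sum()
    trigger_set = {}
    for u in range(num_fake_users):
        # Each fake user's interaction record is a sample of fake items
        # drawn (without replacement) under the long-tailed popularity.
        items = rng.choice(num_fake_items, size=interactions_per_user,
                           replace=False, p=probs)
        trigger_set[f"fake_user_{u}"] = sorted(items.tolist())
    return trigger_set
```

Under this construction a few head items appear in most fake interaction records while the tail items appear rarely, mimicking the popularity skew of real recommendation data.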