Abstract

Quality of Service (QoS) prediction plays an important role in selecting the optimal cloud service for users, and how to protect users' privacy while maintaining high prediction accuracy has become a focus of attention in service computing. Although federated learning (FL) methods have been widely applied to protect user privacy, a federated learning model attacked by malicious users may produce wrong prediction results. To protect both user privacy and prediction-model security, we propose a double-security-guaranteed matrix factorization model named DSGMF. In this model, we design a global gradient allocation method based on contribution-based rewards. Meanwhile, to identify and remove potential free-riders, we analyze the free-rider attack and employ a reputation-based detection method. Our proposed model is evaluated on a real-world QoS dataset, and the experimental results validate the effectiveness of our approach.
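To make the abstract's idea concrete, the following is a minimal, hypothetical sketch (not the authors' DSGMF implementation) of federated matrix factorization for a QoS matrix with a reputation-based filter against free-riders. All names, the cosine-similarity reputation proxy, the threshold, and the synthetic data are illustrative assumptions; the paper's actual contribution-based reward and detection schemes may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank QoS matrix: n_users x n_services (illustrative data).
n_users, n_services, rank = 12, 8, 3
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_services, rank))
Q = U_true @ V_true.T  # "observed" QoS values, fully observed for simplicity

# Each federated client owns one user's QoS row and keeps it private,
# sharing only its gradient of the global service-factor matrix V.
U = rng.normal(scale=0.1, size=(n_users, rank))   # local user factors
V = rng.normal(scale=0.1, size=(n_services, rank))  # shared service factors
lr = 0.05
reputation = np.ones(n_users)  # per-client reputation score
free_riders = {3, 7}           # hypothetical clients that do no real work

for _ in range(60):
    grads = []
    for u in range(n_users):
        if u in free_riders:
            # Free-riders skip local training and upload uninformative noise.
            grads.append(rng.normal(scale=0.01, size=V.shape))
            continue
        err = Q[u] - U[u] @ V.T            # per-service residual for user u
        U[u] += lr * err @ V               # local user-factor update
        grads.append(np.outer(err, U[u]))  # gradient w.r.t. shared V
    mean_g = np.mean(grads, axis=0)
    for u in range(n_users):
        # Reputation proxy (an assumption): cosine similarity between a
        # client's update and the round's average update; noise uploads
        # score near zero, so free-riders' reputation decays.
        sim = np.sum(grads[u] * mean_g) / (
            np.linalg.norm(grads[u]) * np.linalg.norm(mean_g) + 1e-12)
        reputation[u] = 0.9 * reputation[u] + 0.1 * sim
    # Aggregate only updates from clients whose reputation is adequate.
    trusted = [g for u, g in enumerate(grads) if reputation[u] > 0.2]
    V += lr * np.mean(trusted, axis=0)

honest = [u for u in range(n_users) if u not in free_riders]
print("mean honest reputation:", np.mean(reputation[honest]))
print("mean free-rider reputation:", np.mean(reputation[list(free_riders)]))
```

In this sketch, honest clients' gradients share direction with the aggregate and keep a positive reputation, while free-riders' noise uploads drift toward zero similarity and are excluded from aggregation; a contribution-based reward scheme could likewise allocate the global gradient in proportion to these scores.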
