Abstract

Federated learning (FL) is widely used in edge-cloud collaborative training because of its distributed architecture and its privacy-preserving property of never sharing local data. FLTrust, a state-of-the-art FL defense, is a federated learning defense system guided by a trusted root dataset. However, we find that FLTrust is not as robust as claimed. We therefore study poisoning attacks against the FLTrust defense in the edge-collaboration scenario. Under FLTrust's trust-guided aggregation rule, model updates from participants that deviate significantly from the direction of the root gradient are discarded, which blunts the poisoning effect on the global model. To overcome this, we construct malicious model updates that deviate from the trusted gradient as far as possible while still passing FLTrust's aggregation rule. First, we use rotations of high-dimensional vectors about an axis to construct malicious vectors with a fixed orientation. Second, we construct malicious vectors by gradient inversion, yielding an efficient and fast attack. Finally, we construct a fixed-direction malicious vector by optimizing random noise. Experimental results show that our attack reduces model accuracy by 20%, severely undermining the model's usability, and succeeds hundreds of times faster than the adaptive attack on FLTrust.
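The constraint the attack exploits is FLTrust's trust score, the ReLU of the cosine similarity between a client update and the server's root gradient: any update whose cosine with the root gradient is positive, however small, survives aggregation. The sketch below illustrates the first construction (rotating a high-dimensional vector to a fixed, nearly orthogonal orientation) under that constraint. It is a minimal illustration, not the authors' code; the function name, the `margin` parameter, and the use of NumPy are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): craft a malicious
# update that stays just inside FLTrust's ReLU(cosine) filter.
import numpy as np

def craft_malicious_update(g0, margin=0.01, rng=None):
    """Rotate the root gradient g0 by (pi/2 - margin) radians toward a
    random orthogonal direction, so cos(g_mal, g0) = sin(margin) > 0:
    the update keeps a small positive trust score under FLTrust's
    ReLU(cosine) weighting while deviating almost 90 degrees from g0."""
    rng = rng or np.random.default_rng()
    u = g0 / np.linalg.norm(g0)            # unit vector along the root gradient
    r = rng.standard_normal(g0.shape)
    r -= r.dot(u) * u                      # project out the component along g0
    v = r / np.linalg.norm(r)              # unit vector orthogonal to g0
    theta = np.pi / 2 - margin             # rotation angle, just short of 90 deg
    g_mal = np.cos(theta) * u + np.sin(theta) * v
    return g_mal * np.linalg.norm(g0)      # FLTrust rescales accepted updates to ||g0||

# Example: the cosine similarity is positive but near zero (~ sin(0.01)).
g0 = np.random.default_rng(0).standard_normal(1000)
g_mal = craft_malicious_update(g0)
print(np.dot(g_mal, g0) / (np.linalg.norm(g_mal) * np.linalg.norm(g0)))
```

Because FLTrust rescales every accepted update to the root gradient's norm, only the update's direction matters; the crafted vector's cosine with the root gradient equals sin(margin), so its trust weight is positive while its direction is nearly orthogonal to the trusted one.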
