Abstract

Federated learning is widely used as a fraud detection framework in financial risk management because it improves model accuracy without exchanging training data. One of the challenges in federated learning is the GAN-based poisoning attack, an intractable type of poisoning attack that degrades the accuracy of the global model and leaks privacy. Most existing defenses against GAN-based poisoning attacks suffer from three problems: 1) dependence on validation datasets; 2) inability to handle incremental poisoning attacks; and 3) privacy leakage. To address these problems, we present a privacy-aware and incremental defense (PID) method that detects malicious participants and protects privacy. In PID, we accumulate the offsets of each participant's model parameters over all epochs so far to represent the moving tendency of the model parameters; based on these accumulations, adversaries can be distinguished from normal participants even under an incremental poisoning attack. We also use multiple trust domains to reduce the rate of misjudging benign participants as adversaries. Moreover, differentiated differential privacy is applied before the global model is sent out, protecting the privacy of participants' training datasets. Experiments conducted on two real-world datasets in a financial fraud detection scenario demonstrate that PID reduces the fallout of adversary detection (the rate of misjudging benign participants as adversaries) by at least 51.1% and improves the speed of detecting all malicious participants by at least 33.4% compared with two popular defense methods. In addition, the privacy preservation of PID is shown to be effective.
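The abstract describes two mechanisms: accumulating per-participant parameter offsets across epochs to expose incremental poisoning, and perturbing the global model before it is broadcast. The sketch below illustrates only the general shape of these ideas under stated assumptions; the class `OffsetAccumulator`, the median-based flagging rule, and the plain Gaussian noise stand-in are illustrative choices, not the paper's actual PID algorithm or its differentiated differential privacy calibration.

```python
import numpy as np

# Illustrative sketch only: the offset definition, the median-based flagging
# rule, and the noise mechanism are assumptions, not the paper's PID method.

class OffsetAccumulator:
    """Accumulate each participant's per-epoch parameter offsets so the
    long-run moving tendency of its updates can be inspected."""

    def __init__(self, num_participants: int, dim: int):
        self.cumulative = np.zeros((num_participants, dim))

    def update(self, global_model: np.ndarray, local_models: list) -> None:
        """Add this epoch's offset (local model minus current global model)
        to each participant's running accumulation."""
        for i, w in enumerate(local_models):
            self.cumulative[i] += w - global_model

    def flag_adversaries(self, factor: float = 3.0) -> np.ndarray:
        """Flag participants whose accumulated offset norm drifts far from
        the median tendency (a hypothetical threshold rule)."""
        norms = np.linalg.norm(self.cumulative, axis=1)
        return np.where(norms > factor * np.median(norms))[0]

def perturb_global_model(global_model: np.ndarray, sigma: float) -> np.ndarray:
    """Add Gaussian noise before broadcasting the global model; a generic
    Gaussian mechanism stands in here for the paper's differentiated
    differential privacy, whose per-participant calibration is not
    specified in the abstract."""
    return global_model + np.random.normal(0.0, sigma, size=global_model.shape)
```

Because the accumulation spans all epochs so far, a participant that poisons in small increments still drifts away from the benign cluster over time, which is the intuition behind detecting incremental attacks from accumulated offsets rather than from any single round's update.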
