Abstract

Effective Bayesian network structure learning algorithms, such as the Peter and Clark (PC) algorithm, must be designed and used with data integrity as a top priority. Federated learning is a distributed learning technology that allows deep learning models to be built jointly by thousands of participants. This research proposes a technique that integrates the PC algorithm with federated learning to detect data poisoning attacks and to enhance secure data transmission and routing. The data integrity of the network is likewise improved through the convergence of the PC algorithm. Attacks on the established secure data transmission are detected using Bayesian adversarial federated learning. We also present an optimization-based model poisoning approach that introduces adversarial neurons into the redundant areas of a neural network, identified by assessing model capacity. Although these redundant neurons have little relevance to the primary federated learning task, they are crucial for poisoning attacks. Numerical experiments show that the proposed approach can bypass defense mechanisms and achieve a high attack success rate.
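
As a rough illustration of the redundant-neuron idea summarized above, the sketch below scores the hidden units of a client model by their mean absolute activation on clean data, treats the least active units as spare capacity, and then optimizes only the weights touching those units on attacker-chosen data. The toy MLP, the activation-based scoring rule, and the training details are assumptions made for illustration, not the paper's implementation.

import torch
import torch.nn as nn

# Toy client model: the layer indices below (model[0], model[2]) refer to
# this specific Sequential stack and are illustrative only.
hidden = 64
model = nn.Sequential(nn.Linear(20, hidden), nn.ReLU(), nn.Linear(hidden, 2))

def redundant_units(model, clean_x, keep_ratio=0.1):
    """Indices of hidden units with the smallest mean |activation| on clean data,
    used here as a stand-in for 'low relevance to the primary task'."""
    with torch.no_grad():
        acts = torch.relu(model[0](clean_x)).abs().mean(dim=0)
    k = max(1, int(keep_ratio * acts.numel()))
    return torch.topk(acts, k, largest=False).indices

def craft_poisoned_update(model, poison_x, poison_y, idx, steps=100, lr=1e-2):
    """Fine-tune only the incoming and outgoing weights of the redundant units
    so the poison occupies spare capacity and benign weights stay untouched."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    n_hidden = model[0].out_features
    mask_in = torch.zeros_like(model[0].weight); mask_in[idx, :] = 1.0
    mask_out = torch.zeros_like(model[2].weight); mask_out[:, idx] = 1.0
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(poison_x), poison_y)
        loss.backward()
        model[0].weight.grad *= mask_in          # update only redundant rows
        model[0].bias.grad[~torch.isin(torch.arange(n_hidden), idx)] = 0.0
        model[2].weight.grad *= mask_out         # and their outgoing weights
        model[2].bias.grad.zero_()
        opt.step()
    return model.state_dict()                    # poisoned update sent to the server

# Illustrative usage with random stand-in data.
clean_x = torch.randn(256, 20)
poison_x, poison_y = torch.randn(32, 20), torch.zeros(32, dtype=torch.long)
idx = redundant_units(model, clean_x)
update = craft_poisoned_update(model, poison_x, poison_y, idx)

Because the gradients of all other parameters are masked out, the resulting update changes the model's behaviour mainly on the attacker's inputs, which is the property the abstract attributes to poisoning through redundant neurons.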
