Abstract

Federated learning (FL) is a privacy-preserving learning paradigm that offers a practical solution for protecting distributed data. Although privacy-preserving FL based on homomorphic encryption (HE-PPFL) resists gradient leakage attacks while ensuring the accuracy of aggregation results, its widespread adoption in blockchain privacy preservation is hindered by its reliance on a trusted key generation center and secure transfer channels. Conversely, coverless steganography schemes effectively ensure the covert transmission of sensitive information across insecure channels, but their lossy extraction process makes them incompatible with HE-PPFL. To address these challenges, we present a decentralized federated learning privacy-preserving framework based on the Lifted ElGamal threshold decryption cryptosystem. We introduce a reversible steganography method tailored to safeguarding gradient privacy, together with a lightweight, secure blind aggregation algorithm built on the Raft protocol, which protects gradient privacy while substantially reducing computational overhead. Finally, we provide rigorous theoretical proofs of the security and correctness of the proposed scheme. Experimental results on four public datasets demonstrate that our scheme achieves 100% extraction accuracy without requiring lossless methods, while reducing the computational cost of ciphertext gradient aggregation by at least three orders of magnitude. The FedSteg framework is publicly accessible at https://github.com/Xumeili/FedSteg. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC.
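
As an illustration of the additively homomorphic aggregation the abstract refers to, the sketch below shows how lifted (exponential) ElGamal allows encrypted gradient values to be summed without decrypting them individually. This is a minimal, self-contained example with toy parameters, not the FedSteg implementation: the modulus, generator, and single-key decryption (rather than the paper's threshold decryption across Raft nodes) are simplifying assumptions.

```python
# Minimal sketch of additively homomorphic lifted ElGamal (illustrative only).
import random

p = 30803   # toy prime modulus (hypothetical parameter; real deployments use much larger groups)
g = 2       # assumed generator of a large subgroup mod p

sk = random.randrange(1, p - 1)   # secret key; in FedSteg this would be held jointly via threshold shares
pk = pow(g, sk, p)                # public key h = g^sk mod p

def encrypt(m):
    """Lifted ElGamal: the message sits in the exponent, so ciphertexts add homomorphically."""
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p)

def add(c1, c2):
    """Component-wise multiplication of ciphertexts corresponds to addition of plaintexts."""
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def decrypt(c):
    """Recover g^m, then solve a small discrete log (quantized gradients keep m small)."""
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p   # strip the mask h^r using Fermat's little theorem
    m, acc = 0, 1
    while acc != gm:
        acc = (acc * g) % p
        m += 1
    return m

# Two quantized gradient contributions are aggregated entirely in ciphertext space.
aggregate = add(encrypt(7), encrypt(5))
assert decrypt(aggregate) == 12
```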
