With the rapid development of artificial intelligence, federated learning (FL) has emerged as a way to train models effectively while protecting data privacy. However, when homomorphic encryption (HE) is used for privacy protection, FL faces challenges concerning the integrity of HE ciphertexts. In an HE-based privacy-preserving FL framework, the public availability of the public key and the additive homomorphic property of the HE algorithm pose serious threats to the integrity of the ciphertext of FL's aggregated results. This paper is the first to employ covert communication for this purpose: using the lossless additive homomorphic property of the Paillier algorithm, each client embeds the hash value of the aggregated-result ciphertext it received into the ciphertext of its local model parameters. When the server receives the ciphertext of the local model parameters, it extracts and verifies the hash value to determine whether the ciphertext of the FL aggregated results has been tampered with. We also use chaotic sequences to select the embedding positions, further enhancing the concealment of the scheme. Experimental results show that the proposed scheme passes Welch's t-test, the K–L divergence test, and the K–S test, confirming that ciphertexts carrying covert information are statistically indistinguishable from normal ciphertexts and that the scheme effectively safeguards the integrity of FL's aggregated ciphertext results. The channel capacity of the scheme reaches up to 512 bits per round, higher than that of other FL-based covert channels.
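The core mechanism the abstract describes, embedding covert bits into a Paillier ciphertext via its additive homomorphic property, can be illustrated with a minimal sketch. This is not the paper's implementation: the primes, the bit position, and the example weight are all illustrative assumptions (real deployments use 2048-bit-plus moduli and chaotic-sequence-selected positions), and the hash is reduced to a single bit for brevity.

```python
import hashlib
import secrets
from math import gcd

# Toy Paillier keypair (small illustrative primes; NOT secure).
p, q = 1000003, 1000033
n = p * q
n2 = n * n
g = n + 1                       # standard choice g = n + 1
lam = (p - 1) * (q - 1)         # phi(n); suffices in place of lcm here
mu = pow(lam, -1, n)            # decryption scaling factor

def encrypt(m: int) -> int:
    """Paillier encryption: c = g^m * r^n mod n^2."""
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Paillier decryption via L(c^lam mod n^2) * mu mod n."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# One covert bit derived from the hash of the aggregated-result
# ciphertext (full scheme embeds the whole hash value).
agg_ct = b"aggregated-result ciphertext bytes"   # stand-in payload
hash_bit = hashlib.sha256(agg_ct).digest()[0] & 1

weight = 123456                  # hypothetical quantized local parameter
POS = 20                         # embedding bit position (illustrative;
                                 # the paper selects it chaotically)

# Additive homomorphism: multiplying ciphertexts adds plaintexts,
# so the bit is embedded without ever decrypting the weight.
c = encrypt(weight)
c_embedded = (c * encrypt(hash_bit << POS)) % n2

# Server side: decrypt, extract the bit, losslessly recover the weight.
m = decrypt(c_embedded)
extracted_bit = (m >> POS) & 1
recovered_weight = m - (extracted_bit << POS)
print(extracted_bit == hash_bit, recovered_weight == weight)
```

Because the embedding is a homomorphic addition of a fresh random ciphertext, the modified ciphertext is re-randomized, which is consistent with the abstract's claim that covert-carrying ciphertexts are statistically indistinguishable from normal ones.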