Abstract

The protection of private user data has long been a focus of AI security. Training machine learning models relies on large amounts of user data, yet such data often exist as isolated islands that cannot be integrated under security and legal constraints. As a result, the large-scale real-world application of image steganalysis algorithms remains unsatisfactory due to the following challenges. First, it is difficult to aggregate all of the scattered steganographic images to train a robust classifier. Second, even if the images are encrypted, participants do not want unrelated parties to peek into the hidden information, which would disclose private data. Finally, different participants are often unable to train models tailored to their own needs. In this paper, we introduce a novel framework, referred to as FedSteg, which trains a secure, personalized distributed model through federated transfer learning to achieve secure image steganalysis. Extensive experiments on detecting several state-of-the-art steganographic methods, i.e., WOW, S-UNIWARD, and HILL, validate that FedSteg achieves improvements over traditional non-federated steganalysis approaches. Moreover, FedSteg is highly extensible and can easily be applied to various large-scale secure steganographic recognition tasks.
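For readers unfamiliar with the federated setting described above, the following is a minimal, hypothetical sketch of the weighted model-averaging step that federated learning frameworks generally build on: each participant trains locally and shares only model weights, never its raw steganographic images. All names, shapes, and learning-rate values here are illustrative assumptions, not FedSteg's actual algorithm or API.

```python
import numpy as np

def local_update(weights, grad, lr=0.1):
    """One simulated local SGD step on a participant's private data."""
    return [w - lr * g for w, g in zip(weights, grad)]

def federated_average(client_weights, client_sizes):
    """Aggregate client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Simulate three participants, each holding a private gradient
# computed from data that never leaves the participant.
global_w = [np.zeros(4)]
clients = [([np.ones(4) * s], n) for s, n in [(1.0, 100), (2.0, 50), (3.0, 50)]]
updated = [local_update(global_w, grad) for grad, _ in clients]
global_w = federated_average(updated, [n for _, n in clients])
print(global_w[0])  # weighted average of the three local updates
```

Only the averaged weights cross the network; the isolated-island data described in the abstract stays local, which is the core privacy property the paper leverages.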
