Abstract

Deep learning is widely used in the medical field owing to its high accuracy in medical image classification and biological applications. However, collaborative deep learning carries a serious risk of information leakage: attacks based on deep convolutional generative adversarial networks can defeat existing privacy protection methods, and the consequences of such leakage are especially severe for medical data. This paper proposes a privacy protection method based on deep convolutional generative adversarial networks (DCGANs) to protect the information used in collaborative deep learning training and to enhance its stability. The proposed method encrypts deep network parameters during transmission. By placing buried (instrumentation) points in the network to detect a generative adversarial network (GAN) attack and adjusting the training parameters accordingly, training of the GAN-based attack model is rendered ineffective and the information is effectively protected.
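
The following is a minimal sketch, not the paper's implementation, of the two mechanisms the abstract names: encrypting model parameters before transmission and running a buried-point check that invalidates a suspected GAN-based attack update. It assumes Python with NumPy and the cryptography library's Fernet symmetric cipher; the names encrypt_parameters, decrypt_parameters, detect_gan_attack, the shared-key handling, and the norm-based detection criterion are illustrative assumptions only.

import pickle

import numpy as np
from cryptography.fernet import Fernet

# Shared symmetric key, assumed to be distributed out of band (illustrative).
SHARED_KEY = Fernet.generate_key()
cipher = Fernet(SHARED_KEY)


def encrypt_parameters(params):
    """Serialize and encrypt a participant's model parameters before upload."""
    return cipher.encrypt(pickle.dumps(params))


def decrypt_parameters(blob):
    """Decrypt and deserialize parameters received from a participant."""
    return pickle.loads(cipher.decrypt(blob))


def detect_gan_attack(update, threshold=10.0):
    """Hypothetical buried-point check: flag updates whose overall norm is anomalous.

    The paper's actual detection criterion is not given in the abstract; this
    norm check merely stands in for whatever statistic the buried point monitors.
    """
    norm = sum(float(np.linalg.norm(v)) for v in update.values())
    return norm > threshold


# Example round: a participant encrypts its update; the aggregator decrypts it,
# runs the buried-point check, and drops suspicious updates so that GAN-based
# attack training cannot make progress.
local_update = {"w": np.random.randn(4, 4) * 0.01, "b": np.zeros(4)}
blob = encrypt_parameters(local_update)
received = decrypt_parameters(blob)
if detect_gan_attack(received):
    received = None  # invalidate the update instead of aggregating it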
