Abstract
As deep learning evolves, neural network structures become increasingly sophisticated, bringing a series of new optimisation challenges. For example, deep neural networks (DNNs) are vulnerable to a variety of attacks. Training neural networks under privacy constraints is a method to alleviate privacy leakage, and one way to do this is to add noise to the gradient. However, existing optimisers suffer from weak convergence in the presence of increased noise during training, which leads to low robustness. To stabilise and improve the convergence of DNNs, the authors propose a neural dynamics (ND) optimiser, which is inspired by the zeroing neural dynamics originating from zeroing neural networks. The authors first analyse the relationship between DNNs and control systems. Then, the authors construct the ND optimiser to update network parameters. Moreover, the proposed ND optimiser alleviates the non-convergence problem that may arise when noise is added to the gradient in different scenarios. Furthermore, experiments are conducted on different neural network structures, including ResNet18, ResNet34, Inception-v3, MobileNet, and long short-term memory (LSTM) networks. Comparative results using the CIFAR, YouTube Faces, and R8 datasets demonstrate that the ND optimiser improves the accuracy and stability of DNNs under noise-free and noise-polluted conditions. The source code is publicly available at https://github.com/LongJin-lab/ND.
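To make the noise-polluted training setting mentioned in the abstract concrete, the sketch below adds zero-mean Gaussian noise to the gradients before each parameter update. It is a generic PyTorch illustration of gradient noise injection under assumed scaffolding (the `train_one_epoch` helper, `noise_std`, and the training loop are hypothetical), not the authors' ND optimiser; their actual method and experimental setup are in the linked repository.

```python
import torch

# Minimal sketch: perturb gradients with zero-mean Gaussian noise before the
# optimiser step (the noise-polluted condition discussed in the abstract).
# All names here are illustrative placeholders, not the paper's configuration.
def train_one_epoch(model, loader, criterion, optimizer, noise_std=0.01, device="cpu"):
    model.train()
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        # Add Gaussian noise to each parameter's gradient (assumed noise model).
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is not None:
                    p.grad.add_(torch.randn_like(p.grad) * noise_std)
        optimizer.step()
```

Under this kind of perturbation, a more robust optimiser is expected to keep the parameter updates convergent despite the corrupted gradient signal, which is the behaviour the paper evaluates for the ND optimiser against standard baselines.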