Abstract

As deep learning evolves, neural network structures become increasingly sophisticated, bringing a series of new optimisation challenges. For example, deep neural networks (DNNs) are vulnerable to a variety of attacks. Training neural networks under privacy constraints is one way to alleviate privacy leakage, and a common approach is to add noise to the gradient. However, existing optimisers converge poorly in the presence of the increased noise introduced during training, which results in low optimiser robustness. To stabilise and improve the convergence of DNNs, the authors propose a neural dynamics (ND) optimiser, inspired by the zeroing neural dynamics that originate from zeroing neural networks. The authors first analyse the relationship between DNNs and control systems. Then, the authors construct the ND optimiser to update the network parameters. Moreover, the proposed ND optimiser alleviates the non-convergence problem that may arise when noise from different scenarios is added to the gradient. Furthermore, experiments are conducted on different neural network structures, including ResNet18, ResNet34, Inception-v3, MobileNet, and the long short-term memory (LSTM) network. Comparative results on the CIFAR, YouTube Faces, and R8 datasets demonstrate that the ND optimiser improves the accuracy and stability of DNNs under both noise-free and noise-polluted conditions. The source code is publicly available at https://github.com/LongJin-lab/ND.
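For readers unfamiliar with the noise-polluted setting the abstract refers to, the sketch below illustrates injecting Gaussian noise into gradients during a single training step. It is a minimal illustration only: the toy model, the noise level, and the plain SGD update are assumptions for demonstration, since the ND optimiser's actual update rule is defined in the paper and the linked repository, not here.

```python
import torch
import torch.nn as nn

# Minimal sketch of training with noise-polluted gradients.
# The plain SGD step is a stand-in for the ND optimiser's update rule,
# which is specified in the paper and the linked repository.

torch.manual_seed(0)
model = nn.Linear(10, 2)           # toy model standing in for a DNN
loss_fn = nn.CrossEntropyLoss()
lr, noise_std = 0.1, 0.01          # assumed hyperparameters for illustration

x = torch.randn(32, 10)            # dummy input batch
y = torch.randint(0, 2, (32,))     # dummy labels

loss = loss_fn(model(x), y)
loss.backward()

with torch.no_grad():
    for p in model.parameters():
        # Gradient polluted by additive Gaussian noise (e.g. for privacy).
        noisy_grad = p.grad + noise_std * torch.randn_like(p.grad)
        p -= lr * noisy_grad        # stand-in SGD update on the noisy gradient
        p.grad = None
```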