Abstract

Deep neural networks are affected by various types of noise in different scenarios. Traditional deep neural networks typically update their parameter weights with gradient descent algorithms; when the gradient decreases into a certain range, the network can easily become trapped in a local optimum. Although momentum and related methods can escape local optima in some scenarios, they still have limitations that greatly reduce their effectiveness in practical applications. To address these problems, a two-stream neural network with different gradient update strategies is proposed. By incorporating a gradient ascent algorithm, the method alleviates the tendency of deep neural networks to fall into local optima and increases their robustness to a certain extent. Experimental results on the CIFAR-10 dataset verify that the proposed method improves the accuracy of various gradient descent optimizers, such as SGD, Adagrad, RMSProp, and Adam, by about 1%. Experimental results on the COCO dataset show that the proposed method also improves accuracy over the baseline models PAA and EfficientDet. The proposed method can be applied to a wide range of neural network architectures and has good practical significance and application prospects.
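The abstract does not specify how the two streams are fused or how the ascent updates are applied, so the following is only a minimal illustrative sketch of the general idea: two parallel streams process the same input, one stream's parameters are updated by ordinary gradient descent while the other's gradients are negated so its optimizer step performs gradient ascent. The class name `TwoStreamNet`, the averaging fusion, and the learning rates are all assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's method): two streams, one trained
# by gradient descent and one by gradient ascent, with outputs averaged.
import torch
import torch.nn as nn

class TwoStreamNet(nn.Module):
    def __init__(self, in_dim=32 * 32 * 3, num_classes=10):
        super().__init__()
        self.descent_stream = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))
        self.ascent_stream = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, x):
        # Fuse the two streams by simple averaging (an assumed fusion rule).
        return 0.5 * (self.descent_stream(x) + self.ascent_stream(x))

model = TwoStreamNet()
criterion = nn.CrossEntropyLoss()
# Separate optimizers so each stream can follow its own update strategy.
opt_descent = torch.optim.SGD(model.descent_stream.parameters(), lr=0.01)
opt_ascent = torch.optim.SGD(model.ascent_stream.parameters(), lr=0.001)

def train_step(x, y):
    opt_descent.zero_grad()
    opt_ascent.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # Gradient ascent on the second stream: negate its gradients before stepping.
    for p in model.ascent_stream.parameters():
        if p.grad is not None:
            p.grad.neg_()
    opt_descent.step()
    opt_ascent.step()
    return loss.item()
```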
