Abstract

Artificial neural networks have achieved great success in artificial intelligence. Backpropagation is one of the most widely used parameter search techniques for artificial neural networks, and it is used in many deep learning approaches, such as convolutional neural networks (CNNs), generative adversarial networks (GANs) and recurrent neural networks (RNNs), to train their fully-connected layers. However, training fully-connected layers with backpropagation depends on the gradient descent strategy, which tends to fall into local optima. Meanwhile, many evolutionary computation (EC) techniques have been applied to the parameter optimization of neural networks. They take advantage of populations to achieve global search, but their convergence is relatively slow because they rely on their original evolutionary search strategies. In this paper, we propose a self-adaptive gradient descent search algorithm (SaGDSA) to search the parameters of fully-connected neural networks. Unlike existing adaptive gradient descent algorithms, the proposed algorithm does not require users to have prior knowledge or to manually design learning rates to match the different search stages. In addition, four gradient descent strategies are used to optimize the parameters of fully-connected neural networks. All datasets used are collected from the University of California Irvine (UCI) machine learning repository. The experimental results indicate that SaGDSA outperforms the comparison algorithms on most of the datasets. By introducing the self-adaptive mechanism, SaGDSA has not only the global search capability of evolutionary computation but also the local search capability of gradient descent.
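
The abstract does not spell out the algorithm's details, but the core idea it describes, a population of candidate fully-connected networks whose members self-adaptively choose among several gradient descent strategies, can be sketched as follows. Everything concrete here is an illustrative assumption rather than the paper's actual design: the network size, the synthetic data standing in for a UCI dataset, the four particular strategies (plain gradient descent, momentum, RMSProp, Adam), and the credit-based rule for adapting the strategy probabilities.

```python
# Minimal sketch (assumptions noted above): a population of fully-connected networks,
# each generation picking one of four gradient-descent strategies with probabilities
# adapted from the loss reduction each strategy has recently produced.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (a stand-in for a UCI dataset).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

def init_params():
    """Small fully-connected network: one hidden layer of 16 tanh units, sigmoid output."""
    return {"W1": rng.normal(scale=0.1, size=(8, 16)), "b1": np.zeros(16),
            "W2": rng.normal(scale=0.1, size=(16, 1)), "b2": np.zeros(1)}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    out = 1.0 / (1.0 + np.exp(-(h @ p["W2"] + p["b2"])))
    return h, out

def loss_and_grads(p, X, y):
    """Cross-entropy loss and backpropagated gradients for every parameter."""
    h, out = forward(p, X)
    eps = 1e-9
    loss = -np.mean(y * np.log(out + eps) + (1 - y) * np.log(1 - out + eps))
    d_z = (out - y) / len(X)                    # gradient w.r.t. pre-sigmoid output
    d_h = (d_z @ p["W2"].T) * (1 - h ** 2)      # gradient w.r.t. pre-tanh hidden input
    grads = {"W2": h.T @ d_z, "b2": d_z.sum(0),
             "W1": X.T @ d_h, "b1": d_h.sum(0)}
    return loss, grads

def make_updaters(lr=0.05):
    """Four gradient-descent strategies (plain GD, momentum, RMSProp, Adam) -- an assumption."""
    def gd(p, g, s):
        return {k: p[k] - lr * g[k] for k in p}
    def momentum(p, g, s):
        v = s.setdefault("v", {k: np.zeros_like(p[k]) for k in p})
        for k in p:
            v[k] = 0.9 * v[k] + g[k]
        return {k: p[k] - lr * v[k] for k in p}
    def rmsprop(p, g, s):
        r = s.setdefault("r", {k: np.zeros_like(p[k]) for k in p})
        for k in p:
            r[k] = 0.99 * r[k] + 0.01 * g[k] ** 2
        return {k: p[k] - lr * g[k] / (np.sqrt(r[k]) + 1e-8) for k in p}
    def adam(p, g, s):
        m = s.setdefault("m", {k: np.zeros_like(p[k]) for k in p})
        v = s.setdefault("v", {k: np.zeros_like(p[k]) for k in p})
        s["t"] = s.get("t", 0) + 1
        new_p = {}
        for k in p:
            m[k] = 0.9 * m[k] + 0.1 * g[k]
            v[k] = 0.999 * v[k] + 0.001 * g[k] ** 2
            mh = m[k] / (1 - 0.9 ** s["t"])
            vh = v[k] / (1 - 0.999 ** s["t"])
            new_p[k] = p[k] - lr * mh / (np.sqrt(vh) + 1e-8)
        return new_p
    return {"gd": gd, "momentum": momentum, "rmsprop": rmsprop, "adam": adam}

# Population of candidate networks. Each generation, every individual samples a strategy
# with probability proportional to the loss reduction that strategy has recently produced.
updaters = make_updaters()
names = list(updaters)
pop = [init_params() for _ in range(6)]
states = [{n: {} for n in names} for _ in pop]   # per-individual optimizer state
credit = {n: 1.0 for n in names}                 # self-adaptive credit per strategy

for gen in range(100):
    probs = np.array([credit[n] for n in names])
    probs /= probs.sum()
    for i, p in enumerate(pop):
        name = rng.choice(names, p=probs)
        before, grads = loss_and_grads(p, X, y)
        pop[i] = updaters[name](p, grads, states[i][name])
        after, _ = loss_and_grads(pop[i], X, y)
        credit[name] = 0.9 * credit[name] + max(before - after, 0.0)

best = min(pop, key=lambda q: loss_and_grads(q, X, y)[0])
print("best training loss:", loss_and_grads(best, X, y)[0])
```

Because the strategy probabilities are driven by observed loss reductions, no stage-specific learning-rate schedule has to be designed by hand, which mirrors the self-adaptive property the abstract claims for SaGDSA.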
