Abstract

Research on Artificial Neural Networks is among the most active areas of Artificial Intelligence. Training a neural network is an NP-hard optimization problem with several theoretical and computational limitations. In optimization, continuation refers to a homotopy transformation of the fitness function that yields progressively simpler versions of that function and improves convergence. In this paper we propose an approach to Artificial Neural Network training based on optimization by continuation and meta-heuristic algorithms. The goal is to reduce the overall execution time of training without degrading accuracy. We combine continuation with Particle Swarm Optimization, the Firefly Algorithm, and Cuckoo Search to train neural networks on public benchmark datasets. The continuation variants of the studied meta-heuristic algorithms reduce the execution time required to complete training by about 5–30%, with no statistically significant loss of accuracy compared with the standard variants of the meta-heuristics.
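To make the idea of optimization by continuation concrete, the following is a minimal sketch of Particle Swarm Optimization driven through a continuation schedule. It is an illustration only, not the paper's method: the abstract does not specify the homotopy used, so this sketch assumes a Gaussian-smoothing homotopy (large smoothing gives a flatter, easier surrogate of the fitness function; the smoothing is then annealed to zero to recover the original objective), and the Rastrigin benchmark, PSO coefficients, and schedule are all assumed values for demonstration.

```python
import numpy as np

def rastrigin(x):
    # Standard multimodal benchmark function (an assumption, not from the paper).
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def smoothed(f, x, sigma, rng, n_samples=32):
    # Gaussian-smoothing homotopy: Monte Carlo estimate of E[f(x + sigma*z)].
    # sigma -> 0 recovers the original fitness function exactly.
    if sigma == 0.0:
        return f(x)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    return f(x + noise).mean(axis=0)

def pso_continuation(f, dim=2, n_particles=30, stages=(2.0, 1.0, 0.5, 0.0),
                     iters_per_stage=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # common PSO coefficients (assumed values)
    for sigma in stages:  # continuation: progressively sharpen the objective
        # Re-evaluate personal bests under the current surrogate so stale
        # smoothed values do not carry over between continuation stages.
        pbest_val = smoothed(f, pbest, sigma, rng)
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters_per_stage):
            r1 = rng.random(pos.shape)
            r2 = rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = smoothed(f, pos, sigma, rng)
            better = val < pbest_val
            pbest[better] = pos[better]
            pbest_val[better] = val[better]
            gbest = pbest[np.argmin(pbest_val)].copy()
    # Report the best point under the true (unsmoothed) objective.
    return gbest, float(rastrigin(gbest))

best_x, best_f = pso_continuation(rastrigin)
```

Early stages search a smoothed landscape with fewer misleading local minima; later stages refine the solution on the true fitness function, which is the sense in which continuation can speed up convergence of the swarm.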
