Abstract

Artificial neural networks (ANNs) are again playing a leading role in machine learning, particularly in classification and regression tasks, thanks to the emergence of deep learning (ANNs with more than four hidden layers), which allows them to encode increasingly complex features. The growing number of hidden layers, however, has made training these networks substantially harder. Variants of classical backpropagation with stochastic gradient descent (e.g., RMSProp) remain the state of the art for training deep ANNs, yet prior research suggests that the potential advantages of metaheuristics in this setting deserve closer study. We summarize the design and use of a framework for optimizing the learning of deep neural networks in TensorFlow using metaheuristics. Implemented in Python, the framework trains networks on CPU or GPU depending on the TensorFlow configuration, and it allows easy integration of diverse classification and regression problems, different network architectures (conventional, convolutional, and recurrent), and new metaheuristics. The framework initially includes Particle Swarm Optimization, Global-best Harmony Search, and Differential Evolution. It further enables the conversion of these metaheuristics into memetic algorithms by adding exploitation steps based on the optimizers available in TensorFlow: RMSProp, Adam, Adadelta, Momentum, and Adagrad; a sketch of this hybrid scheme is given below.
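The abstract describes the framework only at a high level and includes no code, so the following is a minimal illustrative sketch of the general idea rather than the framework's actual API: a global-best Particle Swarm Optimization over the flattened weight vector of a small Keras model, with an optional memetic refinement step that runs a few Adam iterations from each particle's position. The model, toy data, and PSO coefficients are all assumptions chosen for brevity.

```python
import numpy as np
import tensorflow as tf

# Toy regression data standing in for a real problem (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4)).astype("float32")
y = X.sum(axis=1, keepdims=True).astype("float32")

# A small conventional (fully connected) network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()

def get_flat():
    """Read all trainable weights as one flat numpy vector."""
    return np.concatenate([v.numpy().ravel() for v in model.trainable_variables])

def set_flat(flat):
    """Write a flat vector back into the model's weight tensors."""
    i = 0
    for v in model.trainable_variables:
        shape = tuple(int(d) for d in v.shape)
        n = int(np.prod(shape))
        v.assign(flat[i:i + n].astype("float32").reshape(shape))
        i += n

def fitness(flat):
    """Fitness of a particle = training loss of the encoded weights."""
    set_flat(flat)
    return float(loss_fn(y, model(X, training=False)))

def local_refine(flat, steps=5):
    """Memetic exploitation: a few Adam steps from a particle's position."""
    set_flat(flat)
    opt = tf.keras.optimizers.Adam(learning_rate=1e-2)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(X, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        opt.apply_gradients(zip(grads, model.trainable_variables))
    return get_flat()

# Global-best PSO over the flattened weight vector.
dim = get_flat().size
n_particles, iters = 20, 30
pos = rng.normal(scale=0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.72, 1.49, 1.49  # commonly used PSO coefficients
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    for i in range(n_particles):
        pos[i] = local_refine(pos[i])  # drop this line for the pure metaheuristic
        f = fitness(pos[i])
        if f < pbest_f[i]:
            pbest_f[i], pbest[i] = f, pos[i].copy()
    gbest = pbest[pbest_f.argmin()].copy()

print("best training loss:", pbest_f.min())
```

Removing the `local_refine` call leaves the pure metaheuristic; swapping Adam for RMSProp, Adadelta, SGD with momentum, or Adagrad mirrors the exploitation options listed in the abstract.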
