Abstract

Back-propagation (BP) is a widely used technique for training artificial neural networks (ANNs), but it often becomes trapped in a local optimum. Hybrid training, which combines a global optimization algorithm with BP, was introduced to address this drawback: the global optimizer supplies BP with good initial connection weights. In hybrid training, evolutionary algorithms are widely used as the global optimizer, whereas ant colony optimization (ACO) algorithms are rarely used, and so far only the basic ACO algorithm has been applied to evolving the connection weights of ANNs. In this paper, we hybridize one of the best-performing ACO variants with BP. The improved variant differs from the basic ACO algorithm in that pheromone trail limits are imposed to avoid stagnation behaviour. Experimental results show that the proposed training method outperforms peer training methods.
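
The sketch below illustrates the general hybrid-training idea described above: an ACO-style search with pheromone trail limits (in the spirit of MAX-MIN Ant System) produces initial connection weights, which BP then refines. All function names, parameters, and the toy network are illustrative assumptions, not the paper's actual algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def bp_train(weights, X, y, lr=0.1, epochs=200):
    """Plain gradient descent (back-propagation) on a single sigmoid unit."""
    w = weights.copy()
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid output
        grad = X.T @ (pred - y) / len(y)         # cross-entropy gradient
        w -= lr * grad
    return w

def aco_seed(X, y, n_ants=20, n_iters=30, tau_min=0.1, tau_max=5.0):
    """ACO-style search over discretised weight values with trail limits."""
    candidates = np.linspace(-2.0, 2.0, 21)            # discrete weight levels
    n_w = X.shape[1]
    tau = np.full((n_w, len(candidates)), tau_max)     # pheromone per (weight, level)
    best_w, best_err, best_idx = None, np.inf, None
    for _ in range(n_iters):
        for _ in range(n_ants):
            probs = tau / tau.sum(axis=1, keepdims=True)
            idx = [rng.choice(len(candidates), p=probs[i]) for i in range(n_w)]
            w = candidates[idx]
            err = np.mean((1.0 / (1.0 + np.exp(-X @ w)) - y) ** 2)
            if err < best_err:
                best_w, best_err, best_idx = w, err, idx
        tau *= 0.9                                     # evaporation
        for i, j in enumerate(best_idx):
            tau[i, j] += 1.0 / (1.0 + best_err)        # reinforce best solution
        tau = np.clip(tau, tau_min, tau_max)           # impose trail limits
    return best_w

# Toy usage: the ACO-style search provides initial weights, BP refines them.
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
w0 = aco_seed(X, y)
w_final = bp_train(w0, X, y)
```

The trail limits (`tau_min`, `tau_max`) keep every weight level selectable with non-negligible probability, which is how this family of ACO variants avoids the stagnation behaviour mentioned in the abstract.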
