Abstract

To address the tendency of ant colony algorithms to fall into local optima and to converge slowly on the Traveling Salesman Problem (TSP), a multi-colony ant colony algorithm combining generative adversarial nets (GAN) with an adaptive stagnation avoidance strategy (GAACO) is proposed. First, to improve convergence speed, a GAN model is introduced based on the game between convergence speed and solution quality. Then, to overcome premature convergence, an adaptive stagnation avoidance strategy is proposed. The strategy consists of two parts: (1) information entropy, which measures the diversity of GAACO; and (2) a cooperative game model, which, when the information entropy falls below a threshold, selects an appropriate pheromone matrix for each colony to improve accuracy. Finally, to further accelerate convergence, the initial pheromone matrix is preprocessed to increase the pheromone on the optimal path of each early-stage iteration, and, following a reinforcement learning method, each colony increases the pheromone on the global optimal path at the end of each iteration. Extensive experiments on numerous instances from the TSPLIB standard library show that the proposed method significantly outperforms state-of-the-art multi-colony ant colony optimization algorithms, especially on large-scale TSPs.
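The information entropy used to measure colony diversity is not spelled out here; a minimal sketch, assuming it is the Shannon entropy of edge-usage frequencies across the colony's tours (the function name and this particular entropy definition are illustrative assumptions, not the paper's exact formula):

```python
import math
from collections import Counter

def path_entropy(tours):
    """Shannon entropy of edge usage across a colony's tours.

    `tours` is a list of city sequences; edges are treated as undirected
    and each tour is closed into a cycle. Low entropy means the ants have
    concentrated on few edges, i.e. diversity is dropping and the colony
    is approaching stagnation.
    """
    counts = Counter()
    for tour in tours:
        for a, b in zip(tour, tour[1:] + tour[:1]):  # close the cycle
            counts[frozenset((a, b))] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Identical tours concentrate pheromone on the same edges,
# so their entropy is lower than that of two distinct tours.
identical = [[0, 1, 2, 3], [0, 1, 2, 3]]
diverse = [[0, 1, 2, 3], [0, 2, 1, 3]]
assert path_entropy(identical) < path_entropy(diverse)
```

In this reading, the adaptive strategy would compare `path_entropy` of the current iteration against the threshold and trigger the cooperative game model when diversity falls too low.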

Highlights

  • In 1996, the Italian scholars M. Dorigo et al. were inspired by the foraging behavior of ant colonies in nature

  • In 1997, M. Dorigo et al. put forward the Ant Colony System (ACS), an improvement of the Ant System (AS) based on Q-learning, which adopts two methods: local pheromone update and global pheromone update [2]

  • When the information entropy of the algorithm falls below a threshold L, the adaptive stagnation avoidance strategy re-matches the pheromone matrices of the multiple colonies, which helps find the optimal solution and, to some extent, prevents the algorithm from falling into local optima


Summary

INTRODUCTION

In 1996, the Italian scholars M. Dorigo et al. were inspired by the foraging behavior of ant colonies in nature. Chen et al. presented a modified multi-colony ant algorithm based on a pheromone arithmetic crossover and a repulsive operator; iteration of this algorithm can avoid some stagnating states of basic ant colony optimization [16]. Based on the game relationship between convergence speed and solution quality, the proposed method applies the GAN model to the ant colony algorithm, yielding a multi-colony ant colony optimization algorithm based on the generative adversarial nets model and an adaptive stagnation avoidance strategy (GAACO). Reinforcement learning method: at the end of each iteration, the pheromone of the global optimal solution is enhanced for each colony; this makes the ants gather on a path faster and improves the convergence speed of the algorithm.
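The end-of-iteration reinforcement step can be sketched as follows; this is a minimal illustration assuming the standard ACS-style global update rule with a deposit of 1/L_best, since the paper's exact update formula is not given here (the function name, `rho`, and the deposit rule are assumptions):

```python
def reinforce_global_best(pheromone, best_tour, best_length, rho=0.1):
    """Deposit extra pheromone on the edges of the global best tour.

    `pheromone` is a symmetric n x n matrix (list of lists). The update
    mixes the old value with a deposit of 1/best_length, as in the usual
    ACS global pheromone update; applied by each colony at the end of an
    iteration, it pulls the ants toward the global best path faster.
    """
    deposit = 1.0 / best_length
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):  # close cycle
        pheromone[a][b] = (1 - rho) * pheromone[a][b] + rho * deposit
        pheromone[b][a] = pheromone[a][b]  # keep the matrix symmetric
    return pheromone

# Example: a 3-city instance with uniform initial pheromone.
tau = [[1.0] * 3 for _ in range(3)]
reinforce_global_best(tau, best_tour=[0, 1, 2], best_length=10.0)
```

Only the edges on the best tour are touched, so the update is O(n) per colony rather than O(n^2).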

RELATED WORK
ANT COLONY SYSTEM
MULTI-COLONIES COMMUNICATION STRATEGY
EXPERIMENT AND SIMULATION
Findings
CONCLUSION
