Abstract

This paper explores the computational promise of enhancing Simultaneous Recurrent Neural networks (SRNs) with a stochastic search mechanism for use as static optimizers. Successful application of SRNs to static optimization problems, with training carried out through one of several deterministic gradient-descent algorithms including Recurrent Backpropagation, standard Backpropagation, and Resilient Propagation, has recently been reported in the literature. It is therefore highly desirable to assess whether augmenting the neural optimization algorithm with a stochastic search mechanism would offer substantial utility and value, which is the focus of the study reported in this paper. Two techniques are employed to assess the added value of such an enhancement: the first compares SRN performance against a stochastic search algorithm, the Genetic Algorithm, and the second estimates the quality of optimal solutions through Held-Karp lower bounds. The Traveling Salesman Problem is employed as the benchmark for the simulation study reported herein. Simulation results suggest that a significant improvement in solution quality is likely for the Traveling Salesman Problem, and potentially for other static optimization problems, if the Simultaneous Recurrent Neural network is augmented with a stochastic search mechanism.
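The abstract cites Held-Karp bounds as the yardstick for solution quality. As an illustration only, and not the authors' implementation, the following Python sketch estimates such a lower bound for a symmetric TSP instance using the classic 1-tree Lagrangian relaxation with subgradient ascent; all function names and parameter choices (iteration count, step schedule) are assumptions made for this example.

```python
# Illustrative sketch: Held-Karp lower bound for symmetric TSP via
# 1-tree Lagrangian relaxation with subgradient ascent.
import random

def one_tree(dist, pi):
    """Minimum 1-tree weight and node degrees under penalized costs
    c[i][j] = dist[i][j] + pi[i] + pi[j]."""
    n = len(dist)
    cost = lambda i, j: dist[i][j] + pi[i] + pi[j]

    # Prim's MST over nodes 1..n-1 (node 0 is the special 1-tree node).
    in_tree = [False] * n
    in_tree[0] = True
    best = [float("inf")] * n
    parent = [-1] * n
    best[1] = 0.0
    degree = [0] * n
    total = 0.0
    for _ in range(n - 1):
        u = min((v for v in range(1, n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] >= 0:
            total += cost(u, parent[u])
            degree[u] += 1
            degree[parent[u]] += 1
        for v in range(1, n):
            if not in_tree[v] and cost(u, v) < best[v]:
                best[v] = cost(u, v)
                parent[v] = u

    # Attach node 0 via its two cheapest edges to complete the 1-tree.
    for v in sorted(range(1, n), key=lambda v: cost(0, v))[:2]:
        total += cost(0, v)
        degree[0] += 1
        degree[v] += 1
    return total, degree

def held_karp_bound(dist, iters=200, step=1.0):
    """Subgradient ascent on node penalties; returns the best bound found."""
    n = len(dist)
    pi = [0.0] * n
    best_bound = float("-inf")
    for k in range(iters):
        w, deg = one_tree(dist, pi)
        bound = w - 2.0 * sum(pi)          # L(pi) = 1-tree weight - 2*sum(pi)
        best_bound = max(best_bound, bound)
        grad = [deg[i] - 2 for i in range(n)]
        if all(g == 0 for g in grad):      # the 1-tree is a tour: bound is tight
            break
        t = step / (k + 1)                 # diminishing step size
        pi = [pi[i] + t * grad[i] for i in range(n)]
    return best_bound

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(20)]
    d = [[((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5 for b in pts] for a in pts]
    print("Held-Karp lower bound estimate:", held_karp_bound(d))
```

Dividing a heuristic tour length (for example, one produced by an SRN or a Genetic Algorithm) by such a bound gives the kind of quality ratio the abstract refers to.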
