Abstract

Currently, many problems considered intractable have been solved satisfactorily by approximate optimization methods called metaheuristics. These methods use non-deterministic approaches that find good solutions but do not guarantee finding the global optimum. The success of a metaheuristic depends on its ability to alternate adequately between exploration and exploitation of the solution space. One way to guide such algorithms in the search for better solutions is to supply them with more knowledge of the solution space (the problem environment). This can be done by mapping that environment into states and actions using Reinforcement Learning. This paper proposes the use of a Reinforcement Learning technique, the Q-Learning algorithm, in the constructive phase of the GRASP and Reactive GRASP metaheuristics. The proposed methods are applied to the symmetric traveling salesman problem.
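The abstract does not include implementation details, so the following is only a minimal sketch of the general idea it describes: a Q-table over (current city, next city) pairs is learned with Q-Learning (reward = negative edge length), and the GRASP constructive phase then samples from a restricted candidate list ranked by those Q-values instead of a purely greedy criterion. All function names, parameters, and the toy distance matrix are illustrative assumptions, not the authors' code.

```python
import random

def q_learning_train(dist, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2):
    """Sketch: learn Q-values over (city, next_city) pairs from random tours.
    State = current city, action = next unvisited city, reward = -distance."""
    n = len(dist)
    Q = [[0.0] * n for _ in range(n)]
    for _ in range(episodes):
        start = random.randrange(n)
        current, unvisited = start, set(range(n)) - {start}
        while unvisited:
            # epsilon-greedy action selection among unvisited cities
            if random.random() < epsilon:
                nxt = random.choice(list(unvisited))
            else:
                nxt = max(unvisited, key=lambda j: Q[current][j])
            reward = -dist[current][nxt]
            future = max((Q[nxt][j] for j in unvisited - {nxt}), default=0.0)
            Q[current][nxt] += alpha * (reward + gamma * future - Q[current][nxt])
            unvisited.discard(nxt)
            current = nxt
    return Q

def grasp_construct(dist, Q, rcl_size=3):
    """Sketch of a GRASP constructive phase: build a tour by sampling from a
    restricted candidate list (RCL) ranked by the learned Q-values."""
    n = len(dist)
    current = random.randrange(n)
    tour, unvisited = [current], set(range(n)) - {current}
    while unvisited:
        ranked = sorted(unvisited, key=lambda j: Q[current][j], reverse=True)
        nxt = random.choice(ranked[:rcl_size])  # randomized greedy choice
        tour.append(nxt)
        unvisited.discard(nxt)
        current = nxt
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

if __name__ == "__main__":
    # Tiny symmetric TSP instance, for illustration only.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 8],
            [10, 4, 8, 0]]
    Q = q_learning_train(dist)
    best = min((grasp_construct(dist, Q) for _ in range(20)),
               key=lambda t: tour_length(dist, t))
    print(best, tour_length(dist, best))
```

In a full GRASP, each constructed tour would also be improved by a local search phase (e.g., 2-opt), and Reactive GRASP would additionally adapt the RCL parameter over iterations; both are omitted here for brevity.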

