Abstract

Most commercial navigation products provide route planning services for users. However, they consider only a single metric, such as distance, time, or other costs, while ignoring a critical criterion: safety. In a smart city, people may prefer a walking route that is both short and safe, avoiding potential crime risk. This problem can be formulated as a multi-objective optimization problem (MOOP). Many methods have been proposed to solve multi-objective route planning, among which the multi-objective evolutionary approach (MOEA) is the most popular. However, MOEA performs poorly on large-scale road networks and becomes computationally expensive when handling a large population size. In this paper, we propose a multi-objective hyper-heuristic (MOHH) framework for walking route planning in a smart city. Within the search framework, we design a set of low-level heuristics to generate new routes. Moreover, we adopt a reinforcement learning mechanism to select good low-level heuristics and thereby accelerate the search. We further improve the reinforcement learning-based multi-objective hyper-heuristic (RL-MOHH) algorithm and implement a parallel version (RL-PMOHH) on a general-purpose graphics processing unit. Extensive experiments are conducted on a safety-index map constructed from historical urban data of New York City. Comprehensive experimental results show that the proposed RL-PMOHH is almost 173, 5.3, and 3.1 times faster than the exact multi-objective optimization algorithm, the RL-MOHH algorithm, and the parallel NSGA-II algorithm, respectively. Moreover, both RL-MOHH and RL-PMOHH obtain more than 80% of the Pareto-optimal solutions on a large-scale road network.
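The core loop described above, a reinforcement learning mechanism choosing among low-level route heuristics while a Pareto archive tracks non-dominated (distance, risk) routes, can be sketched as follows. This is a minimal illustration only: the toy graph, the two heuristics, the ε-greedy selection, and the binary reward are all assumptions for the sketch, not the paper's actual operators or learning rule.

```python
import random

random.seed(0)

# Toy road network: adjacency list with (distance, crime_risk) edge costs.
# All node names and cost values are illustrative, not from the paper.
GRAPH = {
    "A": {"B": (1.0, 0.9), "C": (2.0, 0.1)},
    "B": {"A": (1.0, 0.9), "D": (1.0, 0.8)},
    "C": {"A": (2.0, 0.1), "D": (2.5, 0.2)},
    "D": {"B": (1.0, 0.8), "C": (2.5, 0.2)},
}

def costs(route):
    """Total (distance, risk) along a route."""
    d = r = 0.0
    for u, v in zip(route, route[1:]):
        dd, rr = GRAPH[u][v]
        d += dd
        r += rr
    return (d, r)

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def random_route(src, dst, max_len=8):
    """Random-walk route generator; returns None if dst is not reached."""
    route = [src]
    while route[-1] != dst and len(route) < max_len:
        route.append(random.choice(list(GRAPH[route[-1]])))
    return route if route[-1] == dst else None

def splice(route, dst):
    """Keep a random prefix of the route, random-walk a new suffix to dst."""
    cut = random.randrange(1, len(route))
    tail = random_route(route[cut - 1], dst)
    return route[:cut - 1] + tail if tail else None

def rl_mohh(src="A", dst="D", iters=200, eps=0.2, alpha=0.1):
    """Hyper-heuristic loop: ε-greedy Q-values pick a low-level heuristic;
    a reward is given when the candidate route enters the Pareto archive."""
    heuristics = [
        lambda r: random_route(src, dst),            # heuristic 0: restart
        lambda r: splice(r, dst) if r else None,     # heuristic 1: partial reroute
    ]
    q = [0.0] * len(heuristics)       # one learned value per heuristic
    archive = []                      # non-dominated (costs, route) pairs
    current = random_route(src, dst)
    for _ in range(iters):
        h = (random.randrange(len(heuristics)) if random.random() < eps
             else max(range(len(heuristics)), key=q.__getitem__))
        cand = heuristics[h](current)
        reward = 0.0
        if cand:
            c = costs(cand)
            if not any(dominates(a[0], c) for a in archive):
                archive = [a for a in archive if not dominates(c, a[0])]
                if all(a[0] != c for a in archive):
                    archive.append((c, cand))
                reward = 1.0          # candidate is non-dominated
                current = cand
        q[h] += alpha * (reward - q[h])  # simple value update
    return archive, q
```

In this toy network the walk A-B-D is short but risky while A-C-D is longer but safe, so a correct run keeps both in the archive; the RL-PMOHH variant in the paper parallelizes this kind of search across a GPU rather than running it sequentially.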
