Abstract

With the development of positioning technology, massive amounts of GPS trajectory data are collected to provide location-based services. However, most GPS trajectories are recorded at a fixed sampling rate, which can lead to tremendous data redundancy, causing heavy communication overhead, storage and computing burdens, and high battery consumption on mobile devices. In this paper, we propose an Adaptive Sampling method based on Reinforcement Learning, called ASRL. ASRL adjusts the sampling rate according to the object's moving status, aiming to reduce the size of a GPS trajectory without sacrificing tracking accuracy. ASRL follows an actor-critic reinforcement learning framework and learns a GPS sampling policy network. A proper reward function for ASRL is derived via Inverse Reinforcement Learning (IRL), which learns from the map-matching results of historical trajectories and estimates the importance of each moving-status feature from demonstrations. The proposed ASRL method is evaluated on three real GPS trajectory datasets. The results show that ASRL can reduce more than 95% of GPS points while maintaining reasonable trajectory accuracy.
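To make the idea concrete, the following is a minimal sketch of the kind of adaptive sampling policy the abstract describes: a small actor (here a plain linear-softmax policy) maps moving-status features to one of several candidate sampling intervals. The feature set, candidate intervals, and weights are illustrative assumptions for exposition, not the authors' actual ASRL implementation, which additionally trains a critic and an IRL-derived reward.

```python
import math
import random

# Candidate times between consecutive GPS fixes (seconds); illustrative values.
CANDIDATE_INTERVALS = [1.0, 5.0, 15.0, 60.0]

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class SamplingActor:
    """Hypothetical linear policy: moving-status features -> sampling interval."""

    def __init__(self, n_features, n_actions, seed=0):
        rng = random.Random(seed)
        # One small random weight vector per candidate interval.
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(n_features)]
                  for _ in range(n_actions)]

    def action_probs(self, features):
        logits = [sum(wi * fi for wi, fi in zip(row, features))
                  for row in self.w]
        return softmax(logits)

    def choose_interval(self, features):
        # Greedy choice: pick the interval the policy rates most probable.
        probs = self.action_probs(features)
        best = max(range(len(probs)), key=probs.__getitem__)
        return CANDIDATE_INTERVALS[best]

# Example moving-status features (assumed): [speed m/s, |heading change| deg, accel m/s^2]
actor = SamplingActor(n_features=3, n_actions=len(CANDIDATE_INTERVALS))
interval = actor.choose_interval([12.0, 30.0, 0.5])
```

In the full ASRL method, the actor's weights would be updated by an actor-critic algorithm, with the reward recovered by IRL from map-matched historical trajectories rather than hand-designed as here.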
