Abstract

In this article we study how knowledge can be transferred between mobility models that represent different locations and means of transport. Specifically, we propose the use of knowledge distillation and fine-tuning techniques to build accurate next location prediction models on a lightweight architecture that significantly reduces inference time. Our goal is not to add one more model to the mobility literature. Instead, we believe it is of paramount importance to show how well-trained mobility predictors can be managed, specialized, and enhanced. In addition, we take into consideration the continuously generated mobility data and the limited resources of the devices that run the models, and we focus on how to reduce their computational requirements. We evaluate three variations of knowledge distillation, namely the distilled agent, the double-distilled agent, and the pre-distilled agent, with the latter achieving an overall improvement of 6.57% in distance error compared with a state-of-the-art next location predictor that does not use knowledge distillation, and a 99.8% reduction in inference time on edge devices through the use of lightweight machine learning frameworks such as TensorFlow Lite.
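To make the teacher-student setup concrete, the sketch below shows the general form of knowledge distillation the abstract refers to, followed by a TensorFlow Lite conversion for edge inference. It is a minimal illustration, not the paper's actual method: the student architecture, the vocabulary size, the sequence length, and the hyperparameters (NUM_LOCATIONS, SEQ_LEN, T, alpha) are all assumed for the example.

```python
# Minimal knowledge-distillation sketch in TensorFlow/Keras.
# All names, shapes, and hyperparameters here are illustrative assumptions,
# not the architecture or settings used in the paper.
import tensorflow as tf

NUM_LOCATIONS = 1000   # assumed size of the candidate-location vocabulary
SEQ_LEN = 10           # assumed length of the visited-location history

# Lightweight "student" next-location predictor (hypothetical architecture).
student = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(NUM_LOCATIONS, 32),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(NUM_LOCATIONS),  # logits over candidate locations
])

def distillation_loss(teacher_logits, student_logits, labels, T=4.0, alpha=0.5):
    """Blend the soft-target loss (teacher -> student) with hard-label CE."""
    soft_teacher = tf.nn.softmax(teacher_logits / T)
    log_soft_student = tf.nn.log_softmax(student_logits / T)
    # Cross-entropy against the softened teacher distribution, scaled by T^2
    # so its gradient magnitude stays comparable to the hard-label term.
    kd = -tf.reduce_sum(soft_teacher * log_soft_student, axis=-1) * (T ** 2)
    ce = tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True)
    return alpha * tf.reduce_mean(kd) + (1.0 - alpha) * tf.reduce_mean(ce)

# After training, the student can be converted to TensorFlow Lite
# for low-latency inference on resource-constrained edge devices.
converter = tf.lite.TFLiteConverter.from_keras_model(student)
open("student.tflite", "wb").write(converter.convert())
```

The temperature T softens both distributions so the student can learn from the teacher's relative confidence across locations rather than only its top prediction, which is the core mechanism behind the distilled-agent variants described above.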
