Abstract

Online POI recommendation aims to recommend places a user is likely to visit next as time unfolds, and it is crucial to the user experience of location-based social networking applications (e.g., Google Maps, Yelp). While numerous studies focus on capturing user visit preferences, they largely ignore geo-human interactions: users make visit decisions based on the status of geospatial contexts, and, in turn, their visits change that status. Disregarding such geo-human interactions in streams therefore degrades recommendation performance. To fill this gap, in this paper, we propose a novel deep interactive reinforcement learning framework to model geo-human interactions. Specifically, the framework has two main parts: a representation module and an imitation module. The representation module captures geo-human interactions and converts them into embedding vectors (the state). The imitation module is a reinforced agent that imitates user visit behavior by recommending the next-visit POI (the action) based on the state. Imitation performance serves as a reward signal to optimize the whole interactive framework. Once the model converges, the imitation module can precisely perceive users and geospatial contexts to provide accurate POI recommendations. Finally, we conduct extensive experiments to validate the superiority of our framework.
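The state-action-reward loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all class names, dimensions, and the linear fusion/scoring functions are assumptions standing in for the learned representation and imitation modules.

```python
import numpy as np

class RepresentationModule:
    """Hypothetical sketch: fuses user preferences and geospatial-context
    status (the geo-human interaction) into a single state embedding."""
    def __init__(self, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W_user = rng.normal(size=(dim, dim))  # user projection (assumed)
        self.W_geo = rng.normal(size=(dim, dim))   # context projection (assumed)

    def encode(self, user_vec, geo_vec):
        # State = nonlinear fusion of user and geospatial-context features.
        return np.tanh(self.W_user @ user_vec + self.W_geo @ geo_vec)

class ImitationAgent:
    """Hypothetical sketch of the reinforced agent: scores candidate POIs
    against the state and recommends the top-scoring one (the action)."""
    def __init__(self, n_pois, dim=8, seed=1):
        rng = np.random.default_rng(seed)
        self.poi_emb = rng.normal(size=(n_pois, dim))  # POI embeddings (assumed)

    def act(self, state):
        return int(np.argmax(self.poi_emb @ state))

def imitation_reward(action, true_next_poi):
    # Reward signal: 1 if the recommendation imitates the user's actual visit.
    return 1.0 if action == true_next_poi else 0.0
```

One step of the interactive loop would then be: encode the current user and geospatial context into a state, let the agent recommend a POI, and compare the recommendation against the user's actual next visit to obtain the reward used to optimize both modules.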
