Abstract

This article studies the problem of learning effective representations for Location-Based Social Networks (LBSNs), which is useful for many tasks such as location recommendation and link prediction. Existing network embedding methods mainly focus on capturing the topology patterns reflected in social connections, whereas check-in sequences, the most important data type in LBSNs, are not directly modeled. In this article, we propose a representation learning method for LBSNs called JRLM++, which models check-in sequences together with social connections. To capture sequential relatedness, JRLM++ characterizes two levels of sequential contexts, namely fine-grained and coarse-grained contexts. We present a learning algorithm tailored to the hierarchical architecture of the proposed model. We conduct extensive experiments on two important applications using real-world datasets, and the results demonstrate the superiority of our model. The proposed model generates effective representations for both users and locations in the same embedding space, which can be further utilized to improve multiple LBSN tasks.
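To make the idea of a shared user-location embedding space more concrete, the following is a minimal illustrative sketch, not the authors' JRLM++ implementation. It assumes hypothetical pretrained embeddings and approximates the fine-grained sequential context by averaging the embeddings of a user's recent check-ins before scoring candidate locations with a dot product in the shared space.

```python
# Illustrative sketch only -- not the authors' JRLM++ model.
# Assumes users and locations already have embeddings in one shared space.
import numpy as np

rng = np.random.default_rng(0)
dim, n_users, n_locations = 64, 100, 500

# Hypothetical pretrained embeddings (random here for demonstration).
user_emb = rng.normal(size=(n_users, dim))
loc_emb = rng.normal(size=(n_locations, dim))

def recommend(user_id, recent_checkins, top_k=5, alpha=0.5):
    """Rank locations by blending user preference with recent check-in context."""
    context = loc_emb[recent_checkins].mean(axis=0)   # stand-in for fine-grained context
    query = alpha * user_emb[user_id] + (1 - alpha) * context
    scores = loc_emb @ query                          # dot product in the shared space
    return np.argsort(-scores)[:top_k]

print(recommend(user_id=7, recent_checkins=[3, 42, 118]))
```

Because users and locations live in the same space, the same embeddings could in principle serve other LBSN tasks, such as link prediction between users, by scoring user-user pairs instead of user-location pairs.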
