Abstract
Next point-of-interest (POI) recommendation suggests locations that a user may be interested in, helping them explore their surroundings. Existing sequence-based or graph-based POI recommendation methods have matured in capturing spatiotemporal information, whereas POI recommendation methods based on large language models (LLMs) focus mainly on capturing sequential transition relationships. This raises an unexplored challenge: how to leverage LLMs to better capture geographic contextual information. To address this, we propose QA-POI, a method that produces interpretable embeddings for next POI recommendation via large language model question answering: the recommendation task is reformulated as obtaining interpretable embeddings through LLM prompting, followed by lightweight fine-tuning. We introduce question–answer embeddings, which are generated by asking the LLM yes/no questions about a user's trajectory sequence; by posing spatiotemporal questions about the trajectory, we aim to extract as much spatiotemporal information from the LLM as possible. During training, QA-POI iteratively selects the most valuable subset of candidate questions with which to prompt the LLM, and a lightweight multi-layer perceptron (MLP) is then fine-tuned on the resulting embeddings for the next POI recommendation task. Extensive experiments on two datasets demonstrate the effectiveness of our approach.