Abstract

Massive music catalogs and diverse listening behaviors pose great difficulties for existing methods in user-personalized recommendation scenarios. Most previous music recommendation models extract features from temporal relationships among sequential listening records and ignore additional information, such as a track's singer and album. In particular, a piece of music is commonly created by a specific musician and belongs to a particular album. Singer and album information, regarded as music metadata, can serve as important auxiliary signals linking different music pieces and may considerably influence a user's choice of music. In this paper, we focus on the music sequential recommendation task with consideration of this additional information and propose a novel Graph-based Attentive Sequential model with Metadata (GASM), which incorporates metadata to enrich music representations and effectively mine the user's listening behavior patterns. Specifically, we first use a directed listening graph to model the relations among various kinds of nodes (user, music, singer, album) and then adopt graph neural networks to learn their latent representation vectors. After that, we decompose the user's preference for music into long-term, short-term, and dynamic components with personalized attention networks. Finally, GASM integrates the three types of preference to predict the next (new) music piece in accordance with the user's taste. Extensive experiments have been conducted on three real-world datasets, and the results show that the proposed GASM outperforms baseline methods.
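The abstract's pipeline (metadata-aware graph, GNN embeddings, three preference components, scoring) can be illustrated with a minimal toy sketch. This is not the authors' implementation: the mean-aggregation GNN layer, the attention form, the last-k window, and all variable names (`gnn_layer`, `music_singer`, `history`, etc.) are simplifying assumptions for illustration only, and the embeddings here are random rather than learned end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (assumed)

# Toy listening graph: 4 music pieces, 2 singers, 2 albums.
music = rng.normal(size=(4, d))
singer = rng.normal(size=(2, d))
album = rng.normal(size=(2, d))
music_singer = [0, 0, 1, 1]  # each track's singer index (hypothetical data)
music_album = [0, 1, 0, 1]   # each track's album index (hypothetical data)

def gnn_layer(music, singer, album):
    """One mean-aggregation message-passing step: each track pools its own
    embedding with its singer's and album's (a simplification of the
    paper's graph neural network over the directed listening graph)."""
    return np.stack([
        (music[i] + singer[music_singer[i]] + album[music_album[i]]) / 3
        for i in range(len(music))
    ])

h = gnn_layer(music, singer, album)

history = [0, 2, 1, 3]  # one user's listening sequence (track indices)
seq = h[history]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Long-term preference: attention over the whole history, queried by the
# mean history embedding (an assumed, simplified attention form).
query = seq.mean(axis=0)
attn = softmax(seq @ query)
long_term = attn @ seq

short_term = seq[-2:].mean(axis=0)  # last-k average (k=2 assumed)
dynamic = seq[-1]                   # most recently played track

# Integrate the three components (uniform weights assumed) and score
# every candidate track by dot product with the combined preference.
preference = (long_term + short_term + dynamic) / 3
scores = h @ preference
next_track = int(np.argmax(scores))
```

In the actual model, all embeddings and attention weights would be trained jointly on the listening logs; this sketch only shows how metadata nodes can feed into a per-user preference vector that ranks candidate tracks.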
