Abstract

Current music recommendation systems can exploit the general relationship between users and songs to recommend music, but they cannot distinguish different users' preferences for the same song. For example, one user may like a song because of the singer, while another may like it not for the singer but for its composition or melody. A recommender system aware of this difference would recommend music more effectively. To this end, this paper proposes a music recommendation model based on multilayer attention representation, which learns song representations from multiple dimensions using user-attribute information and song content information and mines the preference relationship between users and songs. To distinguish differences in users' preferences for the multidomain features of songs, a feature-dependent attention network is designed; to distinguish differences in users' preferences for their different historical behaviors and to capture the temporal dependence of those behaviors, a song-dependent attention network is designed. Finally, a softmax function computes the distribution of each user's preferences over candidate songs, which is used to generate recommendations. Experimental results on the 30Music and MIGU datasets show that the proposed model achieves significant improvements in recall and MRR over current recommendation models.
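As a rough illustration of the two attention levels described above (not the authors' implementation; the PyTorch framing, layer choices, and tensor shapes are all assumptions), a feature-dependent attention step can pool a song's multidomain feature embeddings under a user embedding, and a song-dependent attention step can then pool the user's history against each candidate before the final softmax:

```python
# Illustrative sketch of a two-level attention recommender in the spirit of
# the described model. Names, dimensions, and layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelAttentionRecommender(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        # Feature-dependent attention: weighs a song's multidomain features
        # (e.g., singer, composition, melody) differently per user.
        self.feat_att = nn.Linear(2 * d, 1)
        # Song-dependent attention: weighs songs in the user's history
        # against each candidate song.
        self.song_att = nn.Linear(2 * d, 1)

    def song_repr(self, user, feats):
        # user: (B, d); feats: (B, L, K, d) -> per-song representation (B, L, d)
        B, L, K, d = feats.shape
        u = user[:, None, None, :].expand(B, L, K, d)
        scores = self.feat_att(torch.cat([u, feats], dim=-1)).squeeze(-1)  # (B, L, K)
        alpha = F.softmax(scores, dim=-1)
        return (alpha.unsqueeze(-1) * feats).sum(dim=2)                    # (B, L, d)

    def forward(self, user, hist_feats, cand_feats):
        # hist_feats: (B, L, K, d) history songs; cand_feats: (B, N, K, d) candidates
        hist = self.song_repr(user, hist_feats)                            # (B, L, d)
        cand = self.song_repr(user, cand_feats)                            # (B, N, d)
        # Attend over the history conditioned on each candidate song.
        h = hist.unsqueeze(1).expand(-1, cand.size(1), -1, -1)             # (B, N, L, d)
        c = cand.unsqueeze(2).expand(-1, -1, hist.size(1), -1)             # (B, N, L, d)
        beta = F.softmax(self.song_att(torch.cat([c, h], dim=-1)).squeeze(-1), dim=-1)
        profile = (beta.unsqueeze(-1) * h).sum(dim=2)                      # (B, N, d)
        # Preference distribution over the candidate songs.
        return F.softmax((profile * cand).sum(dim=-1), dim=-1)             # (B, N)
```

The point of the per-candidate attention over the history is that different past behaviors can be weighted differently depending on which candidate song is being scored.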

Highlights

  • In recent years, researchers have proposed hybrid recommendation algorithms based on session and context [5, 6] to fully exploit the influence of unknown knowledge on music preferences and improve recommendation quality. These methods, however, only consider the user's contextual information and the temporal relationships within the current session at a global level and lack an understanding of fine-grained features.

  • This paper proposes a recommendation model based on multilayer attention representation (HARM), which uses user-attribute information and song content information to learn embedded representations of songs from multiple dimensions and to mine users' preference features for songs.

  • The main contributions of the paper include the following: we propose a song recommendation model based on multilayer attention representation which comprises three parts.


Summary

Related Work

Both collaborative filtering-based and content filtering-based models rely on behavioral data of user-item interactions, which leads these models to focus only on users' long-term static preferences. There are, however, strong correlations and causal relations within a user's behavioral sequences. To address the computational complexity under multiple behaviors, Li et al. [9] proposed a state-space model based on sequences of item attributes to alleviate the state-space explosion problem; Kolivand et al. [10] proposed a temporal recommendation model based on matrix decomposition that generates recommendations from the interaction between a session and candidate items; and Gao et al. [11] proposed the FISM model, which performs matrix decomposition of the item-item similarity matrix without learning explicit user representations. These Markov models based on matrix decomposition consider only lower-order action relations and cannot handle higher-order interactions; at the same time, they ignore the order dependencies of behaviors within and between sessions. Sun [16] proposed VideoReach, a video recommendation system that finds a list of relevant videos based on textual, visual, and aural relevance as well as user click-through.
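For context on the FISM-style approach mentioned above, the following minimal sketch (assumed notation and parameter names, not the cited authors' code; the user bias term is omitted for brevity) scores an item for a user by aggregating learned item-item factors over the user's history, with no explicit user embedding:

```python
# Minimal sketch of FISM-style scoring: an item is scored by aggregating
# learned item-item factors over the user's interaction history.
# Variable names and the normalisation choice are illustrative assumptions.
import numpy as np

def fism_score(history, item, P, Q, b_item, alpha=0.5):
    """Score `item` for a user whose interaction history is `history` (item ids).

    P, Q: (n_items, d) latent factor matrices of the item-item model.
    b_item: (n_items,) item bias terms.
    alpha: exponent normalising the contribution of the rated-item set.
    """
    rated = [j for j in history if j != item]           # exclude the target item
    if not rated:
        return b_item[item]
    agg = P[rated].sum(axis=0) / (len(rated) ** alpha)  # normalised history factor
    return b_item[item] + agg @ Q[item]

# Example usage with random factors (illustration only).
rng = np.random.default_rng(0)
P, Q = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
b = rng.normal(size=100)
print(fism_score(history=[3, 17, 42], item=7, P=P, Q=Q, b_item=b))
```

Because the user is represented only through the items they interacted with, such models capture item-item relations well but, as noted above, ignore the order of behaviors within and between sessions.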

Recommendation Model Based on Multilayer Attention Representation
Experiments
Experimental Results and Analysis