There are many reasons people listen to music, and the type of music chosen is largely determined by the listener's current activity. For example, one may listen to one type of music while commuting, another while exercising, and yet another while relaxing. Lacking access to the listener's physiological state, current music recommendation methods rely on collaborative filtering (recommending music that similar users listen to) and content-based filtering (recommending songs similar to those the user already prefers). With the rise in popularity of smart devices and activity trackers, physiological context becomes a new signal to inform music recommendations. We propose deep learning solutions for context-aware recommendation and playlist generation. Specifically, we use variational autoencoders (VAEs) to create a song embedding. We then explore multi-task multi-layer perceptrons (MLPs) and Gaussian mixture models to recommend songs based on context. We generate artificial user data to train and test our models in both online learning and supervised learning settings.
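The abstract does not give implementation details, but the embedding step it names is a standard construction. As a minimal sketch, assuming each song is summarized by a fixed-length feature vector, a generic VAE might look like the following; all names, layer sizes, and dimensions here are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SongVAE(nn.Module):
    """Generic VAE over per-song feature vectors (illustrative sizes)."""

    def __init__(self, feature_dim=128, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU())
        self.fc_mu = nn.Linear(64, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(64, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, feature_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard-normal prior.
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

After training, the latent mean `mu` would serve as the song embedding that the downstream context-aware recommenders (the multi-task MLPs and Gaussian mixture models) consume.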