Abstract

To address two main limitations of current content-based music recommendation approaches, an ordinal regression algorithm for music recommendation that incorporates dynamic information is presented. Instead of assuming that the local spectral features within a song are independently and identically distributed samples from an underlying probability density, music is characterized by a vocabulary of acoustic segment models (ASMs), which are learned in an unsupervised fashion. Further, instead of classifying music into subjective classes such as genre, or attempting to find a universal notion of similarity, songs are classified according to personal preference ratings. The ordinal regression approach used to predict the ratings is based on the discriminative-training algorithm known as minimum classification error (MCE) training. Experimental results indicate that the improved temporal modeling leads to superior performance over standard spectral-based music representations. Further, the MCE-based preference ratings algorithm is shown to outperform two other systems. Analysis indicates that this advantage stems from MCE being a non-conservative algorithm that is insensitive to outliers.
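
The abstract does not spell out the MCE formulation, but the general idea of smoothed minimum-classification-error training can be sketched. The fragment below is a minimal illustration only, assuming each song has already been reduced to a fixed-length feature vector (for example, a histogram of ASM tokens) and that each rating level is given a linear discriminant; the function names, the temperature eta, the sigmoid slope gamma, and the plain gradient-descent loop are illustrative assumptions rather than the authors' implementation, and the paper's actual ordinal-regression details differ.

# Illustrative sketch only: smoothed MCE training of one linear discriminant
# per rating class over fixed-length song feature vectors (e.g., ASM token
# histograms). Names and hyperparameters are assumptions, not the paper's.
import numpy as np


def mce_loss_and_grad(W, X, y, eta=4.0, gamma=1.0):
    """Smoothed minimum classification error loss and its gradient.

    W : (K, D) weights, one discriminant g_k(x) = W[k] . x per rating class
    X : (N, D) song-level feature vectors
    y : (N,)   integer rating labels in {0, ..., K-1}
    """
    N, K = X.shape[0], W.shape[0]
    scores = X @ W.T                                  # g_k(x) for every song/class
    g_true = scores[np.arange(N), y]                  # score of the labeled rating

    # Soft (log-sum-exp) approximation of the strongest competing rating.
    mask = np.ones((N, K), dtype=bool)
    mask[np.arange(N), y] = False
    rivals = scores[mask].reshape(N, K - 1)
    m = rivals.max(axis=1, keepdims=True)
    g_rival = np.log(np.exp(eta * (rivals - m)).mean(axis=1)) / eta + m.ravel()

    d = g_rival - g_true                              # misclassification measure
    s = 1.0 / (1.0 + np.exp(-gamma * d))              # sigmoid-smoothed 0/1 loss

    # Gradient: chain rule through the sigmoid and the log-sum-exp.
    coeff = gamma * s * (1.0 - s)                     # d(loss)/d(d) per song
    w = np.exp(eta * (scores - scores.max(axis=1, keepdims=True)))
    w[np.arange(N), y] = 0.0
    w /= w.sum(axis=1, keepdims=True)                 # softmax over rival classes
    dd_dg = w
    dd_dg[np.arange(N), y] = -1.0                     # true class pushes d down
    grad = (coeff[:, None] * dd_dg).T @ X / N
    return s.mean(), grad


def train_mce(X, y, num_ratings, epochs=200, lr=0.1, seed=0):
    """Plain gradient descent on the smoothed MCE objective."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((num_ratings, X.shape[1]))
    for _ in range(epochs):
        _, grad = mce_loss_and_grad(W, X, y)
        W -= lr * grad
    return W


def predict_rating(W, x):
    """Assign the rating whose discriminant scores the song highest."""
    return int(np.argmax(W @ x))

As gamma grows, the sigmoid approaches a step function and the smoothed objective approaches the raw classification-error count, which is the property that distinguishes MCE training from conventional likelihood-based training.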
