Abstract
The need to recognize music emotion has become apparent in many music information retrieval applications. In addition to the large pool of techniques already developed in machine learning and data mining, various emerging applications have led to a wealth of newly proposed techniques. In the music information retrieval community, many studies and applications have concentrated on tag-based music recommendation. A limitation of music emotion tags is the ambiguity that arises when a single tag covers too many subcategories. To overcome this, multiple tags can be used simultaneously to specify music clips more precisely. In this paper, we propose a novel technique to rank the proper tag combinations based on the acoustic similarity of music clips.
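To illustrate the general idea, the following is a minimal sketch of ranking tag combinations by acoustic similarity: each combination of tags is scored by how acoustically cohesive the clips carrying all of those tags are. The feature representation (toy vectors standing in for, e.g., mean MFCCs), the cosine similarity measure, and the cohesion score are illustrative assumptions; the paper's actual features and ranking criterion may differ.

```python
from itertools import combinations
import numpy as np


def cosine_similarity(a, b):
    # Cosine similarity between two acoustic feature vectors (assumption:
    # clips are represented as fixed-length vectors).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def rank_tag_combinations(clip_features, clip_tags, combo_size=2):
    """Score each tag combination by the average pairwise acoustic
    similarity of the clips that carry all tags in the combination."""
    all_tags = sorted({t for tags in clip_tags.values() for t in tags})
    scores = {}
    for combo in combinations(all_tags, combo_size):
        members = [cid for cid, tags in clip_tags.items() if set(combo) <= tags]
        if len(members) < 2:
            continue  # need at least two clips to measure cohesion
        sims = [cosine_similarity(clip_features[a], clip_features[b])
                for a, b in combinations(members, 2)]
        scores[combo] = float(np.mean(sims))
    # Higher cohesion -> the combination picks out a more acoustically
    # consistent (i.e., less ambiguous) subset of clips.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Example usage with toy feature vectors and tag sets (hypothetical data).
features = {
    "clip1": np.array([0.9, 0.1]), "clip2": np.array([0.8, 0.2]),
    "clip3": np.array([0.1, 0.9]), "clip4": np.array([0.2, 0.8]),
}
tags = {
    "clip1": {"happy", "energetic"}, "clip2": {"happy", "energetic"},
    "clip3": {"sad", "calm"},        "clip4": {"sad", "energetic"},
}
print(rank_tag_combinations(features, tags))
```

In this sketch, a combination such as ("energetic", "happy") ranks highly because the clips sharing both tags are acoustically close, whereas a combination whose member clips are acoustically scattered ranks lower.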