Abstract

Music is a medium for emotional artistic expression, and different listeners understand the same music differently. Music emotion recognition (MER) has thus become a novel branch of computer music. The goal of this essay is to investigate the performance of established CNN architectures, such as AlexNet and VGG16, in recognizing the emotions contained in a song. The CAL500 dataset is used because it covers a variety of genres. The audio is transformed into spectrograms, so that the task can be treated as image recognition. The investigation found that these architectures overfit within a few training batches. A possible explanation is that the models have too many parameters for a simple regression task. This research offers some insight into how a CNN, a network originally designed for image classification, behaves on this task. Recognizing emotions from spectrograms may require less complex CNN models, or new models specialized for such tasks.
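To make the audio-to-spectrogram step concrete, the following is a minimal sketch of how such a conversion is commonly done with librosa. The abstract does not specify the exact parameters or tooling the authors used, so the file path, sample rate, and mel settings below are illustrative assumptions, not the paper's actual pipeline.

```python
import librosa
import numpy as np

# Load one audio clip (path is hypothetical; CAL500 consists of 500 songs)
y, sr = librosa.load("song.wav", sr=22050)

# Compute a mel-scaled spectrogram; n_fft, hop_length, and n_mels are
# common defaults, not values stated in the abstract.
S = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128
)

# Convert power to decibels, the usual input representation when a
# spectrogram is fed to an image-classification CNN.
S_db = librosa.power_to_db(S, ref=np.max)

print(S_db.shape)  # (128, time_frames) -- treated as a one-channel "image"
```

The resulting 2-D array can then be resized or tiled to the input shape expected by AlexNet or VGG16.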
