Music can express and evoke strong emotions, yet recognizing those emotions accurately with computational models remains extremely challenging, and the difficulty grows considerably when a musical passage conveys several complex emotions at once. This work presents a systematic pipeline that combines several state-of-the-art techniques for music emotion classification. A Kaggle-sourced music emotion dataset supplies a wide range of audio samples covering diverse emotional expressions. Spectral subtraction is applied as a noise-reduction step, suppressing unwanted background noise and improving the clarity of the audio signals before analysis. A Quantum Convolutional Neural Network (QCNN) then performs feature extraction, exploiting its capacity to capture the complex patterns in the audio that are essential for emotion recognition. Finally, a Flexible Runge-Kutta-Optimized Multilayer Perceptron (FRKO-MLP) classifies the emotions, with its training procedure optimized to yield accurate predictions. Together, noise reduction, quantum feature extraction, and the optimized classifier aim to increase the robustness and accuracy of music emotion recognition systems. Experimental results on the chosen dataset show that the proposed model substantially improves both the efficiency and the accuracy of emotion classification in music: the FRKO-MLP attains 93.21% accuracy, 90.52% precision, 87.34% sensitivity, and an 89.72% F1-score, outperforming traditional methods at identifying emotions.
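To make the noise-reduction step concrete, the sketch below shows a minimal, generic form of spectral subtraction in Python. The paper does not specify an implementation; the use of `librosa`, the frame parameters (`n_fft`, `hop_length`), and the assumption that the opening `noise_duration` seconds are noise-only are all illustrative choices, not details from the study.

```python
import numpy as np
import librosa

def spectral_subtraction(audio, sr, noise_duration=0.5, n_fft=2048, hop_length=512):
    """Suppress stationary background noise by subtracting an estimated
    noise magnitude spectrum from each STFT frame. Parameter values are
    illustrative placeholders, not the paper's settings."""
    # Short-time Fourier transform of the noisy signal
    stft = librosa.stft(audio, n_fft=n_fft, hop_length=hop_length)
    magnitude, phase = np.abs(stft), np.angle(stft)

    # Estimate the noise spectrum from an assumed noise-only leading segment
    noise_frames = max(1, int(noise_duration * sr / hop_length))
    noise_profile = magnitude[:, :noise_frames].mean(axis=1, keepdims=True)

    # Subtract the noise estimate and clip negatives (half-wave rectification)
    cleaned = np.maximum(magnitude - noise_profile, 0.0)

    # Reconstruct the time-domain signal using the original phase
    return librosa.istft(cleaned * np.exp(1j * phase), hop_length=hop_length)
```

In practice the denoised signal would then be framed and passed on to the feature-extraction stage.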
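For the QCNN stage, the following is a minimal sketch of the general quantum-convolutional pattern (alternating parameterized two-qubit "convolution" units and pooling by discarding qubits), written with PennyLane. The framework, circuit depth, embedding, and qubit count are all assumptions for illustration; the paper's actual QCNN architecture may differ.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def conv_layer(theta, wires):
    # Parameterized two-qubit "convolution" unit on neighbouring qubit pairs
    for i in range(0, len(wires) - 1, 2):
        qml.RY(theta[0], wires=wires[i])
        qml.RY(theta[1], wires=wires[i + 1])
        qml.CNOT(wires=[wires[i], wires[i + 1]])

@qml.qnode(dev)
def qcnn(features, params):
    # Encode classical audio features as single-qubit rotation angles
    qml.AngleEmbedding(features, wires=range(n_qubits))
    conv_layer(params[0], list(range(n_qubits)))
    # Toy "pooling": drop half the qubits and convolve the remainder
    conv_layer(params[1], [0, 2])
    # Read out one expectation value as an extracted feature
    return qml.expval(qml.PauliZ(0))

# Example call with random illustrative inputs
features = np.random.uniform(0, np.pi, n_qubits)
params = np.random.uniform(0, np.pi, (2, 2))
print(qcnn(features, params))
```

A real pipeline would measure several such observables per frame to build a feature vector for the downstream classifier.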
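The FRKO-MLP's optimization scheme is specific to the paper and is not described in the abstract. One plausible reading of a Runge-Kutta-based weight update, shown purely as a hypothetical sketch, is to integrate the gradient-flow ODE dw/dt = -&nabla;L(w) with a classical fourth-order step; the function name, `grad_fn`, and step size `h` below are all invented for illustration.

```python
import numpy as np

def rk4_weight_step(w, grad_fn, h):
    """One fourth-order Runge-Kutta step on the gradient-flow ODE
    dw/dt = -grad L(w), used here as an illustrative optimizer update."""
    k1 = -grad_fn(w)
    k2 = -grad_fn(w + 0.5 * h * k1)
    k3 = -grad_fn(w + 0.5 * h * k2)
    k4 = -grad_fn(w + h * k3)
    return w + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy check on a quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w
w = np.array([1.0, -2.0])
for _ in range(50):
    w = rk4_weight_step(w, lambda w: w, h=0.1)
print(w)  # converges toward the minimum at the origin
```

Applied to an MLP, `w` would be the flattened network weights and `grad_fn` the backpropagated loss gradient; the "flexible" variant in the paper presumably adapts this update further.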