Abstract

The rapid advancement of communication and information technology has driven the expansion of digital music. Music feature extraction and classification have recently emerged as a research hotspot because it is difficult to quickly and accurately retrieve the music that consumers are looking for from large music repositories. Traditional approaches to music classification rely heavily on a wide variety of hand-engineered aural features. In this research, we propose a novel approach to selecting the musical genre from user playlists using a machine learning model for classification and feature selection. We collect information on the playlist's music genres and user history, then filter and normalise the data and remove missing values. Features are then selected from this data using a convolutional belief transfer Gaussian model (CBTG) and a fuzzy recurrent adversarial encoder neural network (FRAENN). The experimental evaluation on several music genre selection datasets reports training accuracy, mean average precision, F1 score, root mean squared error (RMSE), and area under the curve (AUC). Results show that this model both achieves a respectable classification result and extracts valuable feature representations of songs across a wide variety of criteria.
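As an illustrative sketch only (not the authors' implementation), the preprocessing and evaluation steps summarised above could be prototyped with standard tooling as shown below. The file name, the "genre" label column, and the stand-in classifier are assumptions; the CBTG feature selector and FRAENN classifier themselves are not reproduced here.

# Minimal sketch of the preprocessing and evaluation workflow described in the
# abstract. A generic classifier stands in for the proposed CBTG + FRAENN
# pipeline, purely for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Hypothetical table of playlist/user-history features with a "genre" label.
df = pd.read_csv("playlist_features.csv")

# Filter the data and eliminate rows with missing values.
df = df.dropna()

X = df.drop(columns=["genre"]).to_numpy(dtype=float)
y = df["genre"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Normalise the features.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Placeholder classifier standing in for the proposed model.
clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)

# Report a subset of the metrics named in the abstract.
print("accuracy:", accuracy_score(y_test, y_pred))
print("macro F1:", f1_score(y_test, y_pred, average="macro"))
print("AUC (one-vs-rest):", roc_auc_score(y_test, y_prob, multi_class="ovr"))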
