This research introduces a novel music recommendation system that uses deep learning to address long-standing weaknesses of traditional recommendation methods: the cold-start problem, limited recommendation diversity, and slow adaptation to evolving user preferences. The proposed model applies Convolutional Neural Networks (CNNs) to genre recognition, using Harmonic-Percussive Source Separation (HPSS) to extract rich audio features that capture fine-grained musical distinctions between genres. Combined with user interaction data, these features allow the model to deliver personalized recommendations grounded in individual listening habits. In experiments, the system achieves a genre classification accuracy of 92% and outperforms conventional collaborative filtering and content-based baselines in both recommendation accuracy and diversity, approaches that often fail to produce relevant suggestions in dynamic user environments. The findings indicate that deep learning, and CNNs in particular, can mitigate data-sparsity issues and support more adaptive, user-centered recommendation. Moreover, the system's ability to incorporate real-time user interaction data improves engagement, since recommendations remain aligned with changing individual preferences. Future work will focus on broadening the dataset's cultural and regional diversity and on optimizing computational efficiency, so that the model scales to global audiences with diverse musical tastes. Ultimately, the proposed system contributes a more accurate, personalized, and engaging music recommendation framework, advancing the field of music information retrieval.
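To make the feature-extraction pipeline described above concrete, the sketch below pairs HPSS with a small CNN genre classifier. This is a minimal, hypothetical reconstruction rather than the authors' published code: the librosa calls (`librosa.effects.hpss`, `librosa.feature.melspectrogram`) are standard, but the network architecture, input dimensions, and the 10-genre output (a GTZAN-style assumption) are illustrative choices not specified in the abstract.

```python
# Minimal sketch: HPSS-based feature extraction feeding a small CNN
# genre classifier. Architecture and hyperparameters are illustrative
# assumptions, not the paper's exact configuration.
import librosa
import numpy as np
import torch
import torch.nn as nn

def hpss_features(path, sr=22050, n_mels=128, duration=30.0):
    """Load audio, split into harmonic/percussive components, and
    stack their log-mel spectrograms as a 2-channel 'image'."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y_harm, y_perc = librosa.effects.hpss(y)
    mels = []
    for component in (y_harm, y_perc):
        m = librosa.feature.melspectrogram(y=component, sr=sr, n_mels=n_mels)
        mels.append(librosa.power_to_db(m, ref=np.max))
    return torch.tensor(np.stack(mels), dtype=torch.float32)  # (2, n_mels, T)

class GenreCNN(nn.Module):
    """Small CNN over stacked harmonic/percussive spectrograms."""
    def __init__(self, n_genres=10):  # genre count is an assumption
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling handles variable clip length
        )
        self.fc = nn.Linear(32, n_genres)

    def forward(self, x):            # x: (batch, 2, n_mels, T)
        h = self.conv(x).flatten(1)  # (batch, 32)
        return self.fc(h)            # genre logits

# Usage sketch (file path is a placeholder):
# x = hpss_features("track.wav").unsqueeze(0)  # add batch dimension
# genre = GenreCNN()(x).argmax(dim=1)
```

Stacking the harmonic and percussive spectrograms as separate input channels is one plausible way to hand the CNN both timbral and rhythmic structure at once; the abstract does not specify how the HPSS components are combined, so this channel layout is likewise an assumption.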