Abstract

Combining images with music is a form of music visualization that deepens the knowledge and understanding of music information. This study briefly introduced the concept of music visualization and used a convolutional neural network (CNN) and long short-term memory (LSTM) to pair music and images for music visualization. An emotion classification term was then added to the loss function to make full use of the emotional information in the music and images. Finally, simulation experiments were performed. The results showed that the improved deep learning-based music visualization algorithm had the highest matching accuracy when the weight of the emotion classification loss was 0.2; compared with the traditional keyword matching method and the unimproved deep learning music visualization algorithm, the improved algorithm matched more suitable images.
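
The abstract does not spell out the exact form of the combined objective, but its description (a CNN for images, an LSTM for music, and an emotion classification term weighted at 0.2) suggests a weighted sum of a matching loss and a classification loss. The PyTorch sketch below illustrates that reading; the encoder layouts, the cosine-embedding matching loss, and names such as ImageEncoder and total_loss are illustrative assumptions, not the authors' confirmed design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """CNN that maps an RGB image to an embedding (illustrative layout)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):                 # x: (batch, 3, H, W)
        return self.fc(self.features(x))  # (batch, embed_dim)

class MusicEncoder(nn.Module):
    """LSTM that maps a sequence of audio features to an embedding."""
    def __init__(self, feat_dim=40, embed_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, embed_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)          # final hidden state summarizes the clip
        return h[-1]                      # (batch, embed_dim)

def total_loss(img_emb, mus_emb, emo_logits, emo_labels, lam=0.2):
    """Matching loss plus emotion classification loss, weighted by lam.

    lam = 0.2 is the weight the paper reports as best; the cosine-embedding
    matching loss is an assumed stand-in for the paper's pairing objective.
    """
    target = torch.ones(img_emb.size(0), device=img_emb.device)  # matched pairs
    match = F.cosine_embedding_loss(img_emb, mus_emb, target)
    emotion = F.cross_entropy(emo_logits, emo_labels)
    return match + lam * emotion
```

Under this reading, the weight lam trades off direct pairing accuracy against agreement on emotion, which is consistent with an intermediate value such as 0.2 performing best.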

Highlights

  • Music is an art form and an acoustic way of expressing emotional thoughts

  • Li and Li [5] constructed a music visualization model based on graphic images and mathematical statistics, combining statistical methods such as K-means clustering and fusion decision trees, to address current shortcomings in the field of music visualization

  • The loss function used to supervise training of the deep learning-based music visualization algorithm was improved to make full use of the emotional information in the music and images

Summary

Introduction

Music is an art form and an acoustic way of expressing emotional thoughts. It uses changes in the rhythm and pitch of sounds to convey information; when people receive this information, they can appreciate the rhythm and melody and feel changes in emotion [1]. Music visualization can connect people’s auditory and visual senses, so that people can perceive the information contained in music more intuitively [2]. Plewa and Kostek [4] proposed a graphical representation of song emotions based on self-organizing maps and created a map in which music excerpts with similar moods were placed next to each other on a two-dimensional display. Lopez-Rincon and Starostenko [6] proposed a method to normalize data in musical instrument digital interface (MIDI) files by 12-dimensional …
