Abstract

In this paper, we present an automated music video generation framework based on emotion synchronization between video and music. After a user uploads a video or a piece of music, the framework segments it and predicts the emotion of each segment. The preprocessing results are stored in the server's database. The user can then select a set of videos and music from the database, and the framework generates a music video. For each music segment, the system finds the most closely associated video segment by comparing low-level features and emotion differences. We compare our work to a similar music video generation method through a user preference study and show that our method produces results that users prefer.
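
The segment-matching step described above can be illustrated with a minimal sketch. The segment representation, the Euclidean distances, and the weighting factor `alpha` below are illustrative assumptions, not the paper's actual features or cost function.

```python
import numpy as np

def match_segments(music_segments, video_segments, alpha=0.5):
    """For each music segment, return the index of the best-matching video segment.

    Each segment is assumed to be a dict with:
      'emotion'  : np.ndarray, predicted emotion vector (e.g. valence/arousal)
      'features' : np.ndarray, low-level feature vector
    """
    matches = []
    for m in music_segments:
        best_idx, best_cost = None, float("inf")
        for i, v in enumerate(video_segments):
            # Combine emotion difference and low-level feature difference
            # into a single matching cost (weighting is an assumption).
            emotion_diff = np.linalg.norm(m["emotion"] - v["emotion"])
            feature_diff = np.linalg.norm(m["features"] - v["features"])
            cost = alpha * emotion_diff + (1 - alpha) * feature_diff
            if cost < best_cost:
                best_idx, best_cost = i, cost
        matches.append(best_idx)
    return matches
```

In this sketch, each music segment is greedily paired with the video segment that minimizes a weighted sum of the two distances; the actual system may use different features, distance measures, or assignment constraints.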
