Abstract
In recent years, artificial intelligence has received widespread attention in the field of artistic creation. This paper proposes using intelligent software to generate music that matches the audience's emotions as they watch a musical performance, thereby driving the development of the plot, and introducing a virtual digital human into the performance to increase interactivity between characters. Both the automatic music generation software and the virtual digital human must stay synchronized with the rhythm of the performance. Accordingly, this paper proposes a multimodal emotion recognition model based on deep learning, which recognizes the audience's emotions in real time so that the generated music matches the plot development and the character interactions remain plausible. In simulation experiments, the average recognition rate of the multimodal model with decision-level fusion is 84.8%, slightly higher than that of feature-level fusion and much higher than the average recognition rates of single-modal speech emotion (67.5%) and facial expression (78.5%) recognition. In audience surveys, 76% and 73% of respondents liked the “music” and “character” elements of the AI-assisted musical performance, respectively, and 62% expressed a desire to watch it again.
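The abstract contrasts decision-level fusion with feature-level fusion of speech and facial-expression emotion recognizers. As a minimal sketch of decision-level fusion (not the paper's actual model), assume each modality outputs a softmax probability vector over the same emotion classes and the fused decision is a weighted average of those vectors; the class names, weights, and function names below are hypothetical.

```python
import numpy as np

# Hypothetical emotion label set; the paper's actual classes are not given in the abstract.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def decision_level_fusion(speech_probs: np.ndarray,
                          face_probs: np.ndarray,
                          speech_weight: float = 0.4,
                          face_weight: float = 0.6) -> str:
    """Fuse per-modality class probabilities at the decision level.

    Each input is a softmax output over the same emotion classes;
    the weights are illustrative, not values reported in the paper.
    """
    fused = speech_weight * speech_probs + face_weight * face_probs
    fused /= fused.sum()  # renormalize so the fused scores form a distribution
    return EMOTIONS[int(np.argmax(fused))]

# Example: the face model is confident in "happy", the speech model leans "neutral".
speech = np.array([0.30, 0.05, 0.05, 0.10, 0.50])
face = np.array([0.60, 0.05, 0.05, 0.10, 0.20])
print(decision_level_fusion(speech, face))  # -> "happy"
```

In contrast, feature-level fusion would concatenate the two modalities' feature vectors before a single classifier; the decision-level variant keeps the per-modality models independent and combines only their outputs.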