Abstract

Music is one of the most important forms of human emotional expression. With artificial intelligence methods and technologies, a machine can quickly recognize optical scores and real-time music audio, infer and optimize music emotion models to capture how humans express emotion through music, and actively carry out human-machine collaboration in intelligent composition and related services. The development of multi-source perception in affective artificial intelligence therefore has significant research value and practical importance. This article integrates several existing artificial intelligence models (typed music models, Markov chain models, genetic algorithm models, neural network models, etc.) and refines their rule knowledge to form a practical and effective hybrid model. Its innovation lies in combining formula-based rules with matrix data to intelligently compose non-note units (recurring structures) in automatic composition, so as to produce music that is harmonious, listenable, and intelligent.
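As a minimal illustration of one component the abstract names, a first-order Markov chain model of melody can be sketched as follows. The seed melody, note names, and transition logic here are invented for demonstration and are not the paper's hybrid model; a real system would estimate transitions from a corpus and combine them with the other models mentioned.

```python
import random

# Illustrative first-order Markov chain over note names (hypothetical data).
def build_transitions(melody):
    """Count note-to-note transitions observed in a melody."""
    table = {}
    for cur, nxt in zip(melody, melody[1:]):
        table.setdefault(cur, []).append(nxt)
    return table

def generate(table, start, length, rng=random):
    """Sample a melody by walking the transition table from a start note."""
    notes = [start]
    for _ in range(length - 1):
        choices = table.get(notes[-1])
        if not choices:          # dead end: fall back to the start note
            choices = [start]
        notes.append(rng.choice(choices))
    return notes

if __name__ == "__main__":
    seed = ["C4", "D4", "E4", "C4", "E4", "G4", "E4", "D4", "C4"]
    table = build_transitions(seed)
    print(generate(table, "C4", 8))
```

Sampling repeatedly from such a table yields melodies that locally resemble the training material, which is why Markov chains are a common baseline component in hybrid automatic-composition systems.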
