Abstract
A Bayesian dynamic model is developed for complex sequential data, with a focus on audio signals from music. The music is represented as a sequence of discrete observations, and the sequence is modeled using a hidden Markov model (HMM) with time-evolving parameters. The model imposes the belief that temporally proximate observations are more likely to be drawn from HMMs with similar parameters, while also allowing for "innovation" associated with abrupt changes in musical texture. Segmentation of a given musical piece is obtained via model inference, and the results are compared with those of other models and with a conventional music-theoretic analysis.
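To illustrate the generative idea described above, here is a minimal sketch (not the paper's actual model or inference procedure): a discrete observation sequence is generated segment by segment, where each segment's HMM parameters are either a small perturbation of the previous segment's (temporal smoothness) or drawn fresh with some probability (innovation, i.e., an abrupt change in texture). Names and parameters such as `p_innovate` and `concentration` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hmm_params(n_states, n_symbols, rng):
    """Draw transition and emission matrices from Dirichlet priors."""
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # transition rows
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emission rows
    return A, B

def perturb(params, concentration, rng):
    """Re-draw each row centered on the previous one (temporal smoothness)."""
    A, B = params
    A_new = np.vstack([rng.dirichlet(concentration * row + 1e-3) for row in A])
    B_new = np.vstack([rng.dirichlet(concentration * row + 1e-3) for row in B])
    return A_new, B_new

def generate(n_segments=6, seg_len=50, n_states=4, n_symbols=8,
             p_innovate=0.3, concentration=50.0, rng=rng):
    """Generate a discrete observation sequence from a sequence of HMMs.

    With probability p_innovate a segment 'innovates' (fresh parameters,
    an abrupt change in texture); otherwise its parameters are a small
    perturbation of the previous segment's (temporal proximity).
    """
    obs, labels = [], []
    params = sample_hmm_params(n_states, n_symbols, rng)
    for seg in range(n_segments):
        if seg > 0 and rng.random() < p_innovate:
            params = sample_hmm_params(n_states, n_symbols, rng)  # innovation
        elif seg > 0:
            params = perturb(params, concentration, rng)          # smooth drift
        A, B = params
        z = rng.integers(n_states)                                # initial state
        for _ in range(seg_len):
            obs.append(rng.choice(n_symbols, p=B[z]))
            labels.append(seg)
            z = rng.choice(n_states, p=A[z])
    return np.array(obs), np.array(labels)

if __name__ == "__main__":
    y, segs = generate()
    print("observations:", y[:20], "...")
    print("segment ids: ", segs[:20], "...")
```

In the paper itself the segmentation is not known in advance; it is inferred from the observations under the Bayesian model, whereas this sketch only shows the forward (generative) direction.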