Abstract

Spectral models attempt to parametrize sound at the basilar membrane of the ear, thus permitting transformations closely linked to perception. However, for high-quality real-time applications, these models require methods for precise analysis and efficient synthesis. When dealing with musical sound, that is, a polyphonic mix of non-stationary complex sounds, the main challenge is to extract the individual sounds present in the mix. This can be done using (psycho)acoustical knowledge about the sound sources (the computational auditory scene analysis approach), but the resulting quality is often insufficient. When access to the compositional process is available, another option is to use parts of this ground truth as additional information (the informed analysis approach). This more precise analysis allows deeper transformations, and by exploiting psycho-acoustical considerations, efficient data structures, and algorithms, the sounds can be re-synthesized from the model parameters in real time and with high quality. This opens up impressive new applications, such as “active listening”, which enables the listener to interact with the sound while it is played: the musical parameters (e.g., loudness or spatial location) of the sound sources present in the mix can now be changed interactively.
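To make the idea of re-synthesis from spectral-model parameters concrete, the following is a minimal sketch (not the authors' implementation) of additive re-synthesis of a polyphonic mix from sinusoidal partial envelopes, with a per-source gain standing in for the interactive loudness control of active listening. All names, the data layout, and the linear-interpolation scheme are assumptions made for illustration.

```python
import numpy as np

SR = 44100  # sampling rate in Hz


def resynthesize(sources, duration, gains):
    """Sum the sinusoidal partials of every source, scaled by its gain.

    sources : list of partial lists; each partial is a dict with frame-wise
              'freqs' (Hz) and 'amps' (linear) envelopes.
    duration: output length in seconds.
    gains   : per-source loudness factors (the interactively changed parameter).
    """
    n = int(duration * SR)
    t = np.arange(n) / SR
    out = np.zeros(n)
    for partials, gain in zip(sources, gains):
        for p in partials:
            # Interpolate the frame-wise envelopes up to audio rate.
            frames = np.linspace(0.0, duration, num=len(p["freqs"]))
            freq = np.interp(t, frames, p["freqs"])
            amp = np.interp(t, frames, p["amps"])
            # Integrate the frequency trajectory to get the instantaneous phase.
            phase = 2.0 * np.pi * np.cumsum(freq) / SR
            out += gain * amp * np.sin(phase)
    return out


# Two toy "sources": a 440 Hz tone with slight vibrato and a decaying 660 Hz tone.
sources = [
    [{"freqs": [440, 445, 440], "amps": [0.3, 0.3, 0.3]}],
    [{"freqs": [660, 660, 660], "amps": [0.4, 0.2, 0.0]}],
]
mix = resynthesize(sources, duration=1.0, gains=[1.0, 0.5])  # attenuate source 2
print(mix.shape)  # (44100,)
```

In a real-time setting the gains (and, analogously, spatialization parameters) would be updated per frame rather than fixed for the whole duration; the sketch only shows how the model parameters suffice to rebuild and re-mix the sources.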
