Abstract

Human communication includes the capability of recognition, and this is particularly true of auditory communication. Music information retrieval (MIR) remains particularly challenging, since many of its problems are still unsolved. Topics within the scope of MIR include automatic classification of musical instruments, phrases, and styles; music representation and indexing; estimating musical similarity using both perceptual and musicological criteria; recognizing music from audio and/or semantic descriptions; language modeling for music; auditory scene analysis; and others. Many features used in music content description are grounded in perceptual phenomena and cognition. However, most of the low-level descriptors used, for example, in musical instrument classification are more data- than human-oriented, because they are designed so that data can be defined and linked for more effective automatic discovery, integration, and reuse across applications. The more ambitious task, however, is to assign coherent meaning to both low- and high-level descriptors, such as timbre descriptors, and to link them together, so that data can be processed and shared by both systems and people. This paper presents a study of the timbre representation of musical instrument sounds.
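
To illustrate what is meant above by low-level, data-oriented descriptors, the sketch below computes two timbre-related audio features commonly used in instrument classification (spectral centroid and MFCCs). This is a minimal example only: the use of the librosa library, the particular features, and the file path are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (assumption, not the paper's method): extracting a few
# common low-level timbre descriptors with librosa.
import numpy as np
import librosa

# Hypothetical path to a single musical instrument sound sample.
audio_path = "instrument_sample.wav"

# Load the recording as a mono waveform at its native sampling rate.
y, sr = librosa.load(audio_path, sr=None, mono=True)

# Spectral centroid: a low-level correlate of perceived brightness.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

# MFCCs: a compact, data-oriented summary of the spectral envelope,
# widely used as a timbre feature in instrument classification.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Average the frame-wise descriptors into one feature vector per sound.
features = np.concatenate([centroid.mean(axis=1), mfcc.mean(axis=1)])
print(features.shape)  # (14,) low-level descriptors for this sound
```

Such vectors are convenient for automatic processing, but, as the abstract notes, they carry little perceptual meaning on their own, which motivates linking them to higher-level timbre descriptions.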
