Abstract

Computational modeling of auditory scene analysis (ASA) offers a new paradigm for experimentation. It permits a novel approach to the development of theories of grouping and to the design of experimental stimuli. For example: (i) grouping algorithms can be implemented and validated against experimental data; (ii) experimental data can be analyzed to suggest a possible representation in the auditory system and to test conformance with expectations; (iii) computational implementation can expose deficiencies in current theory. Over the last four years, the Sheffield Auditory Group has developed a rich set of representations for investigating computational ASA [G. J. Brown, "Computational Auditory Scene Analysis: A Representational Approach," Ph.D. thesis, University of Sheffield (1992); M. P. Cooke, Modelling Auditory Processing and Organisation (Cambridge U.P., Cambridge, UK, 1993)]. These representations include computational maps for onsets, offsets, frequency transitions, and periodicities, in addition to higher-level symbolic representations of acoustic components. Recently, an environment has been created that brings this diverse collection together into a uniform framework for display, resynthesis, and experimentation. The environment supports experimental investigation and allows the "debugging" of stimulus selection. Further, it acts as a canvas onto which the results of auditory grouping can be drawn. It also serves as a tutorial in this increasingly complex field. The practical application of these points is illustrated in a case study that maps the path from stimulus generation to grouping by listeners or by machine.
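The abstract does not give implementation details for the computational maps it mentions. As a rough, illustrative sketch only (not the authors' method), the following Python fragment shows one simple way such onset and periodicity maps are often approximated: a half-wave rectified frame-to-frame envelope difference for onsets, and short-time autocorrelation (as in a correlogram) for periodicity. The function names `onset_map` and `periodicity_map` are hypothetical.

```python
import numpy as np

def onset_map(envelopes, threshold=0.0):
    """Illustrative onset map (assumption, not the paper's algorithm):
    half-wave rectified frame-to-frame increase of each channel envelope.
    envelopes: array of shape (channels, frames)."""
    diff = np.diff(envelopes, axis=1, prepend=envelopes[:, :1])
    return np.maximum(diff - threshold, 0.0)

def periodicity_map(frames, max_lag):
    """Illustrative periodicity map: short-time autocorrelation of each
    windowed signal frame, normalized by the lag-0 energy.
    frames: array of shape (n_frames, frame_length)."""
    n = frames.shape[1]
    acf = np.stack([
        np.correlate(f, f, mode="full")[n - 1:n - 1 + max_lag]
        for f in frames
    ])
    return acf / np.maximum(acf[:, :1], 1e-12)

# Example: a 100-Hz pulse train sampled at 8 kHz should show a
# periodicity peak near a lag of 80 samples.
fs = 8000
t = np.arange(fs) / fs
signal = (np.sin(2 * np.pi * 100 * t) > 0.99).astype(float)
frame = signal[:400][np.newaxis, :]          # one 50-ms frame
pmap = periodicity_map(frame, max_lag=200)
print("strongest non-zero lag:", 20 + np.argmax(pmap[0, 20:]))
```

This is a minimal analogue of the kinds of auditory maps described; the published representations are derived from an auditory filterbank front end rather than directly from the waveform.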

