Contour representation of sound signals

Yoonseob Lim1*, Barbara G. Shinn-Cunningham1 and Timothy Gardner2

1 Boston University, Department of Cognitive and Neural Systems, United States
2 Boston University, Faculty of Electrical Engineering and Computing, United States

Continuous edges, or contours, are powerful features for object recognition, in both neural and machine vision. Similarly, auditory signals are characterized by sharp edges in some classes of time-frequency analysis. Linking these edges to form contours could be relevant for auditory signal processing. However, the mathematical foundations of a general contour representation of sound have not been established. Sinusoidal representations of voiced speech and music have been explored, but these approaches do not represent broadband signals efficiently. Here we construct a two-dimensional contour representation that is generally applicable to any time series, including sound. Starting with the Short-Time Fourier Transform (STFT), the method defines edges by coherent phase structure at local points in the time-frequency plane (zero crossings of a complex reassignment matrix). Continuous edges are grouped to form contours that follow the ridges and valleys of the traditional STFT. Local amplitudes are assigned by calculating fixed points of an iterated reassignment mapping. The representation is additive; the complex amplitudes of the contours can be summed directly to reproduce the original signal. This re-synthesis matches the original signal with a signal-to-noise ratio of 15 dB or higher, even in the challenging case of white noise. In practice, this level of precision provides perceptually equivalent representations of speech and music. For many sounds of interest, a subset of the full contour collection can provide an accurate representation. To find this compact subset, an over-complete set of contours is calculated using multiple filter bandwidths.
Contours are then ranked by power, length and curvature, and subjected to lateral inhibition from neighboring contours. The top-ranking contours in this distribution provide a sparse representation that emerges without any prior suppositions about the nature of the original signal. By combining contours from multiple bandwidths, the representation achieves high precision in both time and frequency. As such, the method is relevant to a wide range of time-frequency tasks, such as constructing receptive fields of auditory neurons, characterizing animal vocalizations, pattern recognition and signal de-noising. We speculate that neural auditory processing involves a similar contour representation. Each stage in the analysis is a plausible operation for neurons: parallel and redundant primary processing in multiple bandwidths, grouping by phase coherence, linking by continuity, and lateral inhibition.

Conference: Computational and Systems Neuroscience 2010, Salt Lake City, UT, United States, 25 Feb - 2 Mar, 2010.
Presentation Type: Poster Presentation
Topic: Poster session I
Citation: Lim Y, Shinn-Cunningham BG and Gardner T (2010). Contour representation of sound signals. Front. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2010. doi: 10.3389/conf.fnins.2010.03.00133
Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 01 Mar 2010; Published Online: 01 Mar 2010.

* Correspondence: Yoonseob Lim, Boston University, Department of Cognitive and Neural Systems, Boston, United States, yslim@bu.edu