Abstract

In many acoustic signal processing applications, human listeners are able to outperform automated processing techniques, particularly in the identification and classification of acoustic events. This paper develops a framework for employing perceptual information from human listening experiments to improve automatic classification of active sonar signals. We focus on identifying new signal features that predict the human performance observed in formal listening experiments. Using this framework, our newly identified features can raise automatic classification performance closer to the level of human listeners. We develop several new methods for learning a perceptual feature transform from human similarity measures. In addition to providing a more fundamental basis for uncovering perceptual features than previous approaches, these methods also lead to greater insight into how humans perceive sounds in a dataset. We also develop a new approach for learning a perceptual distance metric. This metric is shown to be applicable to modern kernel-based techniques used in machine learning and provides a connection between the fields of psychoacoustics and machine learning.
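The abstract's idea of learning a perceptual distance metric from human similarity measures can be illustrated with a minimal sketch. The paper's actual methods are not given here; the sketch below assumes a simple Mahalanobis-style linear transform `W`, fit by gradient descent so that distances in the transformed feature space approximate human dissimilarity ratings for pairs of sounds. The function name, parameters, and toy data are all hypothetical.

```python
import numpy as np

def learn_metric(X, pairs, ratings, lr=0.05, epochs=300):
    """Fit a linear transform W so that ||W (x_i - x_j)|| approximates
    human dissimilarity ratings (a hypothetical, illustrative method).

    X       : (n, d) array of signal feature vectors
    pairs   : list of (i, j) index pairs rated by listeners
    ratings : dissimilarity rating for each pair (larger = more different)
    """
    rng = np.random.default_rng(0)
    d = X.shape[1]
    # Start near the identity, i.e. near plain Euclidean distance.
    W = np.eye(d) + 0.01 * rng.standard_normal((d, d))
    for _ in range(epochs):
        for (i, j), r in zip(pairs, ratings):
            diff = X[i] - X[j]
            z = W @ diff
            dist = np.sqrt(z @ z) + 1e-12
            err = dist - r
            # Gradient of the squared error 0.5*(dist - r)^2 w.r.t. W.
            W -= lr * (err / dist) * np.outer(z, diff)
    return W

# Toy usage: listeners ignore the second feature dimension, so the
# learned metric should down-weight it.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pairs = [(0, 1), (0, 2), (1, 2)]
ratings = np.array([1.0, 0.1, 1.0])
W = learn_metric(X, pairs, ratings)
```

Because the learned distance is the Euclidean distance after the linear map `W`, it plugs directly into kernel-based methods, e.g. a Gaussian kernel `exp(-||W(x_i - x_j)||^2 / (2 * sigma**2))`, which is one way to read the abstract's claimed link to kernel machines.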
