Abstract

Endowing machines with sensing capabilities similar to those of humans is a long-standing goal in engineering and computer science. In the pursuit of making computers sense their surroundings, considerable effort has been devoted to enabling machines and computers to acquire, process, analyze and understand their environment in a human-like way. Focusing on the sense of hearing, the ability of computers to sense their acoustic environment as humans do is known as machine hearing. To achieve this ambitious aim, the representation of the audio signal is of paramount importance. In this paper, we present an up-to-date review of the most relevant audio feature extraction techniques developed to analyze the most common types of audio signals: speech, music and environmental sounds. Besides revisiting classic approaches for completeness, we include the latest advances in the field based on new domains of analysis together with novel bio-inspired proposals. These approaches are described following a taxonomy that organizes them according to their physical or perceptual basis, and subsequently by their domain of computation (time, frequency, wavelet, image-based, cepstral, or other domains). The description of each approach is accompanied by recent examples of its application to machine hearing problems.

Highlights

  • Endowing machines with sensing capabilities similar to those of humans is a long-pursued goal in several engineering and computer science disciplines. Ideally, we would like machines and computers to be aware of their immediate surroundings as human beings are.

  • As defined by Mitrović et al. [17], this feature is a two-dimensional representation of acoustic versus modulation frequency that is built upon a specific loudness sensation. It is obtained by Fourier analysis of the critical bands over time, incorporating a weighting stage inspired by the human auditory system.

  • This work has presented an up-to-date review of the most relevant audio feature extraction techniques related to machine hearing which have been developed for the analysis of speech, music and environmental sounds.
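The acoustic-versus-modulation-frequency representation mentioned in the highlights can be sketched in a deliberately simplified form: energies in coarse frequency bands of a short-time spectrogram stand in for the critical-band specific loudness, and the auditory weighting stage is omitted. The function name `modulation_spectrum` and all parameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

def modulation_spectrum(x, sr, n_fft=1024, hop=512, n_bands=24, seg_frames=128):
    """Simplified 2-D acoustic vs. modulation frequency representation.

    Simplifications relative to the feature described by Mitrović et al. [17]:
    equal-width band energies replace critical-band specific loudness, and the
    perceptual weighting stage is omitted.
    """
    # Short-time power spectrogram (Hann-windowed frames)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2

    # Group FFT bins into coarse "critical" bands (equal-width for brevity)
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    bands = np.stack([spec[:, a:b].sum(axis=1)
                      for a, b in zip(edges[:-1], edges[1:])], axis=1)

    # Fourier analysis of each band's energy trajectory over time yields
    # the modulation-frequency axis of the 2-D representation.
    seg = bands[:seg_frames]                 # analyze one temporal segment
    mod = np.abs(np.fft.rfft(seg, axis=0))   # shape: (modulation freq, band)

    frame_rate = sr / hop
    mod_freqs = np.fft.rfftfreq(seg.shape[0], d=1.0 / frame_rate)
    return mod, mod_freqs
```

For instance, a 1 kHz tone whose amplitude is modulated at 4 Hz produces a clear peak near the 4 Hz modulation-frequency bin in the band containing the carrier.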


Summary

Introduction

Endowing machines with sensing capabilities similar to those of humans (such as vision, hearing, touch, smell and taste) is a long-pursued goal in several engineering and computer science disciplines. As the reader may have deduced, machine hearing is an extremely complex and daunting task given the wide diversity of possible audio inputs and application scenarios. For this reason, it is typically subdivided into smaller subproblems, and most research efforts are focused on solving simpler, more specific tasks. Other kinds of sound sources in our environment (e.g., traffic noise, sounds from animals in nature, etc.) do not exhibit such particularities, or at least not in such a clear way. These sounds, which are neither speech nor music (hereafter denoted as environmental sounds), should be detectable and recognizable by hearing machines as individual events (Chu et al. [14]). Given the importance of relating the nature of the signal with the type of extracted features, we detail the primary characteristics of the three most frequent types of signals involved in machine hearing applications: speech, music and environmental sounds.

Machine Hearing
Architecture of Machine Hearing Systems
Audio Features Taxonomy and Review of Extraction Techniques
Time Domain Physical Features
Zero-Crossing Rate-Based Physical Features
Amplitude-Based Features
Power-Based Features
Rhythm-Based Physical Features
Frequency Domain Physical Features
Autoregression-Based Frequency Features
STFT-Based Frequency Features
Brightness-Related Physical Frequency Features
Tonality-Related Physical Frequency Features
Chroma-Related Physical Frequency Features
Spectrum Shape-Related Physical Frequency Features
Wavelet-Based Physical Features
Image Domain Physical Features
Cepstral Domain Physical Features
Other Domains
Perceptual Audio Features Extraction Techniques
Zero-Crossing Rate-Based Perceptual Features
Perceptual Autocorrelation-Based Features
Rhythm Pattern
Frequency Domain Perceptual Features
Modulation-Based Perceptual Frequency Features
Brightness-Related Perceptual Frequency Features
Tonality-Related Perceptual Frequency Features
Loudness-Related Perceptual Frequency Features
Roughness-Related Perceptual Frequency Features
Wavelet-Based Perceptual Features
Multiscale Spectro-Temporal-Based Perceptual Features
Image Domain Perceptual Features
Cepstral Domain Perceptual Features
Perceptual Filter Banks-Based Cepstral Features
Autoregression-Based Cepstral Features
Conclusions

