Abstract

Vertebrates separate audition into peripheral, brainstem, and forebrain stages. These stages are anatomically and theoretically separable, and our understanding of hearing has improved by analyzing them as such. Here, I outline a strategy for structuring acoustic feature extraction that is inspired by the tiered structure of the brain. The system aims to work online in close to real time, using a modest amount of memory and processing power. Another goal is to maintain a domain-appropriate conceptual connection between the features we measure and the physical and/or biological processes we monitor. We do this by breaking feature extraction into stages based on context. Some features are useful across many contexts, or for distinguishing among contexts; those should be extracted at every time step of the analysis. Other features are relevant only within specific contexts, so computational resources should be spent on extracting them only when those contexts apply. This tiered computational strategy yields multiple points of introspection about the quantitative relationships between continuous features and discrete contexts. We can use this quantitative introspection to analyze how well our modules generalize to other contexts, and to validate empirical models of how those contexts work.
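To make the tiered strategy concrete, the sketch below shows one possible shape it could take: a cheap always-on tier computes a few features at every frame, a context is inferred from them, and heavier context-specific extractors run only when their context is active. All function names, feature choices, and thresholds here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical tiered extractor: names, features, and thresholds are
# illustrative stand-ins, not taken from the paper.

def base_features(frame, sr):
    """Cheap features computed at every time step (always-on tier)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    rms = float(np.sqrt(np.mean(frame ** 2)))
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return {"rms": rms, "centroid": centroid}

def infer_context(feats):
    """Toy context rule: fixed thresholds stand in for a learned classifier."""
    if feats["rms"] < 0.01:
        return "silence"
    return "tonal" if feats["centroid"] < 2000.0 else "broadband"

# Context-specific extractors, run only when their context is active.
CONTEXT_EXTRACTORS = {
    "tonal": lambda frame, sr: {
        "peak_bin": int(np.argmax(np.abs(np.fft.rfft(frame))))
    },
    "broadband": lambda frame, sr: {
        "zero_crossings": int(np.sum(np.diff(np.sign(frame)) != 0))
    },
}

def process_frame(frame, sr=16000):
    """One analysis step: always-on tier, context decision, gated tier."""
    feats = base_features(frame, sr)
    context = infer_context(feats)
    extra = CONTEXT_EXTRACTORS.get(context, lambda f, s: {})(frame, sr)
    return context, {**feats, **extra}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(1024) * 0.1  # stand-in for one audio frame
    print(process_frame(frame))
```

Because the context decision and the gated extractors are explicit, intermediate values (the always-on features, the inferred context, and the context-specific outputs) can each be logged and inspected, which is the kind of introspection point the abstract refers to.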
