Abstract

In this paper we present the Active Inference-based Design Agent (AIDA), which iteratively designs a personalized audio processing algorithm through situated interactions with a human client. The target application of AIDA is to propose, on the spot, the most interesting alternative values for the tuning parameters of a hearing aid (HA) algorithm whenever a HA client is not satisfied with their HA performance. AIDA interprets the search for the “most interesting alternative” as a problem of optimal (acoustic) context-aware Bayesian trial design. In computational terms, AIDA is realized as an active inference-based agent with an Expected Free Energy criterion for trial design. This architecture is inspired by neuro-economic models of efficient (Bayesian) trial design in brains and implies that AIDA comprises generative probabilistic models for acoustic signals and user responses. We propose a novel generative model for acoustic signals as a sum of time-varying auto-regressive filters, and a user response model based on a Gaussian Process Classifier. The full AIDA agent has been implemented in a factor graph for the generative model, and all tasks (parameter learning, acoustic context classification, trial design, etc.) are realized by variational message passing on the factor graph. All verification and validation experiments and demonstrations are freely accessible at our GitHub repository.
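
To make the signal model above concrete, here is a minimal, hedged sketch in Python of a generative model in that spirit: the observed acoustic signal is simulated as the sum of two time-varying auto-regressive (TVAR) processes, one for speech and one for context-dependent background noise. This is an illustration under our own assumptions, not the authors' implementation; the filter orders, coefficient drift rates, and noise levels are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def simulate_tvar(n_samples, order, coef_drift_std, excitation_std):
    """Simulate a time-varying AR(order) process x_t = a_t^T [x_{t-1}, ..., x_{t-order}] + e_t,
    where the coefficient vector a_t follows a slow Gaussian random walk."""
    coeffs = rng.normal(0.0, 0.1, size=order)        # initial AR coefficients
    x = np.zeros(n_samples)
    for t in range(order, n_samples):
        coeffs = coeffs + rng.normal(0.0, coef_drift_std, size=order)   # slow coefficient drift
        x[t] = coeffs @ x[t - order:t][::-1] + rng.normal(0.0, excitation_std)
    return x

n = 16_000
speech = simulate_tvar(n, order=6, coef_drift_std=1e-4, excitation_std=1.0)   # speech source
noise = simulate_tvar(n, order=2, coef_drift_std=1e-5, excitation_std=0.5)    # background noise source
observation = speech + noise   # the modelled microphone signal is the sum of both sources

In AIDA itself, the corresponding inverse problem (inferring the speech and noise sources and their time-varying coefficients from the summed observation) is solved by variational message passing on the factor graph of the generative model.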

Highlights

  • Hearing aids (HA) are often equipped with specialized noise reduction algorithms

  • This paper presents the Active Inference-based Design Agent (AIDA), an active inference design agent for novel situation-aware, personalized hearing aid algorithms

  • AIDA and the corresponding hearing aid algorithm are based on probabilistic generative models: a model of the user's appraisals and a model of the underlying speech and context-dependent background noise in the observed acoustic signal, respectively (a simplified sketch of the user model and trial proposal follows this list)
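
The sketch below roughly illustrates the user-model highlight above: binary user appraisals of HA tuning parameters are modelled with a Gaussian Process Classifier (scikit-learn's GaussianProcessClassifier is used here purely as a stand-in), and the next trial is the candidate setting that maximizes a simple sum of an epistemic term (predictive entropy) and a pragmatic term (predicted probability of a positive appraisal). This mimics the spirit of AIDA's Expected Free Energy criterion but is not the authors' message-passing implementation; the past-trial data and the parameter grid are made up for the example.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Past trials: two hypothetical tuning parameters per trial and a binary user appraisal.
X_tried = np.array([[0.1, 0.2], [0.8, 0.9], [0.4, 0.3], [0.9, 0.6],
                    [0.2, 0.7], [0.6, 0.5], [0.3, 0.9], [0.7, 0.2]])
y_appraisal = np.array([0, 1, 0, 1, 0, 1, 1, 0])   # 1 = user liked the setting

user_model = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.3))
user_model.fit(X_tried, y_appraisal)

# Candidate settings to consider proposing next (a 25 x 25 grid over the unit square).
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25)), axis=-1).reshape(-1, 2)
p_like = user_model.predict_proba(grid)[:, 1]

# Epistemic term: predictive entropy of the appraisal; pragmatic term: expected satisfaction.
entropy = -(p_like * np.log(p_like + 1e-12) + (1 - p_like) * np.log(1 - p_like + 1e-12))
score = entropy + p_like            # equal weights, purely for illustration

proposal = grid[np.argmax(score)]
print("Next proposed HA tuning parameters:", proposal)

The point of this trade-off is the intuition behind "the most interesting alternative" in the abstract: the agent proposes settings the user is likely to prefer, but about which the user model is still uncertain.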

Introduction

Hearing aids (HA) are often equipped with specialized noise reduction algorithms. These algorithms are developed by teams of engineers who aim to create a single optimal algorithm that suits any user in any situation. However, modeling all possible acoustic environments is infeasible: a single static HA algorithm cannot possibly account for all eventualities, even before taking into account the particular constraints imposed by the HA itself, such as limited computational power and allowable processing delays (Kates and Arehart, 2005). Moreover, hearing loss is highly personal and can differ significantly between users. Each HA user therefore requires their own, individually tuned HA algorithm that compensates for their unique hearing loss profile (Nielsen et al., 2015; van de Laar and de Vries, 2016; Alamdari et al., 2020) and satisfies their personal preferences for parameter settings.
