Abstract

The current practice of adjusting hearing aids (HAs) is tiring and time-consuming for both patients and audiologists. Among hearing-impaired people, 40–50% are not satisfied with their HAs. In addition, good HA designs are often avoided because the process of fitting them is exhausting. To improve the fitting process, an unsupervised machine learning (ML) approach is proposed to cluster pure-tone audiograms (PTAs). This work applies spectral clustering (SP) to group audiograms according to their similarity in shape. Different SP approaches are tested and evaluated using the Silhouette, Calinski-Harabasz, and Davies-Bouldin criteria. The Kutools for Excel add-in is used to generate a population of audiograms, which is annotated using the SP results, and the population clusters are evaluated with the same criteria. Finally, these clusters are mapped to a standard set of audiograms used in HA characterization. The results indicate that grouping the data into 8 or 10 clusters yields high evaluation-criterion values. The evaluation of the population audiogram clusters shows good performance, with a Silhouette coefficient >0.5. This work introduces a new concept for classifying audiograms with an ML algorithm according to their similarity in shape.

Highlights

  • Introduction and Motivation: The World Health Organization (WHO) estimates that by 2050, nearly 2.5 billion people will have some degree of hearing loss, at an annual global cost of US $980 billion [1]

  • Audiograms of similar shape at different levels can be realized by a group of filters by changing the gain coefficients of each filter or the overall gain of the cascaded filters. This classification will help hearing aid designers reduce the complexity of their filter designs and can be a good starting point for a future supervised learning algorithm that classifies audiograms according to these detected shapes
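The idea that one filter bank can serve audiograms of the same shape at different levels can be sketched as follows. This is a minimal illustration with made-up per-band gains (not values from the paper): shifting every band by one overall gain changes the level but leaves the shape, i.e. the band-to-band differences, untouched.

```python
import numpy as np

# Hypothetical per-band gains (dB) approximating one audiogram shape
# at standard audiometric frequencies (Hz) -- illustrative values only.
freqs = np.array([250, 500, 1000, 2000, 4000, 8000])
base_gains_db = np.array([10, 15, 20, 30, 40, 45])

def shift_level(gains_db, overall_gain_db):
    """Same shape, different level: add one overall gain to every band."""
    return gains_db + overall_gain_db

milder = shift_level(base_gains_db, -10)

# The shape (band-to-band differences) is unchanged:
assert np.array_equal(np.diff(milder), np.diff(base_gains_db))
```

Under this view, each cluster of similarly shaped audiograms needs only one filter design plus a per-patient overall gain, which is the complexity reduction the highlight refers to.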

  • The authors selected the two cluster counts with the highest Silhouette coefficients for further evaluation, comparing them using the Silhouette coefficient, the Calinski-Harabasz criterion, and the Davies-Bouldin criterion. They then generated an audiogram population, annotated it according to the produced clusters, and evaluated those clusters with the same three criteria
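The selection procedure above can be sketched with scikit-learn. This is a hedged illustration, not the paper's implementation: the "audiograms" here are synthetic thresholds drawn around three placeholder prototype shapes, and the affinity choice (`nearest_neighbors`) is an assumption.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

rng = np.random.default_rng(0)
# Synthetic "audiograms": thresholds (dB HL) at 6 frequencies, drawn
# around three prototype shapes -- placeholder data, not the paper's set.
prototypes = np.array([[10, 10, 15, 20, 25, 30],
                       [20, 30, 40, 50, 60, 70],
                       [50, 50, 45, 40, 35, 30]])
X = np.vstack([p + rng.normal(0, 3, size=(40, 6)) for p in prototypes])

scores = {}
for k in range(2, 7):
    labels = SpectralClustering(n_clusters=k, affinity="nearest_neighbors",
                                random_state=0).fit_predict(X)
    scores[k] = (silhouette_score(X, labels),           # higher is better
                 calinski_harabasz_score(X, labels),    # higher is better
                 davies_bouldin_score(X, labels))       # lower is better

# Keep the two cluster counts with the highest Silhouette coefficients
top_two = sorted(scores, key=lambda k: scores[k][0], reverse=True)[:2]
```

With three well-separated prototypes, the highest Silhouette coefficient lands on k = 3; the paper's data instead favored 8 and 10 clusters.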


Summary

Related Work

In 2016, Rahne et al. built an Excel sheet as an audiogram classifier with pre-set inputs that can be defined according to the inclusion criteria of a clinical trial. The clustered data were prepared to serve as a good training set for supervised machine learning classifiers. In 2020, the same group used this data-preparation procedure to produce a machine learning classifier. They applied supervised ML to 270 audiograms annotated by three experts in the field. Image rotation, warping, contrast, lighting, and zoom augmentations were applied to the audiogram images in the training set, and their model achieved 97.5% accuracy in classifying hearing loss types based on feature extraction from the audiograms [13]. Another approach is a two-step audiogram classifier: the first step is unsupervised learning to cluster audiograms into 4 pre-set configurations for a hearing aid. The audiograms are then classified to categorize hearing loss into 4 classes: normal hearing, sensorineural, conductive, and mixed hearing loss

Limitations
Data Clustering Algorithm
Algorithm Description
Clustering Implementation
Clustering Performance Evaluation
First Data Set
Second Data Set
Results and Discussion
Finding the Optimum Number of Clusters
Eight Clusters Evaluation Criteria
Ten Clusters Evaluation Criteria
Mapping Bisgaard Standard Levels to the Implemented Clusters
Summary and Conclusions
Clusters Stage 2