Abstract

The potential of using a speaker as a sensor to detect ear canal conditions was demonstrated previously. This work continues that effort, using a single speaker as a sensor by measuring its electrical impedance under varying acoustic loads. Electrical impedance data (magnitude and phase) from six acoustic load conditions were collected as features for machine learning (ML) model training. To improve learning performance, the data were pre-processed with normalization and augmented with level shifting. The raw data were then converted to images so that acoustic loads could be classified from the impedance measurements with image-based models. Several image formats were explored, such as magnitude only, overlapped magnitude and phase, and a rectangular form. A total of 2100 samples (350 per class) were used to train CNN-based state-of-the-art (SOTA) models, including AlexNet, ResNet, and DenseNet. Both binary and multiclass classification were performed, achieving 0.9716 average accuracy and 0.907 accuracy, respectively. This single-speaker approach, using electrical impedance as ML features, offers a promising alternative to traditional acoustic sensing research by leveraging machine learning.
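The pre-processing pipeline described above (normalization of the impedance curves, then level-shifting to augment the training set) can be sketched as follows. This is a minimal illustration, not the authors' implementation: min-max normalization and the additive shift values are assumptions, and the function names are hypothetical.

```python
import numpy as np

def normalize(curve):
    """Min-max normalize an impedance curve (magnitude or phase) to [0, 1]."""
    lo, hi = curve.min(), curve.max()
    return (curve - lo) / (hi - lo)

def level_shift_augment(curve, shifts=(-0.05, 0.0, 0.05)):
    """Augment a normalized curve by adding small constant level shifts.

    Each shift produces one augmented copy; the shift magnitudes here
    are illustrative placeholders, not values from the paper.
    """
    return [curve + s for s in shifts]

# Hypothetical impedance-magnitude sweep over 128 frequency points
mag = np.abs(np.sin(np.linspace(0.0, 3.0, 128))) + 1.0
norm = normalize(mag)
augmented = level_shift_augment(norm)  # three level-shifted copies
```

Each augmented curve (or a magnitude/phase pair) would then be rendered as an image, e.g. a plotted line or a rectangular array, before being fed to the CNN classifiers.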
