A Nanobots-based Methodology for Assessing the Acoustic Signal's Attenuation in the Auditory System's Pathway

Maria Kalogeropoulou1, Panagiotis Katrakazas2*, Kostas Giokas2 and Dimitrios-Dionysios Koutsouris2

1 National Technical University of Athens, School of Electrical and Computer Engineering, Greece
2 National Technical University of Athens, School of Electrical and Computer Engineering, Greece

Introduction: Our auditory system converts acoustic information from low-level sensory representations into perceptual representations. A full understanding of the mechanisms responsible for auditory streaming requires a direct comparison of neural activity with behavioral reports of auditory perception [1]. In hearing loss, however, the pathways carrying auditory information from the cochlea to the auditory cortex are altered as a result of blockage of, or damage to, peripheral auditory structures [2]. Since it is important to determine the degree of hearing loss and how it depends on the acoustics of the stimuli, experience, or task demands, we propose a methodology for assessing the attenuation of the acoustic signal by placing several nanobot agents along the signal's pathway and determining the acoustic reflex.

Proposed methodology: The acoustic reflex is a contraction of the middle-ear muscles induced by an intense auditory stimulus, so in a normal hearing system stimulation on either the ipsi- or the contralateral side should result in bilateral muscle contraction. In cases of sensorineural hearing loss, however, the acoustic reflex threshold has been shown to be reduced by at least two different factors: the degree of synchrony of neural activity across frequency, and the fast-acting compression mechanism in the cochlea [3]. In our study, we aim to evaluate via simulation the attenuation of the acoustic signal inside the ear canal, where the signal is still carried as "sound".
This will be realized via four (4) micro-sensors placed on the facial nerve (CN7) and the vestibulocochlear nerve (CN8), which will acquire the corresponding electrical signals in order to identify the problem, in coordination with the nanobot agents. If a nerve is found to be problematic, a simple search method (e.g. the bisection method) can be applied to narrow down the subinterval in which the attenuation occurs; the nanobot agents will then move along the nerve to locate the problematic area. The setup will be implemented in computer-aided engineering software (e.g. SolidWorks) using the detailed anatomical model of the human head, MIDA v1.0 [4]. The navigation algorithm used by the nanobots is described in our previous work [5], where acoustic waves are used so as to minimally affect brain tissue. The anatomical parameters for the CN7 and CN8 nerves are taken from [6]; they define the design parameters of the nanobot agents as well as their positions along the nerves and the tympanic membrane. To verify the acoustic signal attenuation, probe microphone measurements (PMM) will first be performed to assess the signal level a hearing-loss patient receives near his/her tympanic membrane. The purpose of PMM is to ensure that a person with hearing loss receives appropriate gain, with hearing thresholds converted from the audiogram measurement (dB HL) to the measurement of hearing aid output (dB SPL) [7]. That signal will also be received by the nanobots along the nerves and used as a reference signal, while at the same time they record the signal's path along the nerves. The difference between a real-time measurement of the signal and the reference will give the level of attenuation that occurs throughout the acoustic path.
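The bisection step described above can be sketched as follows. The `signal_ok` predicate is hypothetical: it stands in for the reading the nanobot agents would report at a given position along the nerve, and the stubbed lesion position (`lesion_at_mm`) is for illustration only.

```python
# Hypothetical predicate: True if the nanobot agents measure an
# unattenuated ("healthy") signal at pos_mm millimetres along the nerve.
# In the proposed setup this reading would come from the CN7/CN8
# micro-sensors; here it is stubbed with a fixed lesion at 14 mm.
def signal_ok(pos_mm, lesion_at_mm=14.0):
    return pos_mm < lesion_at_mm

def locate_attenuation(length_mm, tol_mm=0.5):
    """Bisection search for the point where the signal degrades.

    Assumes the signal is intact at the proximal end (0 mm) and
    attenuated at the distal end (length_mm), so the onset of
    attenuation is bracketed by [lo, hi] throughout the search.
    """
    lo, hi = 0.0, length_mm
    while hi - lo > tol_mm:
        mid = (lo + hi) / 2.0
        if signal_ok(mid):
            lo = mid  # still healthy here: lesion lies further along
        else:
            hi = mid  # already attenuated: lesion lies before mid
    return (lo + hi) / 2.0

print(round(locate_attenuation(25.0), 2))  # prints 13.87
```

Each iteration halves the search interval, so the problematic area is localized to within `tol_mm` in O(log(length/tol)) measurements rather than a linear sweep of the nerve.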
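The dB HL to dB SPL conversion used in PMM amounts to adding a frequency-dependent correction to each audiogram threshold. A minimal sketch, assuming illustrative correction values: the entries in `RETSPL_DB` below are placeholders, since the real reference levels depend on the transducer and the applicable standard (e.g. ANSI S3.6).

```python
# Illustrative frequency-dependent corrections (dB) from audiometric
# zero (dB HL) to sound pressure level (dB SPL). Placeholder values;
# actual figures depend on the earphone type and the relevant standard.
RETSPL_DB = {250: 25.5, 500: 11.5, 1000: 7.0, 2000: 9.0, 4000: 9.5}

def hl_to_spl(freq_hz, level_db_hl):
    """Convert an audiogram threshold (dB HL) to an output level (dB SPL)."""
    return level_db_hl + RETSPL_DB[freq_hz]

print(hl_to_spl(1000, 40))  # prints 47.0
```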
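The final comparison step, taking the difference between the real-time measurement and the PMM-derived reference, can be expressed as a level ratio in dB. The `rms` and `attenuation_db` helpers below are our own illustrative names, not part of any existing tool:

```python
import math

def rms(samples):
    """Root-mean-square level of a sampled signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def attenuation_db(reference, measured):
    """Attenuation (dB) of the measured signal relative to the reference."""
    return 20.0 * math.log10(rms(reference) / rms(measured))

# Toy example: a 1 kHz tone sampled at 48 kHz; the "measured" signal is
# the reference scaled by 0.5, i.e. about 6 dB of attenuation.
ref = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(480)]
meas = [0.5 * s for s in ref]
print(round(attenuation_db(ref, meas), 2))  # prints 6.02
```

A positive result means the signal lost energy along the path; repeating the computation at each nanobot position would yield the attenuation profile along the nerve.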
This will give us better insight into where the hearing loss occurs and a more holistic view of the acoustic signal's path, so that we can better evaluate the level of hearing loss and adjust the hearing aid, if necessary, to enhance the user's acoustic experience.

Figure 1

Acknowledgements: This project has received funding from the European Union's Horizon 2020 MSCA RISE Action PROPHETIC, under grant agreement No 644704.