Abstract

Background: Neural coding of sound information is often studied through frequency tuning curves (FTC), spectro-temporal receptive fields (STRF), post-stimulus time histograms (PSTH), and other methods such as rate functions. Despite providing a robust characterization of auditory responses in their specific domains, these methods lack a complete description in terms of the three sound fundamentals: frequency, amplitude, and time.

New Method: Using techniques from electrophysiology, neural signal processing, and medical image processing, a standalone method was created to illustrate the neural processing of the three sound fundamentals in one representation.

Results: The new method comprehensively showed frequency, intensity, and time tuning, as well as a novel representation of frequency- and time-dependent intensity coding. It provides most of the parameters used to quantify neural response properties, such as minimum threshold (MT), frequency tuning, latency, best frequency (BF), characteristic frequency (CF), and bandwidth (BW).

Comparison with Existing Methods: Our method shows neural responses as a function of all three sound fundamentals in a single representation, which was not possible with previous methods. It covers many functions of conventional methods and allows extracting novel information, such as intensity coding as a function of the spectrotemporal response area of auditory neurons.

Conclusion: This method can be used as a standalone package to study auditory neural responses, to evaluate the performance of hearing-related devices such as cochlear implants and hearing aids in animal models, and to study and compare auditory processing in aged and hearing-impaired animal models.
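The abstract describes a single representation spanning frequency, intensity, and time, from which conventional parameters (BF, MT, latency) can be read out. The sketch below is not the authors' implementation; it only illustrates, on synthetic spike data, how tone-burst responses could be binned into a frequency x intensity x time array and how such parameters might be estimated from it. All variable names, the stimulus grid, and the response criterion are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method): build a combined
# frequency x intensity x time spike-count representation and derive
# best frequency (BF), minimum threshold (MT), and a crude latency.
import numpy as np

rng = np.random.default_rng(0)

# Assumed stimulus grid: tone frequencies (kHz), sound levels (dB SPL),
# and a 0-50 ms post-stimulus window in 1-ms bins.
freqs_khz = np.array([2, 4, 8, 16, 32])
levels_db = np.array([10, 30, 50, 70])
time_bins_ms = np.arange(0, 51)
n_trials = 20

# Synthetic spike times per (frequency, level) trial, standing in for
# recorded single-unit data; a real pipeline would load these from disk.
def fake_spike_times(f_idx, l_idx):
    rate = max(0.0, 5 - abs(f_idx - 2)) * (l_idx + 1)   # tuned around 8 kHz
    n = rng.poisson(rate)
    return rng.uniform(8, 30, size=n)                   # spikes at 8-30 ms

# 3-D response representation: frequency x level x time bin,
# trial-averaged spike counts.
resp = np.zeros((len(freqs_khz), len(levels_db), len(time_bins_ms) - 1))
for fi in range(len(freqs_khz)):
    for li in range(len(levels_db)):
        for _ in range(n_trials):
            counts, _ = np.histogram(fake_spike_times(fi, li), bins=time_bins_ms)
            resp[fi, li] += counts
resp /= n_trials

# Example parameter estimates from the combined representation.
rate_fl = resp.sum(axis=2)                               # spikes/trial per (freq, level)
bf_idx = int(np.argmax(rate_fl.max(axis=1)))
bf = freqs_khz[bf_idx]                                   # best frequency
psth_at_bf = resp[bf_idx].sum(axis=0)                    # PSTH collapsed over level
latency_ms = time_bins_ms[int(np.argmax(psth_at_bf > 0))]  # first responsive bin
driven = rate_fl > (rate_fl.mean() + 2 * rate_fl.std())  # crude response criterion
mt = levels_db[int(np.argmax(driven.any(axis=0)))] if driven.any() else None
print(f"BF ~ {bf} kHz, MT ~ {mt} dB SPL, latency ~ {latency_ms} ms")
```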
